Dataset schema (field: type, length range or number of classes):

note_id: stringlengths 9-12
forum_id: stringlengths 9-13
invitation: stringlengths 40-95
content: stringlengths 44-35k
type: stringclasses, 1 value
year: stringclasses, 7 values
venue: stringclasses, 171 values
paper_title: stringlengths 0-188
paper_authors: stringlengths 4-1.01k
paper_abstract: stringlengths 0-5k
paper_keywords: stringlengths 2-679
forum_url: stringlengths 41-45
pdf_url: stringlengths 39-43
review_url: stringlengths 58-64
raw_ocr_text: stringlengths 4-631k
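For orientation, below is a minimal sketch of how one record of this schema might be consumed. The dictionary-style access and the helper name `parse_record` are assumptions based on the field listing above; the `content` column stores the OpenReview note (title, rating, review, confidence) as a JSON string, as seen in the records that follow.

```python
import json

# Minimal sketch, assuming each record is exposed as a dict with the fields listed above.
# The `content` field holds the review note (title, rating, review, confidence) as JSON.
def parse_record(record):
    note = json.loads(record["content"])
    return {
        "note_id": record["note_id"],
        "forum_id": record["forum_id"],
        "year": record["year"],
        "venue": record["venue"],
        "paper_title": record["paper_title"],
        "rating": note.get("rating"),        # e.g. "4: Ok but not good enough - rejection"
        "confidence": note.get("confidence"),
        "review_text": note.get("review"),
        "review_url": record["review_url"],
    }
```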
HJoIDBvEg
SyCSsUDee
ICLR.cc/2017/conference/-/paper44/official/review
{"title": "", "rating": "4: Ok but not good enough - rejection", "review": "The paper introduces supervised deep learning with layer-wise reconstruction loss (in addition to the supervised loss) and class-conditional semantic additive noise for better representation learning. Total correlation measure and additional insights from auto-encoder are used to derive layer-wise reconstruction loss and is further combined with supervised loss. When combining with supervised loss the class-conditional additive noise model is proposed, which showed consistent improvement over the baseline model. Experiments on MNIST and CIFAR-10 datasets while changing the number of training examples per class are done extensively.\n\nThe derivation of Equation (3) from total correlation is hacky. Moreover, assuming graphical model between X, Y and Z, it should be more carefully derived to estimate H(X|Z) and H(Z|Y). The current proposal, encoding Z and Y from X and decoding from encoded representation is not really well justified.\n\nIs \\sigma in Equation 8 trainable parameter or hyperparameter? If it is trainable how it is trained? If it is not, how are they set? Does j correspond to one of the class? The proposed feature augmentation sounds like simply adding gaussian noise to the pre-softmax neurons. That being said, the proposed method is not different from gaussian dropout (Wang and Manning, ICML 2013) but applied on different layers. In addition, there is a missing reference (DisturbLabel: Regularizing CNN on the Loss Layer, CVPR 2016) that applied synthetic noise process on the loss layer.\n\nExperiments should be done for multiple times with different random subsets and authors should provide mean and standard error. Overall, I believe the proposed method is not very well justified and has limited novelty. ", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Semantic Noise Modeling for Better Representation Learning
["Hyo-Eun Kim", "Sangheum Hwang", "Kyunghyun Cho"]
Latent representation learned from multi-layered neural networks via hierarchical feature abstraction enables recent success of deep learning. Under the deep learning framework, generalization performance highly depends on the learned latent representation. In this work, we propose a novel latent space modeling method to learn better latent representation. We designed a neural network model based on the assumption that good base representation for supervised tasks can be attained by maximizing the sum of hierarchical mutual informations between the input, latent, and output variables. From this base model, we introduce a semantic noise modeling method which enables semantic perturbation on the latent space to enhance the representational power of learned latent feature. During training, latent vector representation can be stochastically perturbed by a modeled additive noise while preserving its original semantics. It implicitly brings the effect of semantic augmentation on the latent space. The proposed model can be easily learned by back-propagation with common gradient-based optimization algorithms. Experimental results show that the proposed method helps to achieve performance benefits against various previous approaches. We also provide the empirical analyses for the proposed latent space modeling method including t-SNE visualization.
["Deep learning", "Supervised Learning"]
https://openreview.net/forum?id=SyCSsUDee
https://openreview.net/pdf?id=SyCSsUDee
https://openreview.net/forum?id=SyCSsUDee&noteId=HJoIDBvEg
Under review as a conference paper at ICLR 2017SEMANTIC NOISE MODELING FORBETTER REPRESENTATION LEARNINGHyo-Eun Kimand Sangheum HwangLunit Inc.Seoul, South Koreafhekim, shwang g@lunit.ioKyunghyun ChoCourant Institute of Mathematical Sciences and Centre for Data ScienceNew York UniversityNew York, NY 10012, USAkyunghyun.cho@nyu.eduABSTRACTLatent representation learned from multi-layered neural networks via hierarchicalfeature abstraction enables recent success of deep learning. Under the deep learn-ing framework, generalization performance highly depends on the learned latentrepresentation. In this work, we propose a novel latent space modeling method tolearn better latent representation. We designed a neural network model based onthe assumption that good base representation for supervised tasks can be attainedby maximizing the sum of hierarchical mutual informations between the input,latent, and output variables. From this base model, we introduce a semantic noisemodeling method which enables semantic perturbation on the latent space to en-hance the representational power of learned latent feature. During training, latentvector representation can be stochastically perturbed by a modeled additive noisewhile preserving its original semantics. It implicitly brings the effect of semanticaugmentation on the latent space. The proposed model can be easily learned byback-propagation with common gradient-based optimization algorithms. Experi-mental results show that the proposed method helps to achieve performance ben-efits against various previous approaches. We also provide the empirical analysesfor the proposed latent space modeling method including t-SNE visualization.1 I NTRODUCTIONEnhancing the generalization performance against unseen data given some sample data is the mainobjective in machine learning. Under that point of view, deep learning has been achieved manybreakthroughs in several domains such as computer vision (Krizhevsky et al., 2012; Simonyan &Zisserman, 2015; He et al., 2016), natural language processing (Collobert & Weston, 2008; Bah-danau et al., 2015), and speech recognition (Hinton et al., 2012; Graves et al., 2013). Deep learningis basically realized on deep layered neural network architecture, and it learns appropriate task-specific latent representation based on given training data. Better latent representation learned fromtraining data results in better generalization over the future unseen data. Representation learningor latent space modeling becomes one of the key research topics in deep learning. During the pastdecade, researchers focused on unsupervised representation learning and achieved several remark-able landmarks on deep learning history (Vincent et al., 2010; Hinton et al., 2006; Salakhutdinov &Hinton, 2009). In terms of utilizing good base features for supervised learning, the base representa-tion learned from unsupervised learning can be a good solution for supervised tasks (Bengio et al.,2007; Masci et al., 2011).The definition of ‘good’ representation is, however, different according to target tasks. In unsuper-vised learning, a model is learned from unlabelled examples. 
Its main objective is to build a modelCorresponding author1Under review as a conference paper at ICLR 2017to estimate true data distribution given examples available for training, so the learned latent rep-resentation normally includes broadly-informative components of the raw input data (e.g., mutualinformation between the input and the latent variable can be maximized for this objective). In su-pervised learning, however, a model is learned from labelled examples. In the case of classification,a supervised model learns to discriminate input data in terms of the target task using correspond-ing labels. Latent representation is therefore obtained to maximize the performance on the targetsupervised tasks.Since the meaning of good representations vary according to target tasks (unsupervised or super-vised), pre-trained features from the unsupervised model are not be guaranteed to be useful forsubsequent supervised tasks. Instead of the two stage learning strategy (unsupervised pre-trainingfollowed by supervised fine-tuning), several works focused on a joint learning model which opti-mizes unsupervised and supervised objectives concurrently, resulting in better generalization per-formance (Goodfellow et al., 2013; Larochelle & Bengio, 2008a; Rasmus et al., 2015; Zhao et al.,2015; Zhang et al., 2016; Cho & Chen, 2014).In this work, we propose a novel latent space modeling method for supervised learning as an exten-sion of the joint learning approach. We define a good latent representation of standard feed-forwardneural networks under the basis of information theory. Then, we introduce a semantic noise model-ingmethod in order to enhance the generalization performance. The proposed method stochasticallyperturbs the latent representation of a training example by injecting a modeled semantic additivenoise. Since the additive noise is randomly sampled from a pre-defined probability distribution ev-ery training iteration, different latent vectors from a single training example can be fully utilizedduring training. The multiple different latent vectors produced from a single training example aresemantically similar under the proposed latent space modeling method, so we can expect semanticaugmentation effect on the latent space.Experiments are performed on two datasets; MNIST and CIFAR-10. The proposed model results inbetter classification performance compared to previous approaches through notable generalizationeffect (stochastically perturbed training examples well cover the distribution of unseen data).2 M ETHODOLOGYThe proposed method starts from the existing joint learning viewpoint. This section first explainsthe process of obtaining a good base representation for supervised learning which is the basis of theproposed latent space modeling method. And then, we will describe how the proposed semanticnoise modeling method perturbs the latent space while maintaining the original semantics.2.1 B ASE JOINT LEARNING MODELIn a traditional feed-forward neural network model (Figure 1(a)), output Yof input data Xis com-pared with its true label, and the error is propagated backward from top to bottom, which implicitlylearns a task-specific latent representation Zof the input X. 
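To make the objectives that follow concrete, here is a minimal sketch of illustrative encoder and decoder modules for the permutation-invariant MNIST setting (784-512-256-256-128-10 fully-connected layers, as later described in Section 4.2). The module names (f1, f2, g1_dec, g2_dec), the split of f1/f2 at the 128-dimensional layer, the ReLU activations, and the use of untied decoder weights are simplifying assumptions; the paper ties decoder weights to the encoder and its implementation used TensorFlow.

```python
import torch.nn as nn

# Illustrative modules only; layer sizes follow the permutation-invariant MNIST description,
# but the f1/f2 split, activations, and untied decoder weights are assumptions.
f1 = nn.Sequential(                      # encoder: x -> z (latent representation)
    nn.Linear(784, 512), nn.ReLU(),
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
)
f2 = nn.Linear(128, 10)                  # classifier: z -> y (pre-softmax logits)
g1_dec = nn.Sequential(                  # decoding path: z -> x_R (input reconstruction)
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 512), nn.ReLU(),
    nn.Linear(512, 784),
)
g2_dec = nn.Linear(10, 128)              # decoding path: y -> z_R (latent reconstruction)
```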
As an extension of the joint learning approach, the objective to be optimized can be described in general as below (Larochelle & Bengio, 2008b):

$$\min_\theta\ \lambda\,\mathcal{L}_{unsup}(\theta) + \mathcal{L}_{sup}(\theta) \qquad (1)$$

where $\mathcal{L}_{unsup}$ and $\mathcal{L}_{sup}$ are respectively an unsupervised loss and a supervised loss, and $\theta$ and $\lambda$ are the model parameters to be optimized during training and a loss weighting coefficient, respectively.

In terms of modeling $\mathcal{L}_{unsup}$ in Eq. (1), we assume that a good latent representation $Z$ is attained by maximizing the sum of hierarchical mutual informations between the input, latent, and output variables, i.e. the sum of the mutual information between the input $X$ and $Z$ and the mutual information between $Z$ and the output $Y$. Each mutual information decomposes into an entropy and a conditional entropy term, so the sum of hierarchical mutual informations is expressed as follows:

$$I(X;Z) + I(Z;Y) = H(X) - H(X|Z) + H(Z) - H(Z|Y) \qquad (2)$$

Figure 1: (a) Standard feed-forward neural network model, (b) feed-forward neural network model with reconstruction paths, and (c) feed-forward neural network model with reconstruction and stochastic perturbation paths.

where $I(\cdot;\cdot)$ is the mutual information between random variables, and $H(\cdot)$ and $H(\cdot|\cdot)$ are the entropy and the conditional entropy of random variables, respectively. Note that the sum of those mutual informations becomes equivalent to the total correlation of $X$, $Z$, and $Y$ under the graphical structure of the general feed-forward model described in Figure 1(a): $P(X,Z,Y) = P(Y|Z)P(Z|X)P(X)$. The total correlation is equal to the sum of all pairwise mutual informations (Watanabe, 1960).

Our objective is to find the model parameters which maximize $I(X;Z) + I(Z;Y)$. Since $H(X)$ and $H(Z)$ are non-negative, and $H(X)$ is constant in this case, the lower bound on $I(X;Z) + I(Z;Y)$ can be reduced to the following (footnote 1):

$$I(X;Z) + I(Z;Y) \geq -H(X|Z) - H(Z|Y) \qquad (3)$$

It is known that maximizing $-H(X|Z)$ can be formulated as minimizing the reconstruction error between the input $x^{(i)}$ (the $i$-th example sampled from $X$) and its reconstruction $x_R^{(i)}$ under the general auto-encoder framework (Vincent et al., 2010). Since $H(X|Z) + H(Z|Y)$ is proportional to the sum of reconstruction errors of $x^{(i)}$ (with its reconstruction $x_R^{(i)}$) and $z^{(i)}$ (with its reconstruction $z_R^{(i)}$), the target objective can be expressed as follows (refer to Appendix (A1) for the details of the mathematical derivation):

$$\min_\theta \sum_i \mathcal{L}_{rec}(x^{(i)}, x_R^{(i)}) + \mathcal{L}_{rec}(z^{(i)}, z_R^{(i)}) \qquad (4)$$

where $\mathcal{L}_{rec}$ is a reconstruction loss.

Figure 1(b) shows the target model obtained from the assumption that a good latent representation $Z$ can be obtained by maximizing the sum of hierarchical mutual informations. Given an input sample $x$, the feed-forward vectors and their reconstructions are attained deterministically by:

$$z = f_1(x), \quad y = f_2(f_1(x)), \quad x_R = g'_1(z) = g'_1(f_1(x)), \quad z_R = g'_2(y) = g'_2(f_2(f_1(x))) \qquad (5)$$

Footnote 1: Although $H(Z)$ is an upper bound of $H(Z|Y)$, $H(Z)$ is anyway affected by the process of $H(Z|Y)$ being minimized in Eq. (3). In Section 4, we experimentally show that we can obtain a good base model even from the relatively loose lower bound defined in Eq. (3).

Given a set of training pairs $(x^{(i)}, t^{(i)})$, where $x^{(i)}$ and $t^{(i)}$ are the $i$-th input example and its label, the target objective in Eq. (1) under the model described in Figure 1(b) can be organized as below (with real-valued input samples, the L2 loss $\mathcal{L}_{L2}$ is a proper choice for the reconstruction loss $\mathcal{L}_{rec}$):

$$\min_{\theta:\{\theta_1, \theta'_1, \theta_2, \theta'_2\}} \sum_i \lambda\left(\mathcal{L}_{L2}(x^{(i)}, x_R^{(i)}) + \mathcal{L}_{L2}(z^{(i)}, z_R^{(i)})\right) + \mathcal{L}_{NLL}(y^{(i)}, t^{(i)}) \qquad (6)$$

where $\mathcal{L}_{NLL}$ is a negative log-likelihood loss for the target supervised task. Note that Eq. (6) represents the 'proposed-base' model in our experiments (see Section 4.3).
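As a concrete illustration of Eq. (6), the sketch below computes the 'proposed-base' loss with the illustrative modules defined earlier. The function name and the PyTorch-style calls are assumptions (the authors' implementation used TensorFlow); `lam` corresponds to the reconstruction-loss weight, which Section 4.2 sets to 0.03 for MNIST and 0.01 for CIFAR-10.

```python
import torch.nn.functional as F

# Minimal sketch of the 'proposed-base' objective in Eq. (6); illustrative only.
# lam weights the two reconstruction terms, as in the implementation details of Section 4.2.
def proposed_base_loss(x, t, f1, f2, g1_dec, g2_dec, lam=0.03):
    z = f1(x)                     # latent representation
    y = f2(z)                     # pre-softmax logits
    x_rec = g1_dec(z)             # reconstruction of the input
    z_rec = g2_dec(y)             # reconstruction of the latent vector
    rec = F.mse_loss(x_rec, x) + F.mse_loss(z_rec, z)   # L2 reconstruction losses
    sup = F.cross_entropy(y, t)                         # negative log-likelihood loss
    return lam * rec + sup
```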
2.2 Semantic Noise Modeling

Based on the architecture shown in Figure 1(b) with the target objective in Eq. (6), we conjecture that stochastic perturbation on the latent space during training helps to achieve better generalization performance for supervised tasks. Figure 1(c) shows this strategy, which integrates the stochastic perturbation process during training. Suppose that $Z_P$ is a perturbed version of $Z$, and $Y_P$ is the output feed-forwarded from $Z_P$. Given a latent vector $z = f_1(x)$ from an input sample $x$,

$$z' = z + z_e \quad \text{and} \quad \hat{y} = f_2(z') \qquad (7)$$

where $z'$ and $\hat{y}$ are a perturbed latent vector and its output respectively, and $z_e$ is an additive noise used in the perturbation process of $z$. Based on the architecture shown in Figure 1(c), the target objective can be modified as:

$$\min_{\theta:\{\theta_1, \theta'_1, \theta_2, \theta'_2\}} \sum_i \lambda_1\left(\mathcal{L}_{L2}(x^{(i)}, x_R^{(i)}) + \mathcal{L}_{L2}(z^{(i)}, z_R^{(i)})\right) + \lambda_2\left(\mathcal{L}_{NLL}(y^{(i)}, t^{(i)}) + \mathcal{L}_{NLL}(\hat{y}^{(i)}, t^{(i)})\right) \qquad (8)$$

Using random additive noise directly as $z_e$ is the most intuitive approach ('proposed-perturb (random)' in Section 4.3). However, preserving the semantics of the original latent representation $z$ cannot be guaranteed under direct random perturbation on the latent space. While the latent space is not directly interpretable in general, the output logit $y$ of the latent representation $z$ is interpretable, because the output logit is tightly coupled to the prediction of the target label. In order to preserve the semantics of the original latent representation after perturbation, we indirectly model a semantic noise on the latent space by adding small random noise directly on the output space.

Based on the output (pre-softmax) logit $y$, the semantic-preserving variation of $y$ (i.e. $y'$) can be modeled by $y' = y + y_e$, where $y_e$ is a random noise vector stochastically sampled from a zero-mean Gaussian with small standard deviation $\sigma$, $\mathcal{N}(0, \sigma^2 I)$. Now, the semantic perturbation $z'$ can be reconstructed from the random perturbation $y'$ through the decoding path $g'_2$ in Figure 1(c). From the original output logit $y$ and the randomly perturbed output logit $y'$, the semantic additive noise $z_e$ on the latent space can be approximately modeled as below:

$$z_R = g'_2(y), \quad z'_R = g'_2(y') = g'_2(y + y_e), \quad z_e \simeq z'_R - z_R = g'_2(y + y_e) - g'_2(y) \qquad (9)$$

By using the modeled semantic additive noise $z_e$ and the original latent representation $z$, we can obtain the semantic perturbation $z'$ as well as its output $\hat{y}$ via Eq. (7) for our target objective in Eq. (8). From the described semantic noise modeling process ('proposed-perturb (semantic)' in Section 4.3), we expect to achieve a better representation on the latent space. The effect of the proposed model in terms of the learned latent representation is explained in more detail in Section 4.4.

Figure 2: Previous works for supervised learning; (a) traditional feed-forward model, and (b) joint learning model with both supervised and unsupervised losses.

3 Related Works

Previous works on deep neural networks for supervised learning can be categorized into two types as shown in Figure 2: (a) a general feed-forward neural network model (LeCun et al., 1998; Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016), and (b) a joint learning model which optimizes unsupervised and supervised objectives at the same time (Zhao et al., 2015; Zhang et al., 2016; Cho & Chen, 2014).
Here are the corresponding objective functions:min:f1;2gXiLNLL(y(i);t(i)) (10)min:f1;01;2gXiLL2(x(i);x(i)R) +LNLL(y(i);t(i)) (11)whereis a loss weighting coefficient between unsupervised and supervised losses.Since the feed-forward neural network model is normally implemented with multiple layers in adeep learning framework, the joint learning model can be sub-classified into two types according tothe type of reconstruction; reconstruction only with the input data x(Eq. (11)) and reconstructionwith all the intermediate features including the input data xas follows:minXi0@0LL2(x(i);x(i)R) +XjjLL2(h(i)j;h(i)jR) +LNLL(y(i);t(i))1A: (12)whereh(i)jandh(i)jRare thej-th hidden representation of the i-th training example and its reconstruc-tion.Another type of the joint learning model, a ladder network (Figure 3), was introduced for semi-supervised learning (Rasmus et al., 2015). The key concept of the ladder network is to obtainrobust features by learning de-noising functions ( g0) of the representations at every layer of themodel via reconstruction losses, and the supervised loss is combined with the reconstruction lossesin order to build the semi-supervised model. The ladder network achieved the best performance insemi-supervised tasks, but it is not appropriate for supervised tasks with small-scale training set (ex-perimental analysis for supervised learning on permutation-invariant MNIST is briefly summarized+ noise + noise Figure 3: Ladder network; a representative model for semi-supervised learning (Rasmus et al.,2015).5Under review as a conference paper at ICLR 2017in Appendix (A2)). The proposed model in this work can be extended to semi-supervised learning,but our main focus is to enhance the representational power on latent space given labelled data forsupervised learning. We leave the study for semi-supervised learning scenario based on the proposedmethodology as our future research.4 E XPERIMENTSFor quantitative analysis, we compare the proposed methodology with previous approaches de-scribed in Section 3; a traditional feed-forward supervised learning model and a joint learning modelwith two different types of reconstruction losses (reconstruction only with the first layer or with allthe intermediate layers including the first layer). The proposed methodology includes a baselinemodel in Figure 1(b) as well as a stochastic perturbation model in Figure 1(c). Especially in thestochastic perturbation model, we compare the random and semantic perturbations and present somequalitative analysis on the meaning of the proposed perturbation methodology.4.1 D ATASETSWe experiment with two public datasets; MNIST (including a permutation-invariant MNIST case)and CIFAR-10. MNIST (10 classes) consists of 50k, 10k, and 10k 28 28 gray-scale images fortraining, validation, and test datasets, respectively. CIFAR-10 (10 classes) consists of 50k and 10k3232 3-channel images for training and test sets, respectively. We split the 50k CIFAR-10 trainingimages into 40k and 10k for training and validation. Experiments are performed with differentsizes of training set (from 10 examples per class to the entire training set) in order to verify theeffectiveness of the proposed model in terms of generalization performance under varying sizes oftraining set.4.2 I MPLEMENTATIONFigure 4 shows the architecture of the neural network model used in this experiment. W’s areconvolution or fully-connected weights (biases are excluded for visual brevity). 
Three convolution(33 (2) 32, 33 (2) 64, 33 (2) 96, where each item means the filter kernel size and (stride)with the number of filters) and two fully-connected (the numbers of output nodes are 128 and 10,respectively) layers are used for MNIST. For the permutation-invariant MNIST setting, 784-512-256-256-128-10 nodes of fully-connected layers are used. Four convolution (5 5 (1) 64, 33 (2)64, 33 (2) 64, and 33 (2) 96) and three fully-connected (128, 128, and 10 nodes) layers are usedfor CIFAR-10. Weights on the decoding (reconstruction) path are tied with corresponding weightson the encoding path as shown in Figure 4 (transposed convolution for the tied convolution layerand transposed matrix multiplication for the tied fully-connected layer).In Figure 4, z0is perturbed directly from zby adding Gaussian random noise for random pertur-bation. For semantic perturbation, z0is indirectly generated from y0which is perturbed by addingGaussian random noise on ybased on Eq. (9). For perturbation, base activation vector ( zis the baseFigure 4: Target network architecture; 3 convolution and 2 fully-connected layers were used forMNIST, 5 fully-connected layers were used for permutation-invariant MNIST, and 4 convolutionand 3 fully-connected layers were used for CIFAR-10.6Under review as a conference paper at ICLR 2017Table 1: Error rate (%) on the test set using the model with the best performance on the validationset. Numbers on the first row of each sub-table are the number of randomly chosen per-class train-ing examples. The average performance and the standard deviation of three different random-splitdatasets (except for the case using the entire training set in the last column) are described in this table(error rate on each random set is summarized in Appendix (A3)). 
Performance of three previous ap-proaches (with gray background; previous-1, 2, 3 are feed-forward model Figure 2(a), joint learningmodel with recon-one Figure 2(b), joint learning model with recon-all Figure 2(b), respectively) andthe proposed methods (proposed-1, 2, 3 are baseline Figure 1(b), random perturbation Figure 1(c),semantic perturbation Figure 1(c), respectively) is summarized.dataset number of per-class examples chosen from 50k entire MNIST training examples entire setMNIST 10 20 50 100 200 500 1k 2k 50kprevious-1 24.55 (3.04) 16.00 (1.33) 10.35 (0.66) 6.58 (0.42) 4.71 (0.28) 2.94 (0.23) 1.90 (0.27) 1.45 (0.08) 1.04previous-2 21.67 (3.19) 13.60 (0.99) 7.85 (0.10) 5.44 (0.37) 4.14 (0.08) 2.50 (0.15) 1.84 (0.07) 1.45 (0.07) 1.12previous-3 20.11 (2.81) 13.69 (0.62) 9.15 (0.15) 6.77 (0.25) 5.39 (0.11) 3.89 (0.27) 2.91 (0.17) 2.28 (0.10) 1.87proposed-1 21.35 (1.16) 11.65 (1.15) 6.33 (0.10) 4.32 (0.31) 3.07 (0.11) 1.98 (0.11) 1.29 (0.09) 0.94 (0.02) 0.80proposed-2 20.17 (1.52) 11.68 (0.81) 6.24 (0.29) 4.12 (0.24) 3.04 (0.13) 1.88 (0.05) 1.24 (0.03) 0.96 (0.08) 0.65proposed-3 20.11 (0.81) 10.59 (0.74) 5.92 (0.12) 3.79 (0.23) 2.72 (0.09) 1.78 (0.05) 1.15 (0.01) 0.88 (0.03) 0.62dataset number of per-class examples chosen from 40k entire CIFAR-10 training examples entire setCIFAR-10 10 20 50 100 200 500 1k 2k 40kprevious-1 73.82 (1.43) 68.99 (0.54) 61.30 (0.83) 54.93 (0.56) 46.97 (0.59) 33.69 (0.43) 26.63 (0.39) 20.97 (0.09) 17.80previous-2 75.68 (1.56) 69.05 (1.13) 61.44 (0.63) 55.02 (0.34) 46.18 (0.51) 33.62 (0.38) 26.78 (0.48) 21.25 (0.40) 17.68previous-3 73.33 (1.06) 67.63 (0.56) 62.59 (0.76) 56.37 (0.20) 50.51 (0.61) 41.26 (0.73) 32.55 (1.20) 26.38 (0.08) 22.71proposed-1 71.63 (0.69) 66.17 (0.40) 58.91 (0.86) 52.65 (0.28) 43.46 (0.30) 31.86 (0.54) 25.76 (0.31) 21.06 (0.18) 17.45proposed-2 71.69 (0.25) 66.75 (0.54) 58.95 (0.63) 53.01 (0.26) 43.71 (0.19) 31.80 (0.18) 25.50 (0.33) 20.81 (0.27) 17.43proposed-3 71.50 (1.14) 66.87 (0.17) 58.30 (0.62) 52.32 (0.08) 42.98 (0.34) 30.91 (0.23) 24.81 (0.26) 20.19 (0.25) 16.16vector for the random perturbation and yis the base vector for the semantic perturbation) is scaled to[0.0, 1.0], and the zero-mean Gaussian noise with 0.2 of standard deviation is added (via element-wise addition) on the normalized base activation. This perturbed scaled activation is de-scaled withthe original min and max activations of the base vector.Initial learning rates are 0.005 and 0.001 for MNIST and permutation-invariant MNIST, and 0.002for CIFAR-10, respectively. The learning rates are decayed by a factor of 5 every 40 epochs until the120-th epoch. For both datasets, the minibatch size is set to 100, and the target objective is optimizedusing Adam optimizer (Kingma & Ba, 2015) with a momentum 0.9. All the ’s for reconstructionlosses in Eq. (11) and Eq. (12) are 0.03 and 0.01 for MNIST and CIFAR-10, respectively. The sameweighting factors for reconstruction losses (0.03 for MNIST and 0.01 for CIFAR-10) are used for1in Eq (8), and 1.0 is used for 2.Input data is first scaled to [0.0, 1.0] and then whitened by the average across all the training exam-ples. In CIFAR-10, random cropping (24 24 image is randomly cropped from the original 32 32image) and random horizontal flipping (mirroring) are used for data augmentation. We selectedthe network that performed best on the validation dataset for evaluation on the test dataset. 
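For concreteness, here is a sketch of the perturbation procedure described above combined with the semantic noise modeling of Section 2.2: the base activation is min-max scaled to [0, 1], zero-mean Gaussian noise with standard deviation 0.2 is added element-wise, and the result is de-scaled with the original min and max before decoding through the reconstruction path. The function names, the PyTorch-style calls, and per-sample scaling over the feature dimension are assumptions; the paper's implementation used TensorFlow.

```python
import torch

# Sketch of the perturbation described above: min-max scale the base activation to [0, 1],
# add zero-mean Gaussian noise (std 0.2) element-wise, then de-scale with the original
# min/max. Per-sample scaling over the feature dimension is an assumption.
def add_scaled_noise(v, std=0.2):
    v_min = v.min(dim=1, keepdim=True).values
    v_max = v.max(dim=1, keepdim=True).values
    scaled = (v - v_min) / (v_max - v_min + 1e-8)
    noisy = scaled + std * torch.randn_like(scaled)
    return noisy * (v_max - v_min) + v_min

# Semantic perturbation (Eqs. (7)-(9)): perturb the logits, decode both versions through
# g2_dec, take the difference as the semantic additive noise, and add it to z.
def semantic_perturb(z, f2, g2_dec):
    y = f2(z)
    y_noisy = add_scaled_noise(y)          # y' = y + y_e on the scaled logits
    z_e = g2_dec(y_noisy) - g2_dec(y)      # modeled semantic additive noise (Eq. 9)
    z_prime = z + z_e                      # z' = z + z_e (Eq. 7)
    y_hat = f2(z_prime)                    # output of the perturbed latent vector
    return z_prime, y_hat
```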
All theexperiments are performed with TensorFlow (Abadi et al., 2015).4.3 Q UANTITATIVE ANALYSISThree previous approaches (a traditional feed-forward model, a joint learning model with the inputreconstruction loss, and a joint learning model with reconstruction losses of all the intermediatelayers including the input layer) are compared with the proposed methods (the baseline model inFigure 1(b), and the stochastic perturbation model in Figure 1(c) with two different perturbationmethods; random and semantic). We measure the classification performance according to varyingsizes of training set (examples randomly chosen from the original training dataset). Performance isaveraged over three different random trials.7Under review as a conference paper at ICLR 2017(a) (b) Figure 5: Examples reconstructed from the perturbed latent vectors via (a) random perturbation,and (b) semantic perturbation (top row shows the original training examples). More examples aresummarized in Appendix (A4.1).Table 1 summarizes the classification performance for MNIST and CIFAR-10. As we expected,the base model obtained by maximizing the sum of mutual informations ( proposed-base ) mostlyperforms better than previous approaches, and the model with the semantic perturbation ( proposed-perturb (semantic) ) performs best among all the comparison targets. Especially in MNIST, the errorrate of ‘ proposed-perturb (semantic) ’ with 2k per-class training examples is less than the error rateof all types of previous works with the entire training set (approximately 5k per-class examples).We further verify the proposed method on the permutation-invariant MNIST task with a standardfeed-forward neural network. Classification performance is measured against three different sizes oftraining set (1k, 2k, and 5k per-class training examples). ‘ Proposed-perturb (semantic) ’ achieves thebest performance among all the configurations; 2.57%, 1.82%, and 1.28% error rates for 1k, 2k, and5k per-class training examples, respectively. The joint learning model with the input reconstructionloss performs best among three previous approaches; 2.72%, 1.97%, and 1.38% error rates for 1k,2k, and 5k per-class training examples, respectively.4.4 Q UALITATIVE ANALYSISAs mentioned before, random perturbation by adding unstructured noise directly to the latent rep-resentation cannot guarantee preserving the semantics of the original representation. We com-pared two different perturbation methods (random and semantic) by visualizing the examples recon-structed from the perturbed latent vectors (Figure 5). Top row is the original examples selected fromtraining set (among 2k per-class training examples), and the rest are the reconstructions of their per-turbed latent representations. Based on the architecture described in Figure 1(b), we generated fivedifferent perturbed latent representations according to the type of perturbation, and reconstructedthe perturbed latent vectors through decoding path for reconstruction.Figure 5(a) and (b) show the examples reconstructed from the random and semantic perturbations,respectively. For both cases, zero-mean Gaussian random noise (0.2 standard deviation) is used forperturbation. As shown in Figure 5(a), random perturbation partially destroys the original semantics;for example, semantics of ‘1’ is mostly destroyed under random perturbation, and some examplesof ‘3’ are reconstructed as being similar to ‘8’ rather than its original content ‘3’. Figure 5(b)shows the examples reconstructed from the semantic perturbation. 
The reconstructed examples showsubtle semantic variations while preserving the original semantic contents; for example, thicknessdifference in ‘3’ (example on the third row) or writing style difference in ‘8’ (openness of the topleft corner).Figure 6 shows the overall effect of the perturbation. In this analysis, 100 per-class MNIST exam-ples are used for training. From the trained model based on the architecture described in Figure 1(b),latent representations zof all the 50k examples (among 50k examples, only 1k examples were usedfor training) are visualized by using t-SNE (Maaten & Hinton, 2008). Only the training examples ofthree classes (0, 1, and 9) among ten classes are depicted as black circles for visual discrimination in8Under review as a conference paper at ICLR 2017(a) 0123456789(b) (c) Figure 6: Training examples (circles or crosses with colors described below) over the examplesnot used for training (depicted as background with different colors); (a) training examples (blackcircles), (b) training examples (yellow circles) with 3 random-perturbed samples (blue crosses),and (c) training examples (yellow circles) with 3 semantic-perturbed samples (blue crosses). Bestviewed in color.Figure 6(a). The rest of the examples which were not used for training (approximately 4.9k exam-ples per class) are depicted as a background with different colors. We treat the colored backgroundexamples (not used for training) as a true distribution of unseen data in order to estimate the gener-alization level of learned representation according to the type of perturbation. Figure 6(b) and (c)show the training examples (100 examples per class with yellow circles) and their perturbed ones(3sampled from each example with blue crosses) through random and semantic perturbations,respectively.In Figure 6(b), perturbed samples are distributed near the original training examples, but some sam-ples outside the true distribution cannot be identified easily with appropriate classes. This can beexplained with Figure 5(a), since some perturbed samples are ambiguous semantically. In Fig-ure 6(c), however, most of the perturbed samples evenly cover the true distribution. As mentionedbefore, stochastic perturbation with the semantic additive noise during training implicitly incurs theeffect of augmentation on the latent space while resulting in better generalization. Per-class t-SNEresults are summarized in Appendix (A4.2).5 D ISCUSSIONWe introduced a novel latent space modeling method for supervised tasks based on the standardfeed-forward neural network architecture. The presented model simultaneously optimizes both su-pervised and unsupervised losses based on the assumption that the better latent representation canbe obtained by maximizing the sum of hierarchical mutual informations. Especially the stochas-tic perturbation process which is achieved by modeling the semantic additive noise during trainingenhances the representational power of the latent space. From the proposed semantic noise model-ingprocess, we can expect improvement of generalization performance in supervised learning withimplicit semantic augmentation effect on the latent space.The presented model architecture can be intuitively extended to semi-supervised learning becauseit is implemented as the joint optimization of supervised and unsupervised objectives. For semi-supervised learning, however, logical link between features learned from labelled and unlabelleddata needs to be considered additionally. 
We leave the extension of the presented approach to semi-supervised learning for the future.REFERENCESMart ́ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S.Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, AndrewHarp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, ManjunathKudlur, Josh Levenberg, Dan Man ́e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah,Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vin-9Under review as a conference paper at ICLR 2017cent Vanhoucke, Vijay Vasudevan, Fernanda Vi ́egas, Oriol Vinyals, Pete Warden, Martin Watten-berg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learningon heterogeneous systems, 2015. URL http://tensorflow.org/ . Software available fromtensorflow.org.Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointlylearning to align and translate. In International Conference on Learning Representations (ICLR) ,2015.Yoshua Bengio, Pascal Lamblin, Dan Popovici, Hugo Larochelle, et al. Greedy layer-wise trainingof deep networks. In Advances in Neural Information Processing Systems (NIPS) , 2007.Kyunghyun Cho and Xi Chen. Classifying and visualizing motion capture sequences using deepneural networks. In International Conference on Computer Vision Theory and Applications , 2014.Ronan Collobert and Jason Weston. A unified architecture for natural language processing: Deepneural networks with multitask learning. In International Conference on Machine Learning(ICML) , 2008.Ian Goodfellow, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Multi-prediction deep boltz-mann machines. In Advances in Neural Information Processing Systems (NIPS) , 2013.Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recur-rent neural networks. In International conference on acoustics, speech and signal processing ,2013.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog-nition. In Computer Vision and Pattern Recognition (CVPR) , 2016.Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly,Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networksfor acoustic modeling in speech recognition: The shared views of four research groups. SignalProcessing Magazine, IEEE , 29(6):82–97, 2012.Geoffrey E. Hinton, Simon Osindero, and Yee Whye Teh. A fast learning algorithm for deep beliefnets. Neural Computation , 18:1527–1554, 2006.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In InternationalConference on Learning Representations (ICLR) , 2015.Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convo-lutional neural networks. In Advances in Neural Information Processing Systems (NIPS) , 2012.Hugo Larochelle and Yoshua Bengio. Classification using discriminative restricted boltzmann ma-chines. In International Conference on Machine Learning (ICML) , 2008a.Hugo Larochelle and Yoshua Bengio. Classification using discriminative restricted boltzmann ma-chines. In International Conference on Machine Learning (ICML) , 2008b.Yann LeCun, L ́eon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied todocument recognition. Proceedings of the IEEE , 86(11):2278–2324, 1998.Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. 
Journal of MachineLearning Research (JMLR) , 9(Nov):2579–2605, 2008.Jonathan Masci, Ueli Meier, Dan Cires ̧an, and J ̈urgen Schmidhuber. Stacked convolutional auto-encoders for hierarchical feature extraction. In International Conference on Artificial NeuralNetworks , 2011.Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems(NIPS) , 2015.Ruslan Salakhutdinov and Geoffrey E Hinton. Deep boltzmann machines. In Artificial Intelligenceand Statistics Conference (AISTATS) , 2009.10Under review as a conference paper at ICLR 2017Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale imagerecognition. In International Conference on Learning Representations (ICLR) , 2015.Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol.Stacked denoising autoencoders: Learning useful representations in a deep network with a localdenoising criterion. Journal of Machine Learning Research (JMLR) , 11:3371–3408, 2010.Satosi Watanabe. Information theoretical analysis of multivariate correlation. IBM Journal of re-search and development , 4(1):66–82, 1960.Yuting Zhang, Kibok Lee, and Honglak Lee. Augmenting supervised neural networks with unsu-pervised objectives for large-scale image classification. In International Conference on MachineLearning (ICML) , 2016.Junbo Zhao, Michael Mathieu, Ross Goroshin, and Yann Lecun. Stacked what-where auto-encoders.InInternational Conference on Learning Representations (ICLR) , 2015.11Under review as a conference paper at ICLR 2017APPENDIX(A1) D ERIVATION OF RECONSTRUCTION ERRORS FROM CONDITIONAL ENTROPY TERMSExtended from Section 2. From the lower bound in Eq. (3), we consider the following optimizationproblem (refer to ‘ Section 2. From mutual information to autoencoders ’ in (Vincent et al., 2010)):maxf1;01;2;02gEq(X;Z;Y )[logq(XjZ)] +Eq(X;Z;Y )[logq(ZjY)]: (13)Here, we denote q(X;Z;Y )an unknown joint distribution. Note that ZandYare respectivelythe variables transformed from parametric mappings Z=f1(X)andY=f2(Z)(see Fig. 1).q(X;Z;Y )then can be reduced to q(X)fromq(ZjX;1) =(Zf1(X))andq(YjZ;2) =(Yf2(Z))wheredenotes Dirac-delta function.From the Kullback-Leibler divergence that DKL(qjjp)0for any two distributions pandq, theoptimization in Eq. (13) corresponds to the following optimization problem where p()denotes aparametric distribution:maxf1;01;2;02gEq(X)[logp(XjZ;01)] +Eq(X)[logp(ZjY;02)]: (14)By replacing q(X)with a sample distribution q0(X)and putting all parametric dependencies be-tweenX,ZandY, we will havemaxf1;01;2;02gEq0(X)[logp(XjZ=f1(X);01)] +Eq0(X)[logp(ZjY=f2(f1(X));02)]:(15)For a given input sample xofX, it is general to interpret xRandzRas the parameters of distributionsp(XjXR=xR)andp(ZjZR=zR)which reconstruct xandzwith high probability (i.e. xRandzRare not exact reconstructions of xandz). SincexRandzRare real-valued, we assume Gaussiandistribution for these conditional distributions, that is,p(XjXR=xR) =N(xR; 20I)p(ZjZR=zR) =N(zR; 20I):(16)The assumptions yield logp(j)/LL2(;).With the following relations for logterms in Eq. (15),p(XjZ=f1(x);01) =p(XjXR=g01(f1(x)))p(ZjY=f2(f1(x));02) =p(ZjZR=g02(f2(f1(x)));(17)the optimization problem in Eq. 
(15) corresponds to the minimization problem of reconstructionerrors for input examples x(i)as below:minf1;01;2;02gXiLL2(x(i);x(i)R) +LL2(z(i);z(i)R): (18)12Under review as a conference paper at ICLR 2017(A2) L ADDER NETWORK ,A REPRESENTATIVE SEMI -SUPERVISED LEARNING MODELExtended from Section 3. We performed experiments with a ladder network model (Rasmus et al.,2015) in order to estimate the performance on pure supervised tasks according to different sizes oftraining set. We used the code (https://github.com/rinuboney/ladder.git) for this experiment. Thenetwork architecture implemented on the source code is used as is; (784-1000-500-250-250-250-10). Based on the same network architecture, we implemented the proposed stochastic perturbationmodel described in Figure 1(c) and compared the classification performance with the ladder networkas described in Table 2 (we did not focus on searching the optimal hyperparameters for the proposedmodel in this experiment). As summarized in the bottom of the table (mean over 3 random trials),the proposed semantic noise modeling method shows a fairly large performance gain compared tothe ladder network model with small-scale datasets (e.g., in a case of 10 per-class training examples,the proposed method achieves 22.11% of error rate, while the ladder network shows 29.66%).Table 2: Classification performance (error rate in %) of the ladder network and the proposed modelon three different sets of randomly chosen training examples (MNIST).set No.1 (# training examples per class) 10 20 50 100 200 500 1k 2k (all) 5kladder network model; Figure 3 25.85 16.48 9.26 6.00 4.66 3.07 2.15 1.26 0.91proposed-perturb (semantic); Figure 1(c) 19.76 12.33 8.77 6.06 4.59 2.93 1.87 1.31 0.93set No.2 (# training examples per class) 10 20 50 100 200 500 1k 2kladder network model; Figure 3 33.14 17.46 10.44 6.67 4.43 2.82 1.94 1.37proposed-perturb (semantic); Figure 1(c) 23.36 15.35 9.43 5.75 4.43 2.99 1.87 1.39set No.3 (# training examples per class) 10 20 50 100 200 500 1k 2kladder network model; Figure 3 29.99 16.99 9.73 7.34 4.39 3.00 2.12 1.47proposed-perturb (semantic); Figure 1(c) 23.21 13.98 8.83 6.51 4.32 2.94 2.22 1.49mean over 3 random trials 10 20 50 100 200 500 1k 2k (all) 5kladder network model; Figure 3 29.66 16.98 9.81 6.67 4.49 2.96 2.07 1.37 0.91proposed-perturb (semantic); Figure 1(c) 22.11 13.89 9.01 6.11 4.45 2.95 1.99 1.40 0.9313Under review as a conference paper at ICLR 2017(A3) Q UANTITATIVE ANALYSISExtended from Section 4.3. Among the total 50k and 40k training examples in MNIST and CIFAR-10, we randomly select the examples for training. Classification performance according to threedifferent randomly chosen training sets are summarized in Table 3 (MNIST) and Table 4 (CIFAR-10). Further experiments with denoising constraints are also included. Zero-mean Gaussian randomnoise with 0.1 standard deviation is used for noise injection. 
Denoising function helps to achieveslightly better performance on MNIST, but it results in performance degradation on CIFAR-10 (wedid not focus on searching the optimal parameters for noise injection in this experiments).Table 3: Classification performance (error rate in %) on three different sets of randomly chosentraining examples (MNIST).Set No.1 (# train examples per class) 10 20 50 100 200 500 1k 2k (all) 5kfeed-forward model; Figure 2(a) 22.61 14.20 11.25 6.37 4.34 2.63 1.83 1.56 1.04joint learning model with recon-one; Figure 2(b) 18.69 12.21 7.84 5.17 4.02 2.58 1.79 1.47 1.12joint learning model with recon-one with denoising constraints 20.39 11.91 7.41 4.64 3.65 2.57 1.97 1.53 0.97joint learning model with recon-all; Figure 2(b) 18.82 12.82 9.34 6.43 5.23 4.12 2.68 2.42 1.87joint learning model with recon-all with denoising constraints 17.93 11.76 7.32 4.78 3.91 3.04 2.52 1.99 1.36proposed-base; Figure 1(b) 20.23 10.18 6.47 3.89 3.04 1.89 1.33 0.91 0.80proposed-base with denoising constraints 19.88 10.89 6.62 4.26 3.40 2.44 2.11 1.54 1.13proposed-perturb (random); Figure 1(c) 18.38 10.58 6.64 3.78 3.14 1.90 1.21 0.89 0.65proposed-perturb (semantic); Figure 1(c) 19.33 9.72 5.98 3.47 2.84 1.84 1.16 0.84 0.62Set No.2 (# train examples per class) 10 20 50 100 200 500 1k 2kfeed-forward model; Figure 2(a) 28.84 17.36 10.14 6.20 4.78 3.02 1.61 1.41joint learning model with recon-one; Figure 2(b) 26.09 14.40 7.98 5.18 4.17 2.29 1.94 1.52joint learning model with recon-one with denoising constraints 27.69 13.11 6.95 5.07 3.54 2.37 1.83 1.28joint learning model with recon-all; Figure 2(b) 24.01 14.13 8.98 6.84 5.44 3.51 2.98 2.18joint learning model with recon-all with denoising constraints 23.05 13.29 7.79 5.12 3.92 3.01 2.27 1.84proposed-base; Figure 1(b) 22.95 12.98 6.27 4.43 3.22 2.14 1.37 0.96proposed-base with denoising constraints 26.96 12.21 6.45 4.62 3.13 2.53 1.88 1.49proposed-perturb (random); Figure 1(c) 22.10 12.52 5.97 4.26 2.86 1.94 1.23 0.92proposed-perturb (semantic); Figure 1(c) 21.22 11.52 5.75 3.91 2.61 1.73 1.14 0.89Set No.3 (# train examples per class) 10 20 50 100 200 500 1k 2kfeed-forward model; Figure 2(a) 22.20 16.43 9.67 7.16 5.02 3.17 2.25 1.39joint learning model with recon-one; Figure 2(b) 20.23 14.19 7.73 5.96 4.22 2.62 1.79 1.35joint learning model with recon-one with denoising constraints 19.32 12.25 7.44 5.39 3.58 2.37 1.49 1.56joint learning model with recon-all; Figure 2(b) 17.51 14.12 9.12 7.04 5.49 4.05 3.08 2.25joint learning model with recon-all with denoising constraints 17.07 12.50 7.86 5.48 4.05 2.97 2.02 1.98proposed-base; Figure 1(b) 20.86 11.79 6.25 4.63 2.96 1.91 1.16 0.96proposed-base with denoising constraints 19.89 11.30 6.26 4.57 3.50 2.63 1.61 1.47proposed-perturb (random); Figure 1(c) 20.02 11.94 6.12 4.32 3.13 1.81 1.28 1.08proposed-perturb (semantic); Figure 1(c) 19.78 10.53 6.03 4.00 2.70 1.76 1.14 0.9214Under review as a conference paper at ICLR 2017Table 4: Classification performance (error rate in %) on three different sets of randomly chosentraining examples (CIFAR-10).Set No.1 (# train examples per class) 10 20 50 100 200 500 1k 2k (all) 4kfeed-forward model; Figure 2(a) 73.30 69.25 62.42 55.65 47.71 34.30 27.04 21.06 17.80joint learning model with recon-one; Figure 2(b) 75.19 70.38 62.25 55.30 46.89 34.12 26.63 21.05 17.68joint learning model with recon-one with denoising constraints 73.72 68.20 61.99 55.23 46.64 36.37 29.78 25.53 21.73joint learning model with recon-all; Figure 2(b) 74.79 68.33 62.92 56.24 51.37 40.30 30.91 
26.49 22.71joint learning model with recon-all with denoising constraints 76.56 69.67 64.53 57.88 52.74 42.24 36.90 30.93 27.41proposed-base; Figure 1(b) 70.79 66.57 59.91 52.98 43.29 32.25 26.19 20.92 17.45proposed-base with denoising constraints 71.03 67.49 60.37 53.52 44.28 33.40 28.00 25.06 21.34proposed-perturb (random); Figure 1(c) 71.89 67.12 59.22 52.79 43.87 31.82 25.04 20.97 17.43proposed-perturb (semantic); Figure 1(c) 71.59 66.90 58.64 52.34 42.74 30.94 24.45 20.10 16.16Set No.2 (# train examples per class) 10 20 50 100 200 500 1k 2kfeed-forward model; Figure 2(a) 72.39 69.49 60.45 54.85 46.91 33.39 26.73 21.00joint learning model with recon-one; Figure 2(b) 74.06 69.14 60.71 54.54 45.70 33.54 27.43 20.90joint learning model with recon-one with denoising constraints 76.40 69.33 60.28 55.38 47.40 36.29 29.31 24.60joint learning model with recon-all; Figure 2(b) 72.28 67.60 61.53 56.65 49.99 42.08 32.99 26.33joint learning model with recon-all with denoising constraints 73.90 69.23 61.90 57.99 52.35 45.12 37.23 30.14proposed-base; Figure 1(b) 72.49 65.62 57.82 52.66 43.20 32.24 25.60 21.32proposed-base with denoising constraints 72.99 66.75 57.78 53.81 44.33 33.56 28.40 25.03proposed-perturb (random); Figure 1(c) 71.84 65.98 58.08 53.37 43.44 31.56 25.69 21.03proposed-perturb (semantic); Figure 1(c) 72.85 66.65 57.44 52.21 42.74 31.17 24.99 20.54Set No.3 (# train examples per class) 10 20 50 100 200 500 1k 2kfeed-forward model; Figure 2(a) 75.78 68.24 61.02 54.29 46.28 33.38 26.11 20.85joint learning model with recon-one; Figure 2(b) 77.79 67.62 61.37 55.22 45.96 33.21 26.29 21.81joint learning model with recon-one with denoising constraints 76.60 69.27 61.13 55.10 47.50 37.12 29.63 24.88joint learning model with recon-all; Figure 2(b) 72.92 66.97 63.31 56.23 50.16 41.41 33.75 26.31joint learning model with recon-all with denoising constraints 76.83 68.53 65.58 58.29 52.43 45.42 39.01 32.32proposed-base; Figure 1(b) 71.60 66.31 58.99 52.30 43.88 31.10 25.48 20.95proposed-base with denoising constraints 72.39 67.20 60.60 52.64 44.62 33.52 28.01 25.25proposed-perturb (random); Figure 1(c) 71.34 67.15 59.55 52.86 43.81 32.01 25.78 20.42proposed-perturb (semantic); Figure 1(c) 70.06 67.07 58.83 52.41 43.47 30.61 25.00 19.9415Under review as a conference paper at ICLR 2017(A4.1) Q UALITATIVE ANALYSISExtended from Section 4.4. Figure 7 shows reconstructed examples from perturbed (random orsemantic) latent representations (refer to Figure 5 and the analysis described in Section 4.4).Example.1 random perturbation Example.1 semantic perturbation Example.2 random perturbation Example.2 semantic perturbation Figure 7: For each example, top row is the original examples selected from the training set, andthe rest are reconstructed from the perturbed representations via random (left) and semantic (right)perturbations.16Under review as a conference paper at ICLR 2017(A4.2) Q UALITATIVE ANALYSISExtended from Section 4.4. Figure 8 shows the t-SNE results per class on MNIST. The overalltendency is similar to the description in Section 4.4.17Under review as a conference paper at ICLR 2017Figure 8: From top to bottom: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. From left to right: training exam-ples (circle), training examples (circle) + random-perturbed samples (cross), and training examples(circle) + semantic-perturbed samples (cross). Best viewed in color.18
rJvfq-GEg
SyCSsUDee
ICLR.cc/2017/conference/-/paper44/official/review
{"title": "Semantic noise modelling", "rating": "2: Strong rejection", "review": "This paper introduces a maximum total correlation procedure, adds a target and then adds noise perturbations.\n\nTechnical issues:\n\nThe move from (1) to (2) is problematic. Yes it is a lower bound, but by igoring H(Z), equation (2) ignores the fact that H(Z) will potentially vary more significantly that H(Z|Y). As a result of removing H(Z), the objective (2) encourages Z that are low entropy as the H(Z) term is ignored, doubly so as low entropy Z results in low entropy Z|Y. Yes the -H(X|Z) mitigates against a complete entropy collapse for H(Z), but it still neglects critical terms. In fact one might wonder if this is the reason that semantic noise addition needs to be done anyway, just to push up the entropy of Z to stop it reducing too much.\n\nIn (3) arbitrary balancing paramters lamda_1 and lambda_2 are introduced ex-nihilo - they were not there in (2). This is not ever justified.\n\nThen in (5), a further choice is made by simply adding L_{NLL} to the objective. But in the supervised case, the targets are known and so turn up in H(Z|Y). Hence now H(Z|Y) should be conditioned on the targets. However instead another objective is added again without justification, and the conditional entropy of Z is left disconnected from the data it is to be conditioned on. One might argue the C(X,Y,Z) simply acts as a prior on the networks (and hence implicitly on the weights) that we consider, which is then combined with a likelihood term, but this case is not made. In fact there is no explicit probabilistic or information theoretic motivation for the chosen objective.\n\nGiven these issues, it is then not too surprising that some further things need to be done, such as semantic noise addition to actually get things working properly. It may be the form of noise addition is a good idea, but given the troublesome objective being used in the first place, it is very hard to draw conclusions.\n\nIn summary, substantially better theoretical justification of the chosen model is needed, before any reasonable conclusion on the semantic noise modelling can be made.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Semantic Noise Modeling for Better Representation Learning
["Hyo-Eun Kim", "Sangheum Hwang", "Kyunghyun Cho"]
Latent representation learned from multi-layered neural networks via hierarchical feature abstraction enables recent success of deep learning. Under the deep learning framework, generalization performance highly depends on the learned latent representation. In this work, we propose a novel latent space modeling method to learn better latent representation. We designed a neural network model based on the assumption that good base representation for supervised tasks can be attained by maximizing the sum of hierarchical mutual informations between the input, latent, and output variables. From this base model, we introduce a semantic noise modeling method which enables semantic perturbation on the latent space to enhance the representational power of learned latent feature. During training, latent vector representation can be stochastically perturbed by a modeled additive noise while preserving its original semantics. It implicitly brings the effect of semantic augmentation on the latent space. The proposed model can be easily learned by back-propagation with common gradient-based optimization algorithms. Experimental results show that the proposed method helps to achieve performance benefits against various previous approaches. We also provide the empirical analyses for the proposed latent space modeling method including t-SNE visualization.
["Deep learning", "Supervised Learning"]
https://openreview.net/forum?id=SyCSsUDee
https://openreview.net/pdf?id=SyCSsUDee
https://openreview.net/forum?id=SyCSsUDee&noteId=rJvfq-GEg
Under review as a conference paper at ICLR 2017SEMANTIC NOISE MODELING FORBETTER REPRESENTATION LEARNINGHyo-Eun Kimand Sangheum HwangLunit Inc.Seoul, South Koreafhekim, shwang g@lunit.ioKyunghyun ChoCourant Institute of Mathematical Sciences and Centre for Data ScienceNew York UniversityNew York, NY 10012, USAkyunghyun.cho@nyu.eduABSTRACTLatent representation learned from multi-layered neural networks via hierarchicalfeature abstraction enables recent success of deep learning. Under the deep learn-ing framework, generalization performance highly depends on the learned latentrepresentation. In this work, we propose a novel latent space modeling method tolearn better latent representation. We designed a neural network model based onthe assumption that good base representation for supervised tasks can be attainedby maximizing the sum of hierarchical mutual informations between the input,latent, and output variables. From this base model, we introduce a semantic noisemodeling method which enables semantic perturbation on the latent space to en-hance the representational power of learned latent feature. During training, latentvector representation can be stochastically perturbed by a modeled additive noisewhile preserving its original semantics. It implicitly brings the effect of semanticaugmentation on the latent space. The proposed model can be easily learned byback-propagation with common gradient-based optimization algorithms. Experi-mental results show that the proposed method helps to achieve performance ben-efits against various previous approaches. We also provide the empirical analysesfor the proposed latent space modeling method including t-SNE visualization.1 I NTRODUCTIONEnhancing the generalization performance against unseen data given some sample data is the mainobjective in machine learning. Under that point of view, deep learning has been achieved manybreakthroughs in several domains such as computer vision (Krizhevsky et al., 2012; Simonyan &Zisserman, 2015; He et al., 2016), natural language processing (Collobert & Weston, 2008; Bah-danau et al., 2015), and speech recognition (Hinton et al., 2012; Graves et al., 2013). Deep learningis basically realized on deep layered neural network architecture, and it learns appropriate task-specific latent representation based on given training data. Better latent representation learned fromtraining data results in better generalization over the future unseen data. Representation learningor latent space modeling becomes one of the key research topics in deep learning. During the pastdecade, researchers focused on unsupervised representation learning and achieved several remark-able landmarks on deep learning history (Vincent et al., 2010; Hinton et al., 2006; Salakhutdinov &Hinton, 2009). In terms of utilizing good base features for supervised learning, the base representa-tion learned from unsupervised learning can be a good solution for supervised tasks (Bengio et al.,2007; Masci et al., 2011).The definition of ‘good’ representation is, however, different according to target tasks. In unsuper-vised learning, a model is learned from unlabelled examples. 
Its main objective is to build a modelCorresponding author1Under review as a conference paper at ICLR 2017to estimate true data distribution given examples available for training, so the learned latent rep-resentation normally includes broadly-informative components of the raw input data (e.g., mutualinformation between the input and the latent variable can be maximized for this objective). In su-pervised learning, however, a model is learned from labelled examples. In the case of classification,a supervised model learns to discriminate input data in terms of the target task using correspond-ing labels. Latent representation is therefore obtained to maximize the performance on the targetsupervised tasks.Since the meaning of good representations vary according to target tasks (unsupervised or super-vised), pre-trained features from the unsupervised model are not be guaranteed to be useful forsubsequent supervised tasks. Instead of the two stage learning strategy (unsupervised pre-trainingfollowed by supervised fine-tuning), several works focused on a joint learning model which opti-mizes unsupervised and supervised objectives concurrently, resulting in better generalization per-formance (Goodfellow et al., 2013; Larochelle & Bengio, 2008a; Rasmus et al., 2015; Zhao et al.,2015; Zhang et al., 2016; Cho & Chen, 2014).In this work, we propose a novel latent space modeling method for supervised learning as an exten-sion of the joint learning approach. We define a good latent representation of standard feed-forwardneural networks under the basis of information theory. Then, we introduce a semantic noise model-ingmethod in order to enhance the generalization performance. The proposed method stochasticallyperturbs the latent representation of a training example by injecting a modeled semantic additivenoise. Since the additive noise is randomly sampled from a pre-defined probability distribution ev-ery training iteration, different latent vectors from a single training example can be fully utilizedduring training. The multiple different latent vectors produced from a single training example aresemantically similar under the proposed latent space modeling method, so we can expect semanticaugmentation effect on the latent space.Experiments are performed on two datasets; MNIST and CIFAR-10. The proposed model results inbetter classification performance compared to previous approaches through notable generalizationeffect (stochastically perturbed training examples well cover the distribution of unseen data).2 M ETHODOLOGYThe proposed method starts from the existing joint learning viewpoint. This section first explainsthe process of obtaining a good base representation for supervised learning which is the basis of theproposed latent space modeling method. And then, we will describe how the proposed semanticnoise modeling method perturbs the latent space while maintaining the original semantics.2.1 B ASE JOINT LEARNING MODELIn a traditional feed-forward neural network model (Figure 1(a)), output Yof input data Xis com-pared with its true label, and the error is propagated backward from top to bottom, which implicitlylearns a task-specific latent representation Zof the input X. 
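Before the objective is formalized below, the pieces of Figure 1(a)/(b) may be easier to track with a concrete skeleton. The following is a minimal sketch under assumed layer sizes and module names (JointModel, f1, f2, g1, g2 are illustrative, not from the paper); it is not the authors' implementation, which uses convolutional encoders and ties the decoder weights to the encoder (see Section 4.2).

```python
# A minimal sketch of the model in Figure 1(a)/(b): an encoder f1 produces the
# latent representation z, a classifier f2 maps z to the output logit y, and two
# decoders reconstruct x from z and z from y. NOT the authors' code: layer sizes
# are placeholders, the encoder is fully connected, and the decoders are untied.
import torch
import torch.nn as nn

class JointModel(nn.Module):
    def __init__(self, in_dim=784, latent_dim=128, num_classes=10):
        super().__init__()
        self.f1 = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.ReLU())  # x -> z
        self.f2 = nn.Linear(latent_dim, num_classes)  # z -> y (pre-softmax logit)
        self.g1 = nn.Linear(latent_dim, in_dim)       # z -> x_R (reconstruction of x)
        self.g2 = nn.Linear(num_classes, latent_dim)  # y -> z_R (reconstruction of z)

    def forward(self, x):
        z = self.f1(x)
        y = self.f2(z)
        return z, y, self.g1(z), self.g2(y)
```

The reconstruction paths g1 and g2 are what the unsupervised part of the objective below acts on; the equations that follow use the same four mappings.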
As an extension of a joint learning approach, an objective to be optimized can be described in general as below (Larochelle & Bengio, 2008b):

$$\min_{\theta} \; \mathcal{L}_{\text{unsup}}(\theta) + \lambda \, \mathcal{L}_{\text{sup}}(\theta) \quad (1)$$

where $\mathcal{L}_{\text{unsup}}$ and $\mathcal{L}_{\text{sup}}$ are respectively an unsupervised loss and a supervised loss, and $\theta$ and $\lambda$ are the model parameters to be optimized during training and a loss weighting coefficient, respectively. In terms of modeling $\mathcal{L}_{\text{unsup}}$ in Eq. (1), we assume that a good latent representation $Z$ is attained by maximizing the sum of hierarchical mutual informations between the input, latent, and output variables, i.e. the sum of the mutual information between the input $X$ and $Z$ and the mutual information between $Z$ and the output $Y$. Each mutual information is decomposed into an entropy and a conditional entropy term, so the sum of hierarchical mutual informations is expressed as follows:

$$I(X;Z) + I(Z;Y) = H(X) - H(X|Z) + H(Z) - H(Z|Y) \quad (2)$$

[Figure 1: (a) Standard feed-forward neural network model, (b) feed-forward neural network model with reconstruction paths, and (c) feed-forward neural network model with reconstruction and stochastic perturbation paths.]

where $I(\cdot;\cdot)$ is the mutual information between random variables, and $H(\cdot)$ and $H(\cdot|\cdot)$ are the entropy and the conditional entropy of random variables, respectively. Note that the sum of those mutual informations becomes equivalent to the total correlation of $X$, $Z$, and $Y$ under the graphical structure of the general feed-forward model described in Figure 1(a); $P(X,Z,Y) = P(Y|Z)P(Z|X)P(X)$. The total correlation is equal to the sum of all pairwise mutual informations (Watanabe, 1960). Our objective is to find the model parameters which maximize $I(X;Z) + I(Z;Y)$. Since $H(X)$ and $H(Z)$ are non-negative, and $H(X)$ is constant in this case, the lower bound on $I(X;Z) + I(Z;Y)$ can be reduced to (see footnote 1):

$$I(X;Z) + I(Z;Y) \geq -H(X|Z) - H(Z|Y) \quad (3)$$

It is known that maximizing $-H(X|Z)$ can be formulated as minimizing the reconstruction error between the input $x^{(i)}$ (the $i$-th example sampled from $X$) and its reconstruction $x_R^{(i)}$ under the general auto-encoder framework (Vincent et al., 2010). Since $H(X|Z) + H(Z|Y)$ is proportional to the sum of reconstruction errors of $x^{(i)}$ (with its reconstruction $x_R^{(i)}$) and $z^{(i)}$ (with its reconstruction $z_R^{(i)}$), the target objective can be expressed as follows (refer to Appendix (A1) for the details of the mathematical derivations):

$$\min_{\theta} \sum_i \mathcal{L}_{\text{rec}}(x^{(i)}, x_R^{(i)}) + \mathcal{L}_{\text{rec}}(z^{(i)}, z_R^{(i)}) \quad (4)$$

where $\mathcal{L}_{\text{rec}}$ is a reconstruction loss.

Figure 1(b) shows the target model obtained from the assumption that a good latent representation $Z$ can be obtained by maximizing the sum of hierarchical mutual informations. Given an input sample $x$, the feed-forward vectors and their reconstructions are attained deterministically by:

$$z = f_{\theta_1}(x), \quad y = f_{\theta_2}(f_{\theta_1}(x)), \quad x_R = g_{\theta_1'}(z) = g_{\theta_1'}(f_{\theta_1}(x)), \quad z_R = g_{\theta_2'}(y) = g_{\theta_2'}(f_{\theta_2}(f_{\theta_1}(x))) \quad (5)$$

Footnote 1: Although $H(Z)$ is an upper bound of $H(Z|Y)$, $H(Z)$ is anyway affected by the process of $H(Z|Y)$ being minimized in Eq. (3). In Section 4, we experimentally show that we can obtain a good base model even from the relatively loose lower bound defined in Eq. (3).

Given a set of training pairs ($x^{(i)}$, $t^{(i)}$), where $x^{(i)}$ and $t^{(i)}$ are the $i$-th input example and its label, the target objective in Eq. (1) under the model described in Figure 1(b) can be organized as below (with real-valued input samples, the L2 loss $\mathcal{L}_{L2}$ is a proper choice for the reconstruction loss $\mathcal{L}_{\text{rec}}$):

$$\min_{\theta:\{\theta_1,\theta_1',\theta_2,\theta_2'\}} \sum_i \lambda \left( \mathcal{L}_{L2}(x^{(i)}, x_R^{(i)}) + \mathcal{L}_{L2}(z^{(i)}, z_R^{(i)}) \right) + \mathcal{L}_{NLL}(y^{(i)}, t^{(i)}) \quad (6)$$

where $\mathcal{L}_{NLL}$ is a negative log-likelihood loss for the target supervised task. Note that Eq. (6) represents the 'proposed-base' model in our experiments (see Section 4.3).

2.2 SEMANTIC NOISE MODELING

Based on the architecture shown in Figure 1(b) with the target objective in Eq. (6), we conjecture that stochastic perturbation on the latent space during training helps to achieve better generalization performance for supervised tasks. Figure 1(c) shows this strategy, which integrates the stochastic perturbation process during training. Suppose that $Z_P$ is a perturbed version of $Z$, and $Y_P$ is an output which is feed-forwarded from $Z_P$. Given a latent vector $z = f_{\theta_1}(x)$ from an input sample $x$,

$$z' = z + z_e \quad \text{and} \quad \hat{y} = f_{\theta_2}(z') \quad (7)$$

where $z'$ and $\hat{y}$ are a perturbed latent vector and its output respectively, and $z_e$ is an additive noise used in the perturbation process of $z$. Based on the architecture shown in Figure 1(c), the target objective can be modified as:

$$\min_{\theta:\{\theta_1,\theta_1',\theta_2,\theta_2'\}} \sum_i \lambda_1 \left( \mathcal{L}_{L2}(x^{(i)}, x_R^{(i)}) + \mathcal{L}_{L2}(z^{(i)}, z_R^{(i)}) \right) + \lambda_2 \, \mathcal{L}_{NLL}(y^{(i)}, t^{(i)}) + \mathcal{L}_{NLL}(\hat{y}^{(i)}, t^{(i)}) \quad (8)$$

Using random additive noise directly as $z_e$ is the most intuitive approach ('proposed-perturb (random)' in Section 4.3). However, preserving the semantics of the original latent representation $z$ cannot be guaranteed under direct random perturbation on the latent space. While the latent space is not directly interpretable in general, the output logit $y$ of the latent representation $z$ is interpretable, because the output logit is tightly coupled to the prediction of the target label. In order to preserve the semantics of the original latent representation after perturbation, we indirectly model a semantic noise on the latent space by adding small random noise directly on the output space.

Based on the output (pre-softmax) logit $y$, the semantic-preserving variation of $y$ (i.e. $y'$) can be modeled by $y' = y + y_e$, where $y_e$ is a random noise vector stochastically sampled from a zero-mean Gaussian with small standard deviation $\sigma$, $\mathcal{N}(0, \sigma^2 I)$. Now, the semantic perturbation $z'$ can be reconstructed from the random perturbation $y'$ through the decoding path $g_{\theta_2'}$ in Figure 1(c). From the original output logit $y$ and the randomly perturbed output logit $y'$, the semantic additive noise $z_e$ on the latent space can be approximately modeled as below:

$$z_R = g_{\theta_2'}(y), \quad z_R' = g_{\theta_2'}(y') = g_{\theta_2'}(y + y_e), \quad z_e \simeq z_R' - z_R = g_{\theta_2'}(y + y_e) - g_{\theta_2'}(y) \quad (9)$$

By using the modeled semantic additive noise $z_e$ and the original latent representation $z$, we can obtain the semantic perturbation $z'$ as well as its output $\hat{y}$ via Eq. (7) for our target objective in Eq. (8). From the described semantic noise modeling process ('proposed-perturb (semantic)' in Section 4.3), we expect to achieve better representation on the latent space. The effect of the proposed model in terms of the learned latent representation is explained in more detail in Section 4.4.

[Figure 2: Previous works for supervised learning; (a) traditional feed-forward model, and (b) joint learning model with both supervised and unsupervised losses.]

3 RELATED WORKS

Previous works on deep neural networks for supervised learning can be categorized into two types as shown in Figure 2: (a) a general feed-forward neural network model (LeCun et al., 1998; Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016), and (b) a joint learning model which optimizes unsupervised and supervised objectives at the same time (Zhao et al., 2015; Zhang et al., 2016; Cho & Chen, 2014). 
Here are the corresponding objective functions:min:f1;2gXiLNLL(y(i);t(i)) (10)min:f1;01;2gXiLL2(x(i);x(i)R) +LNLL(y(i);t(i)) (11)whereis a loss weighting coefficient between unsupervised and supervised losses.Since the feed-forward neural network model is normally implemented with multiple layers in adeep learning framework, the joint learning model can be sub-classified into two types according tothe type of reconstruction; reconstruction only with the input data x(Eq. (11)) and reconstructionwith all the intermediate features including the input data xas follows:minXi0@0LL2(x(i);x(i)R) +XjjLL2(h(i)j;h(i)jR) +LNLL(y(i);t(i))1A: (12)whereh(i)jandh(i)jRare thej-th hidden representation of the i-th training example and its reconstruc-tion.Another type of the joint learning model, a ladder network (Figure 3), was introduced for semi-supervised learning (Rasmus et al., 2015). The key concept of the ladder network is to obtainrobust features by learning de-noising functions ( g0) of the representations at every layer of themodel via reconstruction losses, and the supervised loss is combined with the reconstruction lossesin order to build the semi-supervised model. The ladder network achieved the best performance insemi-supervised tasks, but it is not appropriate for supervised tasks with small-scale training set (ex-perimental analysis for supervised learning on permutation-invariant MNIST is briefly summarized+ noise + noise Figure 3: Ladder network; a representative model for semi-supervised learning (Rasmus et al.,2015).5Under review as a conference paper at ICLR 2017in Appendix (A2)). The proposed model in this work can be extended to semi-supervised learning,but our main focus is to enhance the representational power on latent space given labelled data forsupervised learning. We leave the study for semi-supervised learning scenario based on the proposedmethodology as our future research.4 E XPERIMENTSFor quantitative analysis, we compare the proposed methodology with previous approaches de-scribed in Section 3; a traditional feed-forward supervised learning model and a joint learning modelwith two different types of reconstruction losses (reconstruction only with the first layer or with allthe intermediate layers including the first layer). The proposed methodology includes a baselinemodel in Figure 1(b) as well as a stochastic perturbation model in Figure 1(c). Especially in thestochastic perturbation model, we compare the random and semantic perturbations and present somequalitative analysis on the meaning of the proposed perturbation methodology.4.1 D ATASETSWe experiment with two public datasets; MNIST (including a permutation-invariant MNIST case)and CIFAR-10. MNIST (10 classes) consists of 50k, 10k, and 10k 28 28 gray-scale images fortraining, validation, and test datasets, respectively. CIFAR-10 (10 classes) consists of 50k and 10k3232 3-channel images for training and test sets, respectively. We split the 50k CIFAR-10 trainingimages into 40k and 10k for training and validation. Experiments are performed with differentsizes of training set (from 10 examples per class to the entire training set) in order to verify theeffectiveness of the proposed model in terms of generalization performance under varying sizes oftraining set.4.2 I MPLEMENTATIONFigure 4 shows the architecture of the neural network model used in this experiment. W’s areconvolution or fully-connected weights (biases are excluded for visual brevity). 
Three convolution(33 (2) 32, 33 (2) 64, 33 (2) 96, where each item means the filter kernel size and (stride)with the number of filters) and two fully-connected (the numbers of output nodes are 128 and 10,respectively) layers are used for MNIST. For the permutation-invariant MNIST setting, 784-512-256-256-128-10 nodes of fully-connected layers are used. Four convolution (5 5 (1) 64, 33 (2)64, 33 (2) 64, and 33 (2) 96) and three fully-connected (128, 128, and 10 nodes) layers are usedfor CIFAR-10. Weights on the decoding (reconstruction) path are tied with corresponding weightson the encoding path as shown in Figure 4 (transposed convolution for the tied convolution layerand transposed matrix multiplication for the tied fully-connected layer).In Figure 4, z0is perturbed directly from zby adding Gaussian random noise for random pertur-bation. For semantic perturbation, z0is indirectly generated from y0which is perturbed by addingGaussian random noise on ybased on Eq. (9). For perturbation, base activation vector ( zis the baseFigure 4: Target network architecture; 3 convolution and 2 fully-connected layers were used forMNIST, 5 fully-connected layers were used for permutation-invariant MNIST, and 4 convolutionand 3 fully-connected layers were used for CIFAR-10.6Under review as a conference paper at ICLR 2017Table 1: Error rate (%) on the test set using the model with the best performance on the validationset. Numbers on the first row of each sub-table are the number of randomly chosen per-class train-ing examples. The average performance and the standard deviation of three different random-splitdatasets (except for the case using the entire training set in the last column) are described in this table(error rate on each random set is summarized in Appendix (A3)). 
Performance of three previous ap-proaches (with gray background; previous-1, 2, 3 are feed-forward model Figure 2(a), joint learningmodel with recon-one Figure 2(b), joint learning model with recon-all Figure 2(b), respectively) andthe proposed methods (proposed-1, 2, 3 are baseline Figure 1(b), random perturbation Figure 1(c),semantic perturbation Figure 1(c), respectively) is summarized.dataset number of per-class examples chosen from 50k entire MNIST training examples entire setMNIST 10 20 50 100 200 500 1k 2k 50kprevious-1 24.55 (3.04) 16.00 (1.33) 10.35 (0.66) 6.58 (0.42) 4.71 (0.28) 2.94 (0.23) 1.90 (0.27) 1.45 (0.08) 1.04previous-2 21.67 (3.19) 13.60 (0.99) 7.85 (0.10) 5.44 (0.37) 4.14 (0.08) 2.50 (0.15) 1.84 (0.07) 1.45 (0.07) 1.12previous-3 20.11 (2.81) 13.69 (0.62) 9.15 (0.15) 6.77 (0.25) 5.39 (0.11) 3.89 (0.27) 2.91 (0.17) 2.28 (0.10) 1.87proposed-1 21.35 (1.16) 11.65 (1.15) 6.33 (0.10) 4.32 (0.31) 3.07 (0.11) 1.98 (0.11) 1.29 (0.09) 0.94 (0.02) 0.80proposed-2 20.17 (1.52) 11.68 (0.81) 6.24 (0.29) 4.12 (0.24) 3.04 (0.13) 1.88 (0.05) 1.24 (0.03) 0.96 (0.08) 0.65proposed-3 20.11 (0.81) 10.59 (0.74) 5.92 (0.12) 3.79 (0.23) 2.72 (0.09) 1.78 (0.05) 1.15 (0.01) 0.88 (0.03) 0.62dataset number of per-class examples chosen from 40k entire CIFAR-10 training examples entire setCIFAR-10 10 20 50 100 200 500 1k 2k 40kprevious-1 73.82 (1.43) 68.99 (0.54) 61.30 (0.83) 54.93 (0.56) 46.97 (0.59) 33.69 (0.43) 26.63 (0.39) 20.97 (0.09) 17.80previous-2 75.68 (1.56) 69.05 (1.13) 61.44 (0.63) 55.02 (0.34) 46.18 (0.51) 33.62 (0.38) 26.78 (0.48) 21.25 (0.40) 17.68previous-3 73.33 (1.06) 67.63 (0.56) 62.59 (0.76) 56.37 (0.20) 50.51 (0.61) 41.26 (0.73) 32.55 (1.20) 26.38 (0.08) 22.71proposed-1 71.63 (0.69) 66.17 (0.40) 58.91 (0.86) 52.65 (0.28) 43.46 (0.30) 31.86 (0.54) 25.76 (0.31) 21.06 (0.18) 17.45proposed-2 71.69 (0.25) 66.75 (0.54) 58.95 (0.63) 53.01 (0.26) 43.71 (0.19) 31.80 (0.18) 25.50 (0.33) 20.81 (0.27) 17.43proposed-3 71.50 (1.14) 66.87 (0.17) 58.30 (0.62) 52.32 (0.08) 42.98 (0.34) 30.91 (0.23) 24.81 (0.26) 20.19 (0.25) 16.16vector for the random perturbation and yis the base vector for the semantic perturbation) is scaled to[0.0, 1.0], and the zero-mean Gaussian noise with 0.2 of standard deviation is added (via element-wise addition) on the normalized base activation. This perturbed scaled activation is de-scaled withthe original min and max activations of the base vector.Initial learning rates are 0.005 and 0.001 for MNIST and permutation-invariant MNIST, and 0.002for CIFAR-10, respectively. The learning rates are decayed by a factor of 5 every 40 epochs until the120-th epoch. For both datasets, the minibatch size is set to 100, and the target objective is optimizedusing Adam optimizer (Kingma & Ba, 2015) with a momentum 0.9. All the ’s for reconstructionlosses in Eq. (11) and Eq. (12) are 0.03 and 0.01 for MNIST and CIFAR-10, respectively. The sameweighting factors for reconstruction losses (0.03 for MNIST and 0.01 for CIFAR-10) are used for1in Eq (8), and 1.0 is used for 2.Input data is first scaled to [0.0, 1.0] and then whitened by the average across all the training exam-ples. In CIFAR-10, random cropping (24 24 image is randomly cropped from the original 32 32image) and random horizontal flipping (mirroring) are used for data augmentation. We selectedthe network that performed best on the validation dataset for evaluation on the test dataset. 
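To make the perturbation procedure concrete, the following is a minimal sketch of Eq. (7)-(9) combined with the scaling described in this section (normalize the base activation to [0, 1], add zero-mean Gaussian noise with 0.2 standard deviation, then de-scale with the original min and max). It reuses the hypothetical JointModel sketch introduced earlier and is an illustration under those assumptions, not the authors' TensorFlow implementation; the function names are placeholders.

```python
# A sketch of the semantic perturbation: noise is added to the scaled output
# logit y, decoded back through g2 to estimate the semantic noise z_e (Eq. (9)),
# and z_e is added to the original latent vector z (Eq. (7)).
import torch

def perturb_scaled(v, std=0.2):
    """Scale v to [0, 1], add zero-mean Gaussian noise, then de-scale with the
    original min/max. For brevity, min/max are taken over the whole tensor,
    whereas the paper normalizes the base activation vector."""
    v_min, v_max = v.min(), v.max()
    v_scaled = (v - v_min) / (v_max - v_min + 1e-8)
    v_noisy = v_scaled + std * torch.randn_like(v_scaled)
    return v_noisy * (v_max - v_min) + v_min

def semantic_perturbation(model, x):
    z, y, _, _ = model(x)                  # z = f1(x), y = f2(z)
    y_prime = perturb_scaled(y)            # y' = y + y_e on the output (logit) space
    z_e = model.g2(y_prime) - model.g2(y)  # Eq. (9): z_e ~ g2(y + y_e) - g2(y)
    z_prime = z + z_e                      # Eq. (7): semantically perturbed latent vector
    return model.f2(z_prime)               # y_hat, output of the perturbed latent vector
```

For the 'proposed-perturb (random)' variant, perturb_scaled would instead be applied directly to z before feeding it to f2.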
All theexperiments are performed with TensorFlow (Abadi et al., 2015).4.3 Q UANTITATIVE ANALYSISThree previous approaches (a traditional feed-forward model, a joint learning model with the inputreconstruction loss, and a joint learning model with reconstruction losses of all the intermediatelayers including the input layer) are compared with the proposed methods (the baseline model inFigure 1(b), and the stochastic perturbation model in Figure 1(c) with two different perturbationmethods; random and semantic). We measure the classification performance according to varyingsizes of training set (examples randomly chosen from the original training dataset). Performance isaveraged over three different random trials.7Under review as a conference paper at ICLR 2017(a) (b) Figure 5: Examples reconstructed from the perturbed latent vectors via (a) random perturbation,and (b) semantic perturbation (top row shows the original training examples). More examples aresummarized in Appendix (A4.1).Table 1 summarizes the classification performance for MNIST and CIFAR-10. As we expected,the base model obtained by maximizing the sum of mutual informations ( proposed-base ) mostlyperforms better than previous approaches, and the model with the semantic perturbation ( proposed-perturb (semantic) ) performs best among all the comparison targets. Especially in MNIST, the errorrate of ‘ proposed-perturb (semantic) ’ with 2k per-class training examples is less than the error rateof all types of previous works with the entire training set (approximately 5k per-class examples).We further verify the proposed method on the permutation-invariant MNIST task with a standardfeed-forward neural network. Classification performance is measured against three different sizes oftraining set (1k, 2k, and 5k per-class training examples). ‘ Proposed-perturb (semantic) ’ achieves thebest performance among all the configurations; 2.57%, 1.82%, and 1.28% error rates for 1k, 2k, and5k per-class training examples, respectively. The joint learning model with the input reconstructionloss performs best among three previous approaches; 2.72%, 1.97%, and 1.38% error rates for 1k,2k, and 5k per-class training examples, respectively.4.4 Q UALITATIVE ANALYSISAs mentioned before, random perturbation by adding unstructured noise directly to the latent rep-resentation cannot guarantee preserving the semantics of the original representation. We com-pared two different perturbation methods (random and semantic) by visualizing the examples recon-structed from the perturbed latent vectors (Figure 5). Top row is the original examples selected fromtraining set (among 2k per-class training examples), and the rest are the reconstructions of their per-turbed latent representations. Based on the architecture described in Figure 1(b), we generated fivedifferent perturbed latent representations according to the type of perturbation, and reconstructedthe perturbed latent vectors through decoding path for reconstruction.Figure 5(a) and (b) show the examples reconstructed from the random and semantic perturbations,respectively. For both cases, zero-mean Gaussian random noise (0.2 standard deviation) is used forperturbation. As shown in Figure 5(a), random perturbation partially destroys the original semantics;for example, semantics of ‘1’ is mostly destroyed under random perturbation, and some examplesof ‘3’ are reconstructed as being similar to ‘8’ rather than its original content ‘3’. Figure 5(b)shows the examples reconstructed from the semantic perturbation. 
The reconstructed examples showsubtle semantic variations while preserving the original semantic contents; for example, thicknessdifference in ‘3’ (example on the third row) or writing style difference in ‘8’ (openness of the topleft corner).Figure 6 shows the overall effect of the perturbation. In this analysis, 100 per-class MNIST exam-ples are used for training. From the trained model based on the architecture described in Figure 1(b),latent representations zof all the 50k examples (among 50k examples, only 1k examples were usedfor training) are visualized by using t-SNE (Maaten & Hinton, 2008). Only the training examples ofthree classes (0, 1, and 9) among ten classes are depicted as black circles for visual discrimination in8Under review as a conference paper at ICLR 2017(a) 0123456789(b) (c) Figure 6: Training examples (circles or crosses with colors described below) over the examplesnot used for training (depicted as background with different colors); (a) training examples (blackcircles), (b) training examples (yellow circles) with 3 random-perturbed samples (blue crosses),and (c) training examples (yellow circles) with 3 semantic-perturbed samples (blue crosses). Bestviewed in color.Figure 6(a). The rest of the examples which were not used for training (approximately 4.9k exam-ples per class) are depicted as a background with different colors. We treat the colored backgroundexamples (not used for training) as a true distribution of unseen data in order to estimate the gener-alization level of learned representation according to the type of perturbation. Figure 6(b) and (c)show the training examples (100 examples per class with yellow circles) and their perturbed ones(3sampled from each example with blue crosses) through random and semantic perturbations,respectively.In Figure 6(b), perturbed samples are distributed near the original training examples, but some sam-ples outside the true distribution cannot be identified easily with appropriate classes. This can beexplained with Figure 5(a), since some perturbed samples are ambiguous semantically. In Fig-ure 6(c), however, most of the perturbed samples evenly cover the true distribution. As mentionedbefore, stochastic perturbation with the semantic additive noise during training implicitly incurs theeffect of augmentation on the latent space while resulting in better generalization. Per-class t-SNEresults are summarized in Appendix (A4.2).5 D ISCUSSIONWe introduced a novel latent space modeling method for supervised tasks based on the standardfeed-forward neural network architecture. The presented model simultaneously optimizes both su-pervised and unsupervised losses based on the assumption that the better latent representation canbe obtained by maximizing the sum of hierarchical mutual informations. Especially the stochas-tic perturbation process which is achieved by modeling the semantic additive noise during trainingenhances the representational power of the latent space. From the proposed semantic noise model-ingprocess, we can expect improvement of generalization performance in supervised learning withimplicit semantic augmentation effect on the latent space.The presented model architecture can be intuitively extended to semi-supervised learning becauseit is implemented as the joint optimization of supervised and unsupervised objectives. For semi-supervised learning, however, logical link between features learned from labelled and unlabelleddata needs to be considered additionally. 
We leave the extension of the presented approach to semi-supervised learning for the future.REFERENCESMart ́ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S.Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, AndrewHarp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, ManjunathKudlur, Josh Levenberg, Dan Man ́e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah,Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vin-9Under review as a conference paper at ICLR 2017cent Vanhoucke, Vijay Vasudevan, Fernanda Vi ́egas, Oriol Vinyals, Pete Warden, Martin Watten-berg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learningon heterogeneous systems, 2015. URL http://tensorflow.org/ . Software available fromtensorflow.org.Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointlylearning to align and translate. In International Conference on Learning Representations (ICLR) ,2015.Yoshua Bengio, Pascal Lamblin, Dan Popovici, Hugo Larochelle, et al. Greedy layer-wise trainingof deep networks. In Advances in Neural Information Processing Systems (NIPS) , 2007.Kyunghyun Cho and Xi Chen. Classifying and visualizing motion capture sequences using deepneural networks. In International Conference on Computer Vision Theory and Applications , 2014.Ronan Collobert and Jason Weston. A unified architecture for natural language processing: Deepneural networks with multitask learning. In International Conference on Machine Learning(ICML) , 2008.Ian Goodfellow, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Multi-prediction deep boltz-mann machines. In Advances in Neural Information Processing Systems (NIPS) , 2013.Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recur-rent neural networks. In International conference on acoustics, speech and signal processing ,2013.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog-nition. In Computer Vision and Pattern Recognition (CVPR) , 2016.Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly,Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networksfor acoustic modeling in speech recognition: The shared views of four research groups. SignalProcessing Magazine, IEEE , 29(6):82–97, 2012.Geoffrey E. Hinton, Simon Osindero, and Yee Whye Teh. A fast learning algorithm for deep beliefnets. Neural Computation , 18:1527–1554, 2006.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In InternationalConference on Learning Representations (ICLR) , 2015.Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convo-lutional neural networks. In Advances in Neural Information Processing Systems (NIPS) , 2012.Hugo Larochelle and Yoshua Bengio. Classification using discriminative restricted boltzmann ma-chines. In International Conference on Machine Learning (ICML) , 2008a.Hugo Larochelle and Yoshua Bengio. Classification using discriminative restricted boltzmann ma-chines. In International Conference on Machine Learning (ICML) , 2008b.Yann LeCun, L ́eon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied todocument recognition. Proceedings of the IEEE , 86(11):2278–2324, 1998.Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. 
Journal of MachineLearning Research (JMLR) , 9(Nov):2579–2605, 2008.Jonathan Masci, Ueli Meier, Dan Cires ̧an, and J ̈urgen Schmidhuber. Stacked convolutional auto-encoders for hierarchical feature extraction. In International Conference on Artificial NeuralNetworks , 2011.Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems(NIPS) , 2015.Ruslan Salakhutdinov and Geoffrey E Hinton. Deep boltzmann machines. In Artificial Intelligenceand Statistics Conference (AISTATS) , 2009.10Under review as a conference paper at ICLR 2017Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale imagerecognition. In International Conference on Learning Representations (ICLR) , 2015.Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol.Stacked denoising autoencoders: Learning useful representations in a deep network with a localdenoising criterion. Journal of Machine Learning Research (JMLR) , 11:3371–3408, 2010.Satosi Watanabe. Information theoretical analysis of multivariate correlation. IBM Journal of re-search and development , 4(1):66–82, 1960.Yuting Zhang, Kibok Lee, and Honglak Lee. Augmenting supervised neural networks with unsu-pervised objectives for large-scale image classification. In International Conference on MachineLearning (ICML) , 2016.Junbo Zhao, Michael Mathieu, Ross Goroshin, and Yann Lecun. Stacked what-where auto-encoders.InInternational Conference on Learning Representations (ICLR) , 2015.11Under review as a conference paper at ICLR 2017APPENDIX(A1) D ERIVATION OF RECONSTRUCTION ERRORS FROM CONDITIONAL ENTROPY TERMSExtended from Section 2. From the lower bound in Eq. (3), we consider the following optimizationproblem (refer to ‘ Section 2. From mutual information to autoencoders ’ in (Vincent et al., 2010)):maxf1;01;2;02gEq(X;Z;Y )[logq(XjZ)] +Eq(X;Z;Y )[logq(ZjY)]: (13)Here, we denote q(X;Z;Y )an unknown joint distribution. Note that ZandYare respectivelythe variables transformed from parametric mappings Z=f1(X)andY=f2(Z)(see Fig. 1).q(X;Z;Y )then can be reduced to q(X)fromq(ZjX;1) =(Zf1(X))andq(YjZ;2) =(Yf2(Z))wheredenotes Dirac-delta function.From the Kullback-Leibler divergence that DKL(qjjp)0for any two distributions pandq, theoptimization in Eq. (13) corresponds to the following optimization problem where p()denotes aparametric distribution:maxf1;01;2;02gEq(X)[logp(XjZ;01)] +Eq(X)[logp(ZjY;02)]: (14)By replacing q(X)with a sample distribution q0(X)and putting all parametric dependencies be-tweenX,ZandY, we will havemaxf1;01;2;02gEq0(X)[logp(XjZ=f1(X);01)] +Eq0(X)[logp(ZjY=f2(f1(X));02)]:(15)For a given input sample xofX, it is general to interpret xRandzRas the parameters of distributionsp(XjXR=xR)andp(ZjZR=zR)which reconstruct xandzwith high probability (i.e. xRandzRare not exact reconstructions of xandz). SincexRandzRare real-valued, we assume Gaussiandistribution for these conditional distributions, that is,p(XjXR=xR) =N(xR; 20I)p(ZjZR=zR) =N(zR; 20I):(16)The assumptions yield logp(j)/LL2(;).With the following relations for logterms in Eq. (15),p(XjZ=f1(x);01) =p(XjXR=g01(f1(x)))p(ZjY=f2(f1(x));02) =p(ZjZR=g02(f2(f1(x)));(17)the optimization problem in Eq. 
(15) corresponds to the minimization problem of reconstructionerrors for input examples x(i)as below:minf1;01;2;02gXiLL2(x(i);x(i)R) +LL2(z(i);z(i)R): (18)12Under review as a conference paper at ICLR 2017(A2) L ADDER NETWORK ,A REPRESENTATIVE SEMI -SUPERVISED LEARNING MODELExtended from Section 3. We performed experiments with a ladder network model (Rasmus et al.,2015) in order to estimate the performance on pure supervised tasks according to different sizes oftraining set. We used the code (https://github.com/rinuboney/ladder.git) for this experiment. Thenetwork architecture implemented on the source code is used as is; (784-1000-500-250-250-250-10). Based on the same network architecture, we implemented the proposed stochastic perturbationmodel described in Figure 1(c) and compared the classification performance with the ladder networkas described in Table 2 (we did not focus on searching the optimal hyperparameters for the proposedmodel in this experiment). As summarized in the bottom of the table (mean over 3 random trials),the proposed semantic noise modeling method shows a fairly large performance gain compared tothe ladder network model with small-scale datasets (e.g., in a case of 10 per-class training examples,the proposed method achieves 22.11% of error rate, while the ladder network shows 29.66%).Table 2: Classification performance (error rate in %) of the ladder network and the proposed modelon three different sets of randomly chosen training examples (MNIST).set No.1 (# training examples per class) 10 20 50 100 200 500 1k 2k (all) 5kladder network model; Figure 3 25.85 16.48 9.26 6.00 4.66 3.07 2.15 1.26 0.91proposed-perturb (semantic); Figure 1(c) 19.76 12.33 8.77 6.06 4.59 2.93 1.87 1.31 0.93set No.2 (# training examples per class) 10 20 50 100 200 500 1k 2kladder network model; Figure 3 33.14 17.46 10.44 6.67 4.43 2.82 1.94 1.37proposed-perturb (semantic); Figure 1(c) 23.36 15.35 9.43 5.75 4.43 2.99 1.87 1.39set No.3 (# training examples per class) 10 20 50 100 200 500 1k 2kladder network model; Figure 3 29.99 16.99 9.73 7.34 4.39 3.00 2.12 1.47proposed-perturb (semantic); Figure 1(c) 23.21 13.98 8.83 6.51 4.32 2.94 2.22 1.49mean over 3 random trials 10 20 50 100 200 500 1k 2k (all) 5kladder network model; Figure 3 29.66 16.98 9.81 6.67 4.49 2.96 2.07 1.37 0.91proposed-perturb (semantic); Figure 1(c) 22.11 13.89 9.01 6.11 4.45 2.95 1.99 1.40 0.9313Under review as a conference paper at ICLR 2017(A3) Q UANTITATIVE ANALYSISExtended from Section 4.3. Among the total 50k and 40k training examples in MNIST and CIFAR-10, we randomly select the examples for training. Classification performance according to threedifferent randomly chosen training sets are summarized in Table 3 (MNIST) and Table 4 (CIFAR-10). Further experiments with denoising constraints are also included. Zero-mean Gaussian randomnoise with 0.1 standard deviation is used for noise injection. 
Denoising function helps to achieveslightly better performance on MNIST, but it results in performance degradation on CIFAR-10 (wedid not focus on searching the optimal parameters for noise injection in this experiments).Table 3: Classification performance (error rate in %) on three different sets of randomly chosentraining examples (MNIST).Set No.1 (# train examples per class) 10 20 50 100 200 500 1k 2k (all) 5kfeed-forward model; Figure 2(a) 22.61 14.20 11.25 6.37 4.34 2.63 1.83 1.56 1.04joint learning model with recon-one; Figure 2(b) 18.69 12.21 7.84 5.17 4.02 2.58 1.79 1.47 1.12joint learning model with recon-one with denoising constraints 20.39 11.91 7.41 4.64 3.65 2.57 1.97 1.53 0.97joint learning model with recon-all; Figure 2(b) 18.82 12.82 9.34 6.43 5.23 4.12 2.68 2.42 1.87joint learning model with recon-all with denoising constraints 17.93 11.76 7.32 4.78 3.91 3.04 2.52 1.99 1.36proposed-base; Figure 1(b) 20.23 10.18 6.47 3.89 3.04 1.89 1.33 0.91 0.80proposed-base with denoising constraints 19.88 10.89 6.62 4.26 3.40 2.44 2.11 1.54 1.13proposed-perturb (random); Figure 1(c) 18.38 10.58 6.64 3.78 3.14 1.90 1.21 0.89 0.65proposed-perturb (semantic); Figure 1(c) 19.33 9.72 5.98 3.47 2.84 1.84 1.16 0.84 0.62Set No.2 (# train examples per class) 10 20 50 100 200 500 1k 2kfeed-forward model; Figure 2(a) 28.84 17.36 10.14 6.20 4.78 3.02 1.61 1.41joint learning model with recon-one; Figure 2(b) 26.09 14.40 7.98 5.18 4.17 2.29 1.94 1.52joint learning model with recon-one with denoising constraints 27.69 13.11 6.95 5.07 3.54 2.37 1.83 1.28joint learning model with recon-all; Figure 2(b) 24.01 14.13 8.98 6.84 5.44 3.51 2.98 2.18joint learning model with recon-all with denoising constraints 23.05 13.29 7.79 5.12 3.92 3.01 2.27 1.84proposed-base; Figure 1(b) 22.95 12.98 6.27 4.43 3.22 2.14 1.37 0.96proposed-base with denoising constraints 26.96 12.21 6.45 4.62 3.13 2.53 1.88 1.49proposed-perturb (random); Figure 1(c) 22.10 12.52 5.97 4.26 2.86 1.94 1.23 0.92proposed-perturb (semantic); Figure 1(c) 21.22 11.52 5.75 3.91 2.61 1.73 1.14 0.89Set No.3 (# train examples per class) 10 20 50 100 200 500 1k 2kfeed-forward model; Figure 2(a) 22.20 16.43 9.67 7.16 5.02 3.17 2.25 1.39joint learning model with recon-one; Figure 2(b) 20.23 14.19 7.73 5.96 4.22 2.62 1.79 1.35joint learning model with recon-one with denoising constraints 19.32 12.25 7.44 5.39 3.58 2.37 1.49 1.56joint learning model with recon-all; Figure 2(b) 17.51 14.12 9.12 7.04 5.49 4.05 3.08 2.25joint learning model with recon-all with denoising constraints 17.07 12.50 7.86 5.48 4.05 2.97 2.02 1.98proposed-base; Figure 1(b) 20.86 11.79 6.25 4.63 2.96 1.91 1.16 0.96proposed-base with denoising constraints 19.89 11.30 6.26 4.57 3.50 2.63 1.61 1.47proposed-perturb (random); Figure 1(c) 20.02 11.94 6.12 4.32 3.13 1.81 1.28 1.08proposed-perturb (semantic); Figure 1(c) 19.78 10.53 6.03 4.00 2.70 1.76 1.14 0.9214Under review as a conference paper at ICLR 2017Table 4: Classification performance (error rate in %) on three different sets of randomly chosentraining examples (CIFAR-10).Set No.1 (# train examples per class) 10 20 50 100 200 500 1k 2k (all) 4kfeed-forward model; Figure 2(a) 73.30 69.25 62.42 55.65 47.71 34.30 27.04 21.06 17.80joint learning model with recon-one; Figure 2(b) 75.19 70.38 62.25 55.30 46.89 34.12 26.63 21.05 17.68joint learning model with recon-one with denoising constraints 73.72 68.20 61.99 55.23 46.64 36.37 29.78 25.53 21.73joint learning model with recon-all; Figure 2(b) 74.79 68.33 62.92 56.24 51.37 40.30 30.91 
26.49 22.71joint learning model with recon-all with denoising constraints 76.56 69.67 64.53 57.88 52.74 42.24 36.90 30.93 27.41proposed-base; Figure 1(b) 70.79 66.57 59.91 52.98 43.29 32.25 26.19 20.92 17.45proposed-base with denoising constraints 71.03 67.49 60.37 53.52 44.28 33.40 28.00 25.06 21.34proposed-perturb (random); Figure 1(c) 71.89 67.12 59.22 52.79 43.87 31.82 25.04 20.97 17.43proposed-perturb (semantic); Figure 1(c) 71.59 66.90 58.64 52.34 42.74 30.94 24.45 20.10 16.16Set No.2 (# train examples per class) 10 20 50 100 200 500 1k 2kfeed-forward model; Figure 2(a) 72.39 69.49 60.45 54.85 46.91 33.39 26.73 21.00joint learning model with recon-one; Figure 2(b) 74.06 69.14 60.71 54.54 45.70 33.54 27.43 20.90joint learning model with recon-one with denoising constraints 76.40 69.33 60.28 55.38 47.40 36.29 29.31 24.60joint learning model with recon-all; Figure 2(b) 72.28 67.60 61.53 56.65 49.99 42.08 32.99 26.33joint learning model with recon-all with denoising constraints 73.90 69.23 61.90 57.99 52.35 45.12 37.23 30.14proposed-base; Figure 1(b) 72.49 65.62 57.82 52.66 43.20 32.24 25.60 21.32proposed-base with denoising constraints 72.99 66.75 57.78 53.81 44.33 33.56 28.40 25.03proposed-perturb (random); Figure 1(c) 71.84 65.98 58.08 53.37 43.44 31.56 25.69 21.03proposed-perturb (semantic); Figure 1(c) 72.85 66.65 57.44 52.21 42.74 31.17 24.99 20.54Set No.3 (# train examples per class) 10 20 50 100 200 500 1k 2kfeed-forward model; Figure 2(a) 75.78 68.24 61.02 54.29 46.28 33.38 26.11 20.85joint learning model with recon-one; Figure 2(b) 77.79 67.62 61.37 55.22 45.96 33.21 26.29 21.81joint learning model with recon-one with denoising constraints 76.60 69.27 61.13 55.10 47.50 37.12 29.63 24.88joint learning model with recon-all; Figure 2(b) 72.92 66.97 63.31 56.23 50.16 41.41 33.75 26.31joint learning model with recon-all with denoising constraints 76.83 68.53 65.58 58.29 52.43 45.42 39.01 32.32proposed-base; Figure 1(b) 71.60 66.31 58.99 52.30 43.88 31.10 25.48 20.95proposed-base with denoising constraints 72.39 67.20 60.60 52.64 44.62 33.52 28.01 25.25proposed-perturb (random); Figure 1(c) 71.34 67.15 59.55 52.86 43.81 32.01 25.78 20.42proposed-perturb (semantic); Figure 1(c) 70.06 67.07 58.83 52.41 43.47 30.61 25.00 19.9415Under review as a conference paper at ICLR 2017(A4.1) Q UALITATIVE ANALYSISExtended from Section 4.4. Figure 7 shows reconstructed examples from perturbed (random orsemantic) latent representations (refer to Figure 5 and the analysis described in Section 4.4).Example.1 random perturbation Example.1 semantic perturbation Example.2 random perturbation Example.2 semantic perturbation Figure 7: For each example, top row is the original examples selected from the training set, andthe rest are reconstructed from the perturbed representations via random (left) and semantic (right)perturbations.16Under review as a conference paper at ICLR 2017(A4.2) Q UALITATIVE ANALYSISExtended from Section 4.4. Figure 8 shows the t-SNE results per class on MNIST. The overalltendency is similar to the description in Section 4.4.17Under review as a conference paper at ICLR 2017Figure 8: From top to bottom: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. From left to right: training exam-ples (circle), training examples (circle) + random-perturbed samples (cross), and training examples(circle) + semantic-perturbed samples (cross). Best viewed in color.18
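To tie the method described above together, one training step for the objective in Eq. (8) might be sketched as follows. This builds on the hypothetical JointModel and semantic_perturbation sketches given earlier and is not the authors' released code; x is assumed to be a batch of flattened inputs, t the integer class labels, MSE and cross-entropy stand in for the L2 and negative log-likelihood losses, and lambda1/lambda2 follow the MNIST values reported in Section 4.2.

```python
# A sketch of one training step for Eq. (8): two reconstruction losses plus the
# supervised losses on the clean output y and the semantically perturbed output y_hat.
import torch.nn.functional as F

def training_step(model, optimizer, x, t, lambda1=0.03, lambda2=1.0):
    z, y, x_rec, z_rec = model(x)
    y_hat = semantic_perturbation(model, x)    # perturbed output via Eq. (7)/(9)
    loss = (lambda1 * (F.mse_loss(x_rec, x) + F.mse_loss(z_rec, z))
            + lambda2 * F.cross_entropy(y, t)  # L_NLL on the clean output
            + F.cross_entropy(y_hat, t))       # L_NLL on the perturbed output
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```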
rykoEh84g
SyCSsUDee
ICLR.cc/2017/conference/-/paper44/official/review
{"title": "unclear relation between the total correlation maximization idea and the actual training scheme based on local reconstructions", "rating": "3: Clear rejection", "review": "The paper presents a new regularization technique for neural networks, which seeks to maximize the correlation between input variables, latent variables, and outputs. This is achieved by defining a measure of total correlation between these variables and decomposing it in terms of entropies and conditional entropies.\n\nThe authors explain that they do not actually maximize the total correlation, but a lower bound of it that ignores the simple entropy terms and only considers the conditional entropies. The rationale for discarding these entropy terms is not clearly explained.\n\nEntropy measures apply to probability distributions (i.e., this implies that the variables in the model should be random). The link between the conditional entropy formulation and the reconstruction error is not made explicit. In order to link these two views, I would have expected, for example, a noise model for the units of the network.\n\nLater in the paper, it is claimed that the original ladder network is not suitable for supervised learning with small samples, and some empirical results seek to demonstrate this. A more theoretical explanation of why this is the case would have been welcome.\n\nThe MNIST results are shown for a particular convolutional neural network architecture; however, most ladder network results for this dataset have been produced on standard fully-connected architectures. Results for such an architecture would have been desirable for better comparability with the original ladder network results.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Semantic Noise Modeling for Better Representation Learning
["Hyo-Eun Kim", "Sangheum Hwang", "Kyunghyun Cho"]
Latent representation learned from multi-layered neural networks via hierarchical feature abstraction enables recent success of deep learning. Under the deep learning framework, generalization performance highly depends on the learned latent representation. In this work, we propose a novel latent space modeling method to learn better latent representation. We designed a neural network model based on the assumption that good base representation for supervised tasks can be attained by maximizing the sum of hierarchical mutual informations between the input, latent, and output variables. From this base model, we introduce a semantic noise modeling method which enables semantic perturbation on the latent space to enhance the representational power of learned latent feature. During training, latent vector representation can be stochastically perturbed by a modeled additive noise while preserving its original semantics. It implicitly brings the effect of semantic augmentation on the latent space. The proposed model can be easily learned by back-propagation with common gradient-based optimization algorithms. Experimental results show that the proposed method helps to achieve performance benefits against various previous approaches. We also provide the empirical analyses for the proposed latent space modeling method including t-SNE visualization.
["Deep learning", "Supervised Learning"]
https://openreview.net/forum?id=SyCSsUDee
https://openreview.net/pdf?id=SyCSsUDee
https://openreview.net/forum?id=SyCSsUDee&noteId=rykoEh84g
Under review as a conference paper at ICLR 2017SEMANTIC NOISE MODELING FORBETTER REPRESENTATION LEARNINGHyo-Eun Kimand Sangheum HwangLunit Inc.Seoul, South Koreafhekim, shwang g@lunit.ioKyunghyun ChoCourant Institute of Mathematical Sciences and Centre for Data ScienceNew York UniversityNew York, NY 10012, USAkyunghyun.cho@nyu.eduABSTRACTLatent representation learned from multi-layered neural networks via hierarchicalfeature abstraction enables recent success of deep learning. Under the deep learn-ing framework, generalization performance highly depends on the learned latentrepresentation. In this work, we propose a novel latent space modeling method tolearn better latent representation. We designed a neural network model based onthe assumption that good base representation for supervised tasks can be attainedby maximizing the sum of hierarchical mutual informations between the input,latent, and output variables. From this base model, we introduce a semantic noisemodeling method which enables semantic perturbation on the latent space to en-hance the representational power of learned latent feature. During training, latentvector representation can be stochastically perturbed by a modeled additive noisewhile preserving its original semantics. It implicitly brings the effect of semanticaugmentation on the latent space. The proposed model can be easily learned byback-propagation with common gradient-based optimization algorithms. Experi-mental results show that the proposed method helps to achieve performance ben-efits against various previous approaches. We also provide the empirical analysesfor the proposed latent space modeling method including t-SNE visualization.1 I NTRODUCTIONEnhancing the generalization performance against unseen data given some sample data is the mainobjective in machine learning. Under that point of view, deep learning has been achieved manybreakthroughs in several domains such as computer vision (Krizhevsky et al., 2012; Simonyan &Zisserman, 2015; He et al., 2016), natural language processing (Collobert & Weston, 2008; Bah-danau et al., 2015), and speech recognition (Hinton et al., 2012; Graves et al., 2013). Deep learningis basically realized on deep layered neural network architecture, and it learns appropriate task-specific latent representation based on given training data. Better latent representation learned fromtraining data results in better generalization over the future unseen data. Representation learningor latent space modeling becomes one of the key research topics in deep learning. During the pastdecade, researchers focused on unsupervised representation learning and achieved several remark-able landmarks on deep learning history (Vincent et al., 2010; Hinton et al., 2006; Salakhutdinov &Hinton, 2009). In terms of utilizing good base features for supervised learning, the base representa-tion learned from unsupervised learning can be a good solution for supervised tasks (Bengio et al.,2007; Masci et al., 2011).The definition of ‘good’ representation is, however, different according to target tasks. In unsuper-vised learning, a model is learned from unlabelled examples. 
Its main objective is to build a modelCorresponding author1Under review as a conference paper at ICLR 2017to estimate true data distribution given examples available for training, so the learned latent rep-resentation normally includes broadly-informative components of the raw input data (e.g., mutualinformation between the input and the latent variable can be maximized for this objective). In su-pervised learning, however, a model is learned from labelled examples. In the case of classification,a supervised model learns to discriminate input data in terms of the target task using correspond-ing labels. Latent representation is therefore obtained to maximize the performance on the targetsupervised tasks.Since the meaning of good representations vary according to target tasks (unsupervised or super-vised), pre-trained features from the unsupervised model are not be guaranteed to be useful forsubsequent supervised tasks. Instead of the two stage learning strategy (unsupervised pre-trainingfollowed by supervised fine-tuning), several works focused on a joint learning model which opti-mizes unsupervised and supervised objectives concurrently, resulting in better generalization per-formance (Goodfellow et al., 2013; Larochelle & Bengio, 2008a; Rasmus et al., 2015; Zhao et al.,2015; Zhang et al., 2016; Cho & Chen, 2014).In this work, we propose a novel latent space modeling method for supervised learning as an exten-sion of the joint learning approach. We define a good latent representation of standard feed-forwardneural networks under the basis of information theory. Then, we introduce a semantic noise model-ingmethod in order to enhance the generalization performance. The proposed method stochasticallyperturbs the latent representation of a training example by injecting a modeled semantic additivenoise. Since the additive noise is randomly sampled from a pre-defined probability distribution ev-ery training iteration, different latent vectors from a single training example can be fully utilizedduring training. The multiple different latent vectors produced from a single training example aresemantically similar under the proposed latent space modeling method, so we can expect semanticaugmentation effect on the latent space.Experiments are performed on two datasets; MNIST and CIFAR-10. The proposed model results inbetter classification performance compared to previous approaches through notable generalizationeffect (stochastically perturbed training examples well cover the distribution of unseen data).2 M ETHODOLOGYThe proposed method starts from the existing joint learning viewpoint. This section first explainsthe process of obtaining a good base representation for supervised learning which is the basis of theproposed latent space modeling method. And then, we will describe how the proposed semanticnoise modeling method perturbs the latent space while maintaining the original semantics.2.1 B ASE JOINT LEARNING MODELIn a traditional feed-forward neural network model (Figure 1(a)), output Yof input data Xis com-pared with its true label, and the error is propagated backward from top to bottom, which implicitlylearns a task-specific latent representation Zof the input X. 
As an extension of a joint learningapproach, an objective to be optimized can be described in general as below (Larochelle & Bengio,2008b):minLunsup +Lsup (1)whereLunsup andLsupare respectively an unsupervised loss and a supervised loss, and andare model parameters to be optimized during training and a loss weighting coefficient, respectively.In terms of modeling Lunsup in Eq. (1), we assume that good latent representation Zis attainedby maximizing the sum of hierarchical mutual informations between the input, latent, and outputvariables; i.e. the sum of the mutual information between the input Xand theZand the mutualinformation between the Zand the output Y. Each mutual information is decomposed into anentropy and a conditional entropy terms, so the sum of hierarchical mutual informations is expressedas follows:I(X;Z) +I(Z;Y) =H(X)H(XjZ) +H(Z)H(ZjY) (2)2Under review as a conference paper at ICLR 2017X Z YX Z YXR ZRX Z YXR ZR ZP YP(a) (b) (c) Figure 1: (a) Standard feed-forward neural network model, (b) feed-forward neural network modelwith reconstruction paths, and (c) feed-forward neural network model with reconstruction andstochastic perturbation paths.where I(;)is the mutual information between random variables, and H()andH(j)are the entropyand the conditional entropy of random variables, respectively. Note that the sum of those mutualinformations becomes equivalent to the total correlation of X,Z, andYunder the graphical structureof the general feed-forward model described in Figure 1(a); P(X;Z;Y ) =P(YjZ)P(ZjX)P(X).The total correlation is equal to the sum of all pairwise mutual informations (Watanabe, 1960).Our objective is to find the model parameters which maximize I(X;Z) +I(Z;Y). Since H(X)andH(Z)are non-negative, and H(X)is constant in this case, the lower bound on I(X;Z) +I(Z;Y)can be reduced to1:I(X;Z) +I(Z;Y)H(XjZ)H(ZjY): (3)It is known that maximizing H(XjZ)can be formulated as minimizing the reconstruction errorbetween the input x(i)(i-th example sampled from X) and its reconstruction x(i)Runder the generalaudo-encoder framework (Vincent et al., 2010). Since H(XjZ) +H(ZjY)is proportional to thesum of reconstruction errors of x(i)(with its reconstruction x(i)R) andz(i)(with its reconstructionz(i)R), the target objective can be expressed as follows (refer to Appendix (A1) for the details ofmathematical derivations):minXiLrec(x(i);x(i)R) +Lrec(z(i);z(i)R) (4)whereLrecis a reconstruction loss.Figure 1(b) shows the target model obtained from the assumption that good latent representation Zcan be obtained by maximizing the sum of hierarchical mutual informations. Given an input samplex, feed-forward vectors and their reconstructions are attained deterministically by:z=f1(x)y=f2(f1(x))xR=g01(z) =g01(f1(x))zR=g02(y) =g02(f2(f1(x)):(5)1Although H(Z)is an upper bound of H(ZjY),H(Z)is anyway affected by the process of H(ZjY)beingminimized in Eq. (3). In Section 4, we experimentally show that we can obtain good base model even from therelatively loose lower bound defined in Eq. (3).3Under review as a conference paper at ICLR 2017Given a set of training pairs ( x(i),t(i)) wherex(i)andt(i)are thei-th input example and its label,target objective in Eq. (1) under the model described in Figure 1(b) can be organized as below (withreal-valued input samples, L2 loss LL2is a proper choice for the reconstruction loss Lrec):min:f1;01;2;02gXiLL2(x(i);x(i)R) +LL2(z(i);z(i)R)+LNLL(y(i);t(i)) (6)whereLNLL is a negative log-likelihood loss for the target supervised task. Note that Eq. 
(6)represents the ‘ proposed-base ’ in our experiment (see Section 4.3).2.2 S EMANTIC NOISE MODELINGBased on the architecture shown in Figure 1(b) with the target objective in Eq. (6), we conjecturethat stochastic perturbation on the latent space during training helps to achieve better generalizationperformance for supervised tasks. Figure 1(c) shows this strategy which integrates the stochasticperturbation process during training. Suppose that ZPis a perturbed version of Z, andYPis anoutput which is feed-forwarded from ZP. Given a latent vector z=f1(x)from an input sample x,z0=z+zeand^y=f2(z0) (7)wherez0and^yare a perturbed latent vector and its output respectively, and zeis an additive noiseused in the perturbation process of z. Based on the architecture shown in Figure 1(c), target objectivecan be modified as:min:f1;01;2;02gXi1LL2(x(i);x(i)R) +LL2(z(i);z(i)R)+2LNLL(y(i);t(i)) +LNLL(^y(i);t(i)):(8)Using random additive noise directly on zeis the most intuitive approach (‘ proposed-perturb (ran-dom) ’ in Section 4.3). However, preserving the semantics of the original latent representation zcannot be guaranteed under the direct random perturbation on the latent space. While the latentspace is not directly interpretable in general, the output logit yof the latent representation zis inter-pretable, because the output logit is tightly coupled to the prediction of the target label. In order topreserve the semantics of the original latent representation after perturbation, we indirectly model asemantic noise on the latent space by adding small random noise directly on the output space.Based on the output (pre-softmax) logit y, the semantic-preserving variation of y(i.e.y0) can bemodeled by y0=y+ye, whereyeis a random noise vector stochastically sampled from a zero-mean Gaussian with small standard deviation ;N(0;2I). Now, the semantic perturbation z0canbe reconstructed from the random perturbation y0through the decoding path g02in Figure 1(c).From the original output logit yand the randomly perturbed output logit y0, semantic additive noisezeon the latent space can be approximately modeled as below:zR=g02(y)z0R=g02(y0) =g02(y+ye)ze'z0RzR=g02(y+ye)g02(y)(9)By using the modeled semantic additive noise zeand the original latent representation z, we canobtain the semantic perturbation z0as well as its output ^yvia Eq. (7) for our target objective Eq. (8).From the described semantic noise modeling process (‘ proposed-perturb (semantic) ’ in Section 4.3),we expect to achieve better representation on the latent space. The effect of the proposed model interms of learned latent representation will be explained in more detail in Section 4.4.4Under review as a conference paper at ICLR 2017(a) (b) Figure 2: Previous works for supervised learning; (a) traditional feed-forward model, and (b) jointlearning model with both supervised and unsupervised losses.3 R ELATED WORKSPrevious works on deep neural networks for supervised learning can be categorized into two types asshown in Figure 2; (a) a general feed-forward neural network model (LeCun et al., 1998; Krizhevskyet al., 2012; Simonyan & Zisserman, 2015; He et al., 2016), and (b) a joint learning model whichoptimizes unsupervised and supervised objectives at the same time (Zhao et al., 2015; Zhang et al.,2016; Cho & Chen, 2014). 
Here are the corresponding objective functions:min:f1;2gXiLNLL(y(i);t(i)) (10)min:f1;01;2gXiLL2(x(i);x(i)R) +LNLL(y(i);t(i)) (11)whereis a loss weighting coefficient between unsupervised and supervised losses.Since the feed-forward neural network model is normally implemented with multiple layers in adeep learning framework, the joint learning model can be sub-classified into two types according tothe type of reconstruction; reconstruction only with the input data x(Eq. (11)) and reconstructionwith all the intermediate features including the input data xas follows:minXi0@0LL2(x(i);x(i)R) +XjjLL2(h(i)j;h(i)jR) +LNLL(y(i);t(i))1A: (12)whereh(i)jandh(i)jRare thej-th hidden representation of the i-th training example and its reconstruc-tion.Another type of the joint learning model, a ladder network (Figure 3), was introduced for semi-supervised learning (Rasmus et al., 2015). The key concept of the ladder network is to obtainrobust features by learning de-noising functions ( g0) of the representations at every layer of themodel via reconstruction losses, and the supervised loss is combined with the reconstruction lossesin order to build the semi-supervised model. The ladder network achieved the best performance insemi-supervised tasks, but it is not appropriate for supervised tasks with small-scale training set (ex-perimental analysis for supervised learning on permutation-invariant MNIST is briefly summarized+ noise + noise Figure 3: Ladder network; a representative model for semi-supervised learning (Rasmus et al.,2015).5Under review as a conference paper at ICLR 2017in Appendix (A2)). The proposed model in this work can be extended to semi-supervised learning,but our main focus is to enhance the representational power on latent space given labelled data forsupervised learning. We leave the study for semi-supervised learning scenario based on the proposedmethodology as our future research.4 E XPERIMENTSFor quantitative analysis, we compare the proposed methodology with previous approaches de-scribed in Section 3; a traditional feed-forward supervised learning model and a joint learning modelwith two different types of reconstruction losses (reconstruction only with the first layer or with allthe intermediate layers including the first layer). The proposed methodology includes a baselinemodel in Figure 1(b) as well as a stochastic perturbation model in Figure 1(c). Especially in thestochastic perturbation model, we compare the random and semantic perturbations and present somequalitative analysis on the meaning of the proposed perturbation methodology.4.1 D ATASETSWe experiment with two public datasets; MNIST (including a permutation-invariant MNIST case)and CIFAR-10. MNIST (10 classes) consists of 50k, 10k, and 10k 28 28 gray-scale images fortraining, validation, and test datasets, respectively. CIFAR-10 (10 classes) consists of 50k and 10k3232 3-channel images for training and test sets, respectively. We split the 50k CIFAR-10 trainingimages into 40k and 10k for training and validation. Experiments are performed with differentsizes of training set (from 10 examples per class to the entire training set) in order to verify theeffectiveness of the proposed model in terms of generalization performance under varying sizes oftraining set.4.2 I MPLEMENTATIONFigure 4 shows the architecture of the neural network model used in this experiment. W’s areconvolution or fully-connected weights (biases are excluded for visual brevity). 
Three convolution(33 (2) 32, 33 (2) 64, 33 (2) 96, where each item means the filter kernel size and (stride)with the number of filters) and two fully-connected (the numbers of output nodes are 128 and 10,respectively) layers are used for MNIST. For the permutation-invariant MNIST setting, 784-512-256-256-128-10 nodes of fully-connected layers are used. Four convolution (5 5 (1) 64, 33 (2)64, 33 (2) 64, and 33 (2) 96) and three fully-connected (128, 128, and 10 nodes) layers are usedfor CIFAR-10. Weights on the decoding (reconstruction) path are tied with corresponding weightson the encoding path as shown in Figure 4 (transposed convolution for the tied convolution layerand transposed matrix multiplication for the tied fully-connected layer).In Figure 4, z0is perturbed directly from zby adding Gaussian random noise for random pertur-bation. For semantic perturbation, z0is indirectly generated from y0which is perturbed by addingGaussian random noise on ybased on Eq. (9). For perturbation, base activation vector ( zis the baseFigure 4: Target network architecture; 3 convolution and 2 fully-connected layers were used forMNIST, 5 fully-connected layers were used for permutation-invariant MNIST, and 4 convolutionand 3 fully-connected layers were used for CIFAR-10.6Under review as a conference paper at ICLR 2017Table 1: Error rate (%) on the test set using the model with the best performance on the validationset. Numbers on the first row of each sub-table are the number of randomly chosen per-class train-ing examples. The average performance and the standard deviation of three different random-splitdatasets (except for the case using the entire training set in the last column) are described in this table(error rate on each random set is summarized in Appendix (A3)). 
Performance of three previous ap-proaches (with gray background; previous-1, 2, 3 are feed-forward model Figure 2(a), joint learningmodel with recon-one Figure 2(b), joint learning model with recon-all Figure 2(b), respectively) andthe proposed methods (proposed-1, 2, 3 are baseline Figure 1(b), random perturbation Figure 1(c),semantic perturbation Figure 1(c), respectively) is summarized.dataset number of per-class examples chosen from 50k entire MNIST training examples entire setMNIST 10 20 50 100 200 500 1k 2k 50kprevious-1 24.55 (3.04) 16.00 (1.33) 10.35 (0.66) 6.58 (0.42) 4.71 (0.28) 2.94 (0.23) 1.90 (0.27) 1.45 (0.08) 1.04previous-2 21.67 (3.19) 13.60 (0.99) 7.85 (0.10) 5.44 (0.37) 4.14 (0.08) 2.50 (0.15) 1.84 (0.07) 1.45 (0.07) 1.12previous-3 20.11 (2.81) 13.69 (0.62) 9.15 (0.15) 6.77 (0.25) 5.39 (0.11) 3.89 (0.27) 2.91 (0.17) 2.28 (0.10) 1.87proposed-1 21.35 (1.16) 11.65 (1.15) 6.33 (0.10) 4.32 (0.31) 3.07 (0.11) 1.98 (0.11) 1.29 (0.09) 0.94 (0.02) 0.80proposed-2 20.17 (1.52) 11.68 (0.81) 6.24 (0.29) 4.12 (0.24) 3.04 (0.13) 1.88 (0.05) 1.24 (0.03) 0.96 (0.08) 0.65proposed-3 20.11 (0.81) 10.59 (0.74) 5.92 (0.12) 3.79 (0.23) 2.72 (0.09) 1.78 (0.05) 1.15 (0.01) 0.88 (0.03) 0.62dataset number of per-class examples chosen from 40k entire CIFAR-10 training examples entire setCIFAR-10 10 20 50 100 200 500 1k 2k 40kprevious-1 73.82 (1.43) 68.99 (0.54) 61.30 (0.83) 54.93 (0.56) 46.97 (0.59) 33.69 (0.43) 26.63 (0.39) 20.97 (0.09) 17.80previous-2 75.68 (1.56) 69.05 (1.13) 61.44 (0.63) 55.02 (0.34) 46.18 (0.51) 33.62 (0.38) 26.78 (0.48) 21.25 (0.40) 17.68previous-3 73.33 (1.06) 67.63 (0.56) 62.59 (0.76) 56.37 (0.20) 50.51 (0.61) 41.26 (0.73) 32.55 (1.20) 26.38 (0.08) 22.71proposed-1 71.63 (0.69) 66.17 (0.40) 58.91 (0.86) 52.65 (0.28) 43.46 (0.30) 31.86 (0.54) 25.76 (0.31) 21.06 (0.18) 17.45proposed-2 71.69 (0.25) 66.75 (0.54) 58.95 (0.63) 53.01 (0.26) 43.71 (0.19) 31.80 (0.18) 25.50 (0.33) 20.81 (0.27) 17.43proposed-3 71.50 (1.14) 66.87 (0.17) 58.30 (0.62) 52.32 (0.08) 42.98 (0.34) 30.91 (0.23) 24.81 (0.26) 20.19 (0.25) 16.16vector for the random perturbation and yis the base vector for the semantic perturbation) is scaled to[0.0, 1.0], and the zero-mean Gaussian noise with 0.2 of standard deviation is added (via element-wise addition) on the normalized base activation. This perturbed scaled activation is de-scaled withthe original min and max activations of the base vector.Initial learning rates are 0.005 and 0.001 for MNIST and permutation-invariant MNIST, and 0.002for CIFAR-10, respectively. The learning rates are decayed by a factor of 5 every 40 epochs until the120-th epoch. For both datasets, the minibatch size is set to 100, and the target objective is optimizedusing Adam optimizer (Kingma & Ba, 2015) with a momentum 0.9. All the ’s for reconstructionlosses in Eq. (11) and Eq. (12) are 0.03 and 0.01 for MNIST and CIFAR-10, respectively. The sameweighting factors for reconstruction losses (0.03 for MNIST and 0.01 for CIFAR-10) are used for1in Eq (8), and 1.0 is used for 2.Input data is first scaled to [0.0, 1.0] and then whitened by the average across all the training exam-ples. In CIFAR-10, random cropping (24 24 image is randomly cropped from the original 32 32image) and random horizontal flipping (mirroring) are used for data augmentation. We selectedthe network that performed best on the validation dataset for evaluation on the test dataset. 
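To make the perturbation step of Eq. (7)-(9) and the scaling procedure just described concrete, the following NumPy sketch walks through one example. It is a minimal illustration, not the authors' TensorFlow code: the mappings f2 and g2_dec, the tied weight matrix W2, and the layer sizes are hypothetical stand-ins for $f_{\theta_2}$ and $g_{\theta'_2}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the trained mappings of Figure 1(c):
# f2 plays the role of f_theta2 (latent z -> output logit y) and
# g2_dec plays the role of g_theta'2 (logit y -> latent reconstruction z_R),
# here implemented with a single tied weight matrix for brevity.
W2 = rng.normal(scale=0.1, size=(128, 10))

def f2(z):
    return z @ W2                    # pre-softmax logit y

def g2_dec(y):
    return y @ W2.T                  # reconstruction z_R of the latent vector

def add_scaled_noise(v, sigma=0.2):
    """Scale v to [0, 1], add zero-mean Gaussian noise, then de-scale with the
    original min and max, as described for the perturbation in Section 4.2."""
    lo, hi = v.min(), v.max()
    v01 = (v - lo) / (hi - lo + 1e-12)
    return (v01 + rng.normal(scale=sigma, size=v.shape)) * (hi - lo) + lo

z = rng.normal(size=128)             # latent vector z = f_theta1(x) for one example

# Random perturbation: noise added directly on the latent vector z.
z_rand = add_scaled_noise(z)

# Semantic perturbation, Eq. (9): perturb the logit y, decode both logits,
# and take the difference as the semantic additive noise z_e.
y = f2(z)
y_prime = add_scaled_noise(y)
z_e = g2_dec(y_prime) - g2_dec(y)
z_sem = z + z_e                      # Eq. (7): z' = z + z_e
y_hat = f2(z_sem)                    # prediction from the perturbed latent vector
```

During training this perturbation is drawn stochastically at each step, and the resulting $\hat{y}$ feeds the additional $\mathcal{L}_{NLL}$ term of Eq. (8).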
All theexperiments are performed with TensorFlow (Abadi et al., 2015).4.3 Q UANTITATIVE ANALYSISThree previous approaches (a traditional feed-forward model, a joint learning model with the inputreconstruction loss, and a joint learning model with reconstruction losses of all the intermediatelayers including the input layer) are compared with the proposed methods (the baseline model inFigure 1(b), and the stochastic perturbation model in Figure 1(c) with two different perturbationmethods; random and semantic). We measure the classification performance according to varyingsizes of training set (examples randomly chosen from the original training dataset). Performance isaveraged over three different random trials.7Under review as a conference paper at ICLR 2017(a) (b) Figure 5: Examples reconstructed from the perturbed latent vectors via (a) random perturbation,and (b) semantic perturbation (top row shows the original training examples). More examples aresummarized in Appendix (A4.1).Table 1 summarizes the classification performance for MNIST and CIFAR-10. As we expected,the base model obtained by maximizing the sum of mutual informations ( proposed-base ) mostlyperforms better than previous approaches, and the model with the semantic perturbation ( proposed-perturb (semantic) ) performs best among all the comparison targets. Especially in MNIST, the errorrate of ‘ proposed-perturb (semantic) ’ with 2k per-class training examples is less than the error rateof all types of previous works with the entire training set (approximately 5k per-class examples).We further verify the proposed method on the permutation-invariant MNIST task with a standardfeed-forward neural network. Classification performance is measured against three different sizes oftraining set (1k, 2k, and 5k per-class training examples). ‘ Proposed-perturb (semantic) ’ achieves thebest performance among all the configurations; 2.57%, 1.82%, and 1.28% error rates for 1k, 2k, and5k per-class training examples, respectively. The joint learning model with the input reconstructionloss performs best among three previous approaches; 2.72%, 1.97%, and 1.38% error rates for 1k,2k, and 5k per-class training examples, respectively.4.4 Q UALITATIVE ANALYSISAs mentioned before, random perturbation by adding unstructured noise directly to the latent rep-resentation cannot guarantee preserving the semantics of the original representation. We com-pared two different perturbation methods (random and semantic) by visualizing the examples recon-structed from the perturbed latent vectors (Figure 5). Top row is the original examples selected fromtraining set (among 2k per-class training examples), and the rest are the reconstructions of their per-turbed latent representations. Based on the architecture described in Figure 1(b), we generated fivedifferent perturbed latent representations according to the type of perturbation, and reconstructedthe perturbed latent vectors through decoding path for reconstruction.Figure 5(a) and (b) show the examples reconstructed from the random and semantic perturbations,respectively. For both cases, zero-mean Gaussian random noise (0.2 standard deviation) is used forperturbation. As shown in Figure 5(a), random perturbation partially destroys the original semantics;for example, semantics of ‘1’ is mostly destroyed under random perturbation, and some examplesof ‘3’ are reconstructed as being similar to ‘8’ rather than its original content ‘3’. Figure 5(b)shows the examples reconstructed from the semantic perturbation. 
The reconstructed examples showsubtle semantic variations while preserving the original semantic contents; for example, thicknessdifference in ‘3’ (example on the third row) or writing style difference in ‘8’ (openness of the topleft corner).Figure 6 shows the overall effect of the perturbation. In this analysis, 100 per-class MNIST exam-ples are used for training. From the trained model based on the architecture described in Figure 1(b),latent representations zof all the 50k examples (among 50k examples, only 1k examples were usedfor training) are visualized by using t-SNE (Maaten & Hinton, 2008). Only the training examples ofthree classes (0, 1, and 9) among ten classes are depicted as black circles for visual discrimination in8Under review as a conference paper at ICLR 2017(a) 0123456789(b) (c) Figure 6: Training examples (circles or crosses with colors described below) over the examplesnot used for training (depicted as background with different colors); (a) training examples (blackcircles), (b) training examples (yellow circles) with 3 random-perturbed samples (blue crosses),and (c) training examples (yellow circles) with 3 semantic-perturbed samples (blue crosses). Bestviewed in color.Figure 6(a). The rest of the examples which were not used for training (approximately 4.9k exam-ples per class) are depicted as a background with different colors. We treat the colored backgroundexamples (not used for training) as a true distribution of unseen data in order to estimate the gener-alization level of learned representation according to the type of perturbation. Figure 6(b) and (c)show the training examples (100 examples per class with yellow circles) and their perturbed ones(3sampled from each example with blue crosses) through random and semantic perturbations,respectively.In Figure 6(b), perturbed samples are distributed near the original training examples, but some sam-ples outside the true distribution cannot be identified easily with appropriate classes. This can beexplained with Figure 5(a), since some perturbed samples are ambiguous semantically. In Fig-ure 6(c), however, most of the perturbed samples evenly cover the true distribution. As mentionedbefore, stochastic perturbation with the semantic additive noise during training implicitly incurs theeffect of augmentation on the latent space while resulting in better generalization. Per-class t-SNEresults are summarized in Appendix (A4.2).5 D ISCUSSIONWe introduced a novel latent space modeling method for supervised tasks based on the standardfeed-forward neural network architecture. The presented model simultaneously optimizes both su-pervised and unsupervised losses based on the assumption that the better latent representation canbe obtained by maximizing the sum of hierarchical mutual informations. Especially the stochas-tic perturbation process which is achieved by modeling the semantic additive noise during trainingenhances the representational power of the latent space. From the proposed semantic noise model-ingprocess, we can expect improvement of generalization performance in supervised learning withimplicit semantic augmentation effect on the latent space.The presented model architecture can be intuitively extended to semi-supervised learning becauseit is implemented as the joint optimization of supervised and unsupervised objectives. For semi-supervised learning, however, logical link between features learned from labelled and unlabelleddata needs to be considered additionally. 
We leave the extension of the presented approach to semi-supervised learning for the future.REFERENCESMart ́ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S.Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, AndrewHarp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, ManjunathKudlur, Josh Levenberg, Dan Man ́e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah,Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vin-9Under review as a conference paper at ICLR 2017cent Vanhoucke, Vijay Vasudevan, Fernanda Vi ́egas, Oriol Vinyals, Pete Warden, Martin Watten-berg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learningon heterogeneous systems, 2015. URL http://tensorflow.org/ . Software available fromtensorflow.org.Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointlylearning to align and translate. In International Conference on Learning Representations (ICLR) ,2015.Yoshua Bengio, Pascal Lamblin, Dan Popovici, Hugo Larochelle, et al. Greedy layer-wise trainingof deep networks. In Advances in Neural Information Processing Systems (NIPS) , 2007.Kyunghyun Cho and Xi Chen. Classifying and visualizing motion capture sequences using deepneural networks. In International Conference on Computer Vision Theory and Applications , 2014.Ronan Collobert and Jason Weston. A unified architecture for natural language processing: Deepneural networks with multitask learning. In International Conference on Machine Learning(ICML) , 2008.Ian Goodfellow, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Multi-prediction deep boltz-mann machines. In Advances in Neural Information Processing Systems (NIPS) , 2013.Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recur-rent neural networks. In International conference on acoustics, speech and signal processing ,2013.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog-nition. In Computer Vision and Pattern Recognition (CVPR) , 2016.Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly,Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networksfor acoustic modeling in speech recognition: The shared views of four research groups. SignalProcessing Magazine, IEEE , 29(6):82–97, 2012.Geoffrey E. Hinton, Simon Osindero, and Yee Whye Teh. A fast learning algorithm for deep beliefnets. Neural Computation , 18:1527–1554, 2006.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In InternationalConference on Learning Representations (ICLR) , 2015.Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convo-lutional neural networks. In Advances in Neural Information Processing Systems (NIPS) , 2012.Hugo Larochelle and Yoshua Bengio. Classification using discriminative restricted boltzmann ma-chines. In International Conference on Machine Learning (ICML) , 2008a.Hugo Larochelle and Yoshua Bengio. Classification using discriminative restricted boltzmann ma-chines. In International Conference on Machine Learning (ICML) , 2008b.Yann LeCun, L ́eon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied todocument recognition. Proceedings of the IEEE , 86(11):2278–2324, 1998.Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. 
Journal of MachineLearning Research (JMLR) , 9(Nov):2579–2605, 2008.Jonathan Masci, Ueli Meier, Dan Cires ̧an, and J ̈urgen Schmidhuber. Stacked convolutional auto-encoders for hierarchical feature extraction. In International Conference on Artificial NeuralNetworks , 2011.Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems(NIPS) , 2015.Ruslan Salakhutdinov and Geoffrey E Hinton. Deep boltzmann machines. In Artificial Intelligenceand Statistics Conference (AISTATS) , 2009.10Under review as a conference paper at ICLR 2017Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale imagerecognition. In International Conference on Learning Representations (ICLR) , 2015.Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol.Stacked denoising autoencoders: Learning useful representations in a deep network with a localdenoising criterion. Journal of Machine Learning Research (JMLR) , 11:3371–3408, 2010.Satosi Watanabe. Information theoretical analysis of multivariate correlation. IBM Journal of re-search and development , 4(1):66–82, 1960.Yuting Zhang, Kibok Lee, and Honglak Lee. Augmenting supervised neural networks with unsu-pervised objectives for large-scale image classification. In International Conference on MachineLearning (ICML) , 2016.Junbo Zhao, Michael Mathieu, Ross Goroshin, and Yann Lecun. Stacked what-where auto-encoders.InInternational Conference on Learning Representations (ICLR) , 2015.11Under review as a conference paper at ICLR 2017APPENDIX(A1) D ERIVATION OF RECONSTRUCTION ERRORS FROM CONDITIONAL ENTROPY TERMSExtended from Section 2. From the lower bound in Eq. (3), we consider the following optimizationproblem (refer to ‘ Section 2. From mutual information to autoencoders ’ in (Vincent et al., 2010)):maxf1;01;2;02gEq(X;Z;Y )[logq(XjZ)] +Eq(X;Z;Y )[logq(ZjY)]: (13)Here, we denote q(X;Z;Y )an unknown joint distribution. Note that ZandYare respectivelythe variables transformed from parametric mappings Z=f1(X)andY=f2(Z)(see Fig. 1).q(X;Z;Y )then can be reduced to q(X)fromq(ZjX;1) =(Zf1(X))andq(YjZ;2) =(Yf2(Z))wheredenotes Dirac-delta function.From the Kullback-Leibler divergence that DKL(qjjp)0for any two distributions pandq, theoptimization in Eq. (13) corresponds to the following optimization problem where p()denotes aparametric distribution:maxf1;01;2;02gEq(X)[logp(XjZ;01)] +Eq(X)[logp(ZjY;02)]: (14)By replacing q(X)with a sample distribution q0(X)and putting all parametric dependencies be-tweenX,ZandY, we will havemaxf1;01;2;02gEq0(X)[logp(XjZ=f1(X);01)] +Eq0(X)[logp(ZjY=f2(f1(X));02)]:(15)For a given input sample xofX, it is general to interpret xRandzRas the parameters of distributionsp(XjXR=xR)andp(ZjZR=zR)which reconstruct xandzwith high probability (i.e. xRandzRare not exact reconstructions of xandz). SincexRandzRare real-valued, we assume Gaussiandistribution for these conditional distributions, that is,p(XjXR=xR) =N(xR; 20I)p(ZjZR=zR) =N(zR; 20I):(16)The assumptions yield logp(j)/LL2(;).With the following relations for logterms in Eq. (15),p(XjZ=f1(x);01) =p(XjXR=g01(f1(x)))p(ZjY=f2(f1(x));02) =p(ZjZR=g02(f2(f1(x)));(17)the optimization problem in Eq. 
(15) corresponds to the minimization problem of reconstructionerrors for input examples x(i)as below:minf1;01;2;02gXiLL2(x(i);x(i)R) +LL2(z(i);z(i)R): (18)12Under review as a conference paper at ICLR 2017(A2) L ADDER NETWORK ,A REPRESENTATIVE SEMI -SUPERVISED LEARNING MODELExtended from Section 3. We performed experiments with a ladder network model (Rasmus et al.,2015) in order to estimate the performance on pure supervised tasks according to different sizes oftraining set. We used the code (https://github.com/rinuboney/ladder.git) for this experiment. Thenetwork architecture implemented on the source code is used as is; (784-1000-500-250-250-250-10). Based on the same network architecture, we implemented the proposed stochastic perturbationmodel described in Figure 1(c) and compared the classification performance with the ladder networkas described in Table 2 (we did not focus on searching the optimal hyperparameters for the proposedmodel in this experiment). As summarized in the bottom of the table (mean over 3 random trials),the proposed semantic noise modeling method shows a fairly large performance gain compared tothe ladder network model with small-scale datasets (e.g., in a case of 10 per-class training examples,the proposed method achieves 22.11% of error rate, while the ladder network shows 29.66%).Table 2: Classification performance (error rate in %) of the ladder network and the proposed modelon three different sets of randomly chosen training examples (MNIST).set No.1 (# training examples per class) 10 20 50 100 200 500 1k 2k (all) 5kladder network model; Figure 3 25.85 16.48 9.26 6.00 4.66 3.07 2.15 1.26 0.91proposed-perturb (semantic); Figure 1(c) 19.76 12.33 8.77 6.06 4.59 2.93 1.87 1.31 0.93set No.2 (# training examples per class) 10 20 50 100 200 500 1k 2kladder network model; Figure 3 33.14 17.46 10.44 6.67 4.43 2.82 1.94 1.37proposed-perturb (semantic); Figure 1(c) 23.36 15.35 9.43 5.75 4.43 2.99 1.87 1.39set No.3 (# training examples per class) 10 20 50 100 200 500 1k 2kladder network model; Figure 3 29.99 16.99 9.73 7.34 4.39 3.00 2.12 1.47proposed-perturb (semantic); Figure 1(c) 23.21 13.98 8.83 6.51 4.32 2.94 2.22 1.49mean over 3 random trials 10 20 50 100 200 500 1k 2k (all) 5kladder network model; Figure 3 29.66 16.98 9.81 6.67 4.49 2.96 2.07 1.37 0.91proposed-perturb (semantic); Figure 1(c) 22.11 13.89 9.01 6.11 4.45 2.95 1.99 1.40 0.9313Under review as a conference paper at ICLR 2017(A3) Q UANTITATIVE ANALYSISExtended from Section 4.3. Among the total 50k and 40k training examples in MNIST and CIFAR-10, we randomly select the examples for training. Classification performance according to threedifferent randomly chosen training sets are summarized in Table 3 (MNIST) and Table 4 (CIFAR-10). Further experiments with denoising constraints are also included. Zero-mean Gaussian randomnoise with 0.1 standard deviation is used for noise injection. 
Denoising function helps to achieveslightly better performance on MNIST, but it results in performance degradation on CIFAR-10 (wedid not focus on searching the optimal parameters for noise injection in this experiments).Table 3: Classification performance (error rate in %) on three different sets of randomly chosentraining examples (MNIST).Set No.1 (# train examples per class) 10 20 50 100 200 500 1k 2k (all) 5kfeed-forward model; Figure 2(a) 22.61 14.20 11.25 6.37 4.34 2.63 1.83 1.56 1.04joint learning model with recon-one; Figure 2(b) 18.69 12.21 7.84 5.17 4.02 2.58 1.79 1.47 1.12joint learning model with recon-one with denoising constraints 20.39 11.91 7.41 4.64 3.65 2.57 1.97 1.53 0.97joint learning model with recon-all; Figure 2(b) 18.82 12.82 9.34 6.43 5.23 4.12 2.68 2.42 1.87joint learning model with recon-all with denoising constraints 17.93 11.76 7.32 4.78 3.91 3.04 2.52 1.99 1.36proposed-base; Figure 1(b) 20.23 10.18 6.47 3.89 3.04 1.89 1.33 0.91 0.80proposed-base with denoising constraints 19.88 10.89 6.62 4.26 3.40 2.44 2.11 1.54 1.13proposed-perturb (random); Figure 1(c) 18.38 10.58 6.64 3.78 3.14 1.90 1.21 0.89 0.65proposed-perturb (semantic); Figure 1(c) 19.33 9.72 5.98 3.47 2.84 1.84 1.16 0.84 0.62Set No.2 (# train examples per class) 10 20 50 100 200 500 1k 2kfeed-forward model; Figure 2(a) 28.84 17.36 10.14 6.20 4.78 3.02 1.61 1.41joint learning model with recon-one; Figure 2(b) 26.09 14.40 7.98 5.18 4.17 2.29 1.94 1.52joint learning model with recon-one with denoising constraints 27.69 13.11 6.95 5.07 3.54 2.37 1.83 1.28joint learning model with recon-all; Figure 2(b) 24.01 14.13 8.98 6.84 5.44 3.51 2.98 2.18joint learning model with recon-all with denoising constraints 23.05 13.29 7.79 5.12 3.92 3.01 2.27 1.84proposed-base; Figure 1(b) 22.95 12.98 6.27 4.43 3.22 2.14 1.37 0.96proposed-base with denoising constraints 26.96 12.21 6.45 4.62 3.13 2.53 1.88 1.49proposed-perturb (random); Figure 1(c) 22.10 12.52 5.97 4.26 2.86 1.94 1.23 0.92proposed-perturb (semantic); Figure 1(c) 21.22 11.52 5.75 3.91 2.61 1.73 1.14 0.89Set No.3 (# train examples per class) 10 20 50 100 200 500 1k 2kfeed-forward model; Figure 2(a) 22.20 16.43 9.67 7.16 5.02 3.17 2.25 1.39joint learning model with recon-one; Figure 2(b) 20.23 14.19 7.73 5.96 4.22 2.62 1.79 1.35joint learning model with recon-one with denoising constraints 19.32 12.25 7.44 5.39 3.58 2.37 1.49 1.56joint learning model with recon-all; Figure 2(b) 17.51 14.12 9.12 7.04 5.49 4.05 3.08 2.25joint learning model with recon-all with denoising constraints 17.07 12.50 7.86 5.48 4.05 2.97 2.02 1.98proposed-base; Figure 1(b) 20.86 11.79 6.25 4.63 2.96 1.91 1.16 0.96proposed-base with denoising constraints 19.89 11.30 6.26 4.57 3.50 2.63 1.61 1.47proposed-perturb (random); Figure 1(c) 20.02 11.94 6.12 4.32 3.13 1.81 1.28 1.08proposed-perturb (semantic); Figure 1(c) 19.78 10.53 6.03 4.00 2.70 1.76 1.14 0.9214Under review as a conference paper at ICLR 2017Table 4: Classification performance (error rate in %) on three different sets of randomly chosentraining examples (CIFAR-10).Set No.1 (# train examples per class) 10 20 50 100 200 500 1k 2k (all) 4kfeed-forward model; Figure 2(a) 73.30 69.25 62.42 55.65 47.71 34.30 27.04 21.06 17.80joint learning model with recon-one; Figure 2(b) 75.19 70.38 62.25 55.30 46.89 34.12 26.63 21.05 17.68joint learning model with recon-one with denoising constraints 73.72 68.20 61.99 55.23 46.64 36.37 29.78 25.53 21.73joint learning model with recon-all; Figure 2(b) 74.79 68.33 62.92 56.24 51.37 40.30 30.91 
26.49 22.71joint learning model with recon-all with denoising constraints 76.56 69.67 64.53 57.88 52.74 42.24 36.90 30.93 27.41proposed-base; Figure 1(b) 70.79 66.57 59.91 52.98 43.29 32.25 26.19 20.92 17.45proposed-base with denoising constraints 71.03 67.49 60.37 53.52 44.28 33.40 28.00 25.06 21.34proposed-perturb (random); Figure 1(c) 71.89 67.12 59.22 52.79 43.87 31.82 25.04 20.97 17.43proposed-perturb (semantic); Figure 1(c) 71.59 66.90 58.64 52.34 42.74 30.94 24.45 20.10 16.16Set No.2 (# train examples per class) 10 20 50 100 200 500 1k 2kfeed-forward model; Figure 2(a) 72.39 69.49 60.45 54.85 46.91 33.39 26.73 21.00joint learning model with recon-one; Figure 2(b) 74.06 69.14 60.71 54.54 45.70 33.54 27.43 20.90joint learning model with recon-one with denoising constraints 76.40 69.33 60.28 55.38 47.40 36.29 29.31 24.60joint learning model with recon-all; Figure 2(b) 72.28 67.60 61.53 56.65 49.99 42.08 32.99 26.33joint learning model with recon-all with denoising constraints 73.90 69.23 61.90 57.99 52.35 45.12 37.23 30.14proposed-base; Figure 1(b) 72.49 65.62 57.82 52.66 43.20 32.24 25.60 21.32proposed-base with denoising constraints 72.99 66.75 57.78 53.81 44.33 33.56 28.40 25.03proposed-perturb (random); Figure 1(c) 71.84 65.98 58.08 53.37 43.44 31.56 25.69 21.03proposed-perturb (semantic); Figure 1(c) 72.85 66.65 57.44 52.21 42.74 31.17 24.99 20.54Set No.3 (# train examples per class) 10 20 50 100 200 500 1k 2kfeed-forward model; Figure 2(a) 75.78 68.24 61.02 54.29 46.28 33.38 26.11 20.85joint learning model with recon-one; Figure 2(b) 77.79 67.62 61.37 55.22 45.96 33.21 26.29 21.81joint learning model with recon-one with denoising constraints 76.60 69.27 61.13 55.10 47.50 37.12 29.63 24.88joint learning model with recon-all; Figure 2(b) 72.92 66.97 63.31 56.23 50.16 41.41 33.75 26.31joint learning model with recon-all with denoising constraints 76.83 68.53 65.58 58.29 52.43 45.42 39.01 32.32proposed-base; Figure 1(b) 71.60 66.31 58.99 52.30 43.88 31.10 25.48 20.95proposed-base with denoising constraints 72.39 67.20 60.60 52.64 44.62 33.52 28.01 25.25proposed-perturb (random); Figure 1(c) 71.34 67.15 59.55 52.86 43.81 32.01 25.78 20.42proposed-perturb (semantic); Figure 1(c) 70.06 67.07 58.83 52.41 43.47 30.61 25.00 19.9415Under review as a conference paper at ICLR 2017(A4.1) Q UALITATIVE ANALYSISExtended from Section 4.4. Figure 7 shows reconstructed examples from perturbed (random orsemantic) latent representations (refer to Figure 5 and the analysis described in Section 4.4).Example.1 random perturbation Example.1 semantic perturbation Example.2 random perturbation Example.2 semantic perturbation Figure 7: For each example, top row is the original examples selected from the training set, andthe rest are reconstructed from the perturbed representations via random (left) and semantic (right)perturbations.16Under review as a conference paper at ICLR 2017(A4.2) Q UALITATIVE ANALYSISExtended from Section 4.4. Figure 8 shows the t-SNE results per class on MNIST. The overalltendency is similar to the description in Section 4.4.17Under review as a conference paper at ICLR 2017Figure 8: From top to bottom: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. From left to right: training exam-ples (circle), training examples (circle) + random-perturbed samples (cross), and training examples(circle) + semantic-perturbed samples (cross). Best viewed in color.18
HJe8ExgVe
SJc1hL5ee
ICLR.cc/2017/conference/-/paper324/official/review
{"title": "Lossy compression techniques applied to FastText with nice results", "rating": "6: Marginally above acceptance threshold", "review": "This paper describes how to approximate the FastText approach such that its memory footprint is reduced by several orders of magnitude, while preserving its classification accuracy. The original FastText approach was based on a linear classifier on top of bag-of-words embeddings. This type of method is extremely fast to train and test, but the model size can be quite large.\n\nThis paper focuses on approximating the original approach with lossy compression techniques. Namely, the embeddings and classifier matrices A and B are compressed with Product Quantization, and an aggressive dictionary pruning is carried out. Experiments on various datasets (either with small or large number of classes) are conducted to tune the parameters and demonstrate the effectiveness of the approach. With a negligible loss in classification accuracy, an important reduction in term of model size (memory footprint) can be achieved, in the order of 100~1000 folds compared to the original size.\n\nThe paper is well written overall. The goal is clearly defined and well carried out, as well as the experiments. Different options for compressing the model data are evaluated and compared (e.g. PQ vs LSH), which is also interesting. Nevertheless the paper does not propose by itself any novel idea for text classification. It just focuses on adapting existing lossy compression techniques, which is not necessarily a problem. Specifically, it introduces:\n - a straightforward variant of PQ for unnormalized vectors,\n - dictionary pruning is cast as a set covering problem (which is NP-hard), but a greedy approach is shown to yield excellent results nonetheless,\n - hashing tricks and bloom filter are simply borrowed from previous papers.\n\nThese techniques are quite generic and could as well be used in other works. \n\n\nHere are some minor problems with the paper:\n\n - it is not made clear how the full model size is computed. What is exactly in the model? Which proportion of the full size do the A and B matrices, the dictionary, and the rest, account for? It is hard to follow where is the size bottleneck, which also seems to depend on the target application (i.e. small or large number of test classes). It would have been nice to provide a formula to calculate the total model size as a function of all parameters (k,b for PQ and K for dictionary, number of classes).\n \n - some parts lack clarity. For instance, the greedy approach to prune the dictionary is exposed in less than 4 lines (top of page 5), though it is far from being straightforward. Likewise, it is not clear why the binary search used for the hashing trick would introduce an overhead of a few hundreds of KB.\n \n\nOverall this looks like a solid work, but with potentially limited impact research-wise.\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
FastText.zip: Compressing text classification models
["Armand Joulin", "Edouard Grave", "Piotr Bojanowski", "Matthijs Douze", "Herve Jegou", "Tomas Mikolov"]
We consider the problem of producing compact architectures for text classification, such that the full model fits in a limited amount of memory. After considering different solutions inspired by the hashing literature, we propose a method built upon product quantization to store the word embeddings. While the original technique leads to a loss in accuracy, we adapt this method to circumvent the quantization artifacts. As a result, our approach produces a text classifier, derived from the fastText approach, which at test time requires only a fraction of the memory compared to the original one, without noticeably sacrificing the quality in terms of classification accuracy. Our experiments carried out on several benchmarks show that our approach typically requires two orders of magnitude less memory than fastText while being only slightly inferior with respect to accuracy. As a result, it outperforms the state of the art by a good margin in terms of the compromise between memory usage and accuracy.
["Natural language processing", "Supervised Learning", "Applications"]
https://openreview.net/forum?id=SJc1hL5ee
https://openreview.net/pdf?id=SJc1hL5ee
https://openreview.net/forum?id=SJc1hL5ee&noteId=HJe8ExgVe
Under review as a conference paper at ICLR 2017FASTTEXT.ZIP:COMPRESSING TEXT CLASSIFICATION MODELSArmand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Herv ́e J ́egou & Tomas MikolovFacebook AI Researchfajoulin,egrave,bojanowski,matthijs,rvj,tmikolov g@fb.comABSTRACTWe consider the problem of producing compact architectures for text classifica-tion, such that the full model fits in a limited amount of memory. After consid-ering different solutions inspired by the hashing literature, we propose a methodbuilt upon product quantization to store word embeddings. While the originaltechnique leads to a loss in accuracy, we adapt this method to circumvent quan-tization artefacts. Combined with simple approaches specifically adapted to textclassification, our approach derived from fastText requires, at test time, onlya fraction of the memory compared to the original FastText, without noticeablysacrificing quality in terms of classification accuracy. Our experiments carried outon several benchmarks show that our approach typically requires two orders ofmagnitude less memory than fastText while being only slightly inferior withrespect to accuracy. As a result, it outperforms the state of the art by a good marginin terms of the compromise between memory usage and accuracy.1 I NTRODUCTIONText classification is an important problem in Natural Language Processing (NLP). Real world use-cases include spam filtering or e-mail categorization. It is a core component in more complex sys-tems such as search and ranking. Recently, deep learning techniques based on neural networkshave achieved state of the art results in various NLP applications. One of the main successes of deeplearning is due to the effectiveness of recurrent networks for language modeling and their applicationto speech recognition and machine translation (Mikolov, 2012). However, in other cases includingseveral text classification problems, it has been shown that deep networks do not convincingly beatthe prior state of the art techniques (Wang & Manning, 2012; Joulin et al., 2016).In spite of being (typically) orders of magnitude slower to train than traditional techniques basedon n-grams, neural networks are often regarded as a promising alternative due to compact modelsizes, in particular for character based models. This is important for applications that need to run onsystems with limited memory such as smartphones.This paper specifically addresses the compromise between classification accuracy and the modelsize. We extend our previous work implemented in the fastText library1. It is based on n-gramfeatures, dimensionality reduction, and a fast approximation of the softmax classifier (Joulin et al.,2016). We show that a few key ingredients, namely feature pruning, quantization, hashing, and re-training, allow us to produce text classification models with tiny size, often less than 100kB whentrained on several popular datasets, without noticeably sacrificing accuracy or speed.We plan to publish the code and scripts required to reproduce our results as an extension of thefastText library, thereby providing strong reproducible baselines for text classifiers that optimizethe compromise between the model size and accuracy. We hope that this will help the engineeringcommunity to improve existing applications by using more efficient models.This paper is organized as follows. Section 2 introduces related work, Section 3 describes our textclassification model and explains how we drastically reduce the model size. 
Section 4 shows theeffectiveness of our approach in experiments on multiple text classification benchmarks.1https://github.com/facebookresearch/fastText1Under review as a conference paper at ICLR 20172 R ELATED WORKModels for text classification. Text classification is a problem that has its roots in many applica-tions such as web search, information retrieval and document classification (Deerwester et al., 1990;Pang & Lee, 2008). Linear classifiers often obtain state-of-the-art performance while being scal-able (Agarwal et al., 2014; Joachims, 1998; Joulin et al., 2016; McCallum & Nigam, 1998). Theyare particularly interesting when associated with the right features (Wang & Manning, 2012). Theyusually require storing embeddings for words and n-grams, which makes them memory inefficient.Compression of language models. Our work is related to compression of statistical languagemodels. Classical approaches include feature pruning based on entropy (Stolcke, 2000) and quanti-zation. Pruning aims to keep only the most important n-grams in the model, leaving out those withprobability lower than a specified threshold. Further, the individual n-grams can be compressed byquantizing the probability value, and by storing the n-gram itself more efficiently than as a sequenceof characters. Various strategies have been developed, for example using tree structures or hashfunctions, and are discussed in (Talbot & Brants, 2008).Compression for similarity estimation and search. There is a large body of literature on howto compress a set of vectors into compact codes, such that the comparison of two codes approxi-mates a target similarity in the original space. The typical use-case of these methods considers anindexed dataset of compressed vectors, and a query for which we want to find the nearest neigh-bors in the indexed set. One of the most popular is Locality-sensitive hashing (LSH) by Charikar(2002), which is a binarization technique based on random projections that approximates the cosinesimilarity between two vectors through a monotonous function of the Hamming distance betweenthe two corresponding binary codes. In our paper, LSH refers to this binarization strategy2. Manysubsequent works have improved this initial binarization technique, such as spectal hashing (Weisset al., 2009), or Iterative Quantization (ITQ) (Gong & Lazebnik, 2011), which learns a rotation ma-trix minimizing the quantization loss of the binarization. We refer the reader to two recent surveysby Wang et al. (2014) and Wang et al. (2015) for an overview of the binary hashing literature.Beyond these binarization strategies, more general quantization techniques derived from Jegou et al.(2011) offer better trade-offs between memory and the approximation of a distance estimator. TheProduct Quantization (PQ) method approximates the distances by calculating, in the compressed do-main, the distance between their quantized approximations. This method is statistically guaranteedto preserve the Euclidean distance between the vectors within an error bound directly related to thequantization error. The original PQ has been concurrently improved by Ge et al. (2013) and Norouzi& Fleet (2013), who learn an orthogonal transform minimizing the overall quantization loss. In ourpaper, we will consider the Optimized Product Quantization (OPQ) variant (Ge et al., 2013).Softmax approximation The aforementioned works approximate either the Euclidean distanceor the cosine similarity (both being equivalent in the case of unit-norm vectors). 
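For reference, the NumPy sketch below illustrates the random-projection binarization of Charikar (2002) used as the LSH baseline: the Hamming distance between sign codes gives a monotonous estimate of the angle, and hence of the cosine similarity. This is a toy illustration under arbitrary dimensions, not the code used in the experiments (which replaces the random matrix with a random orthogonal one).

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_encode(x, R):
    """Binarize a vector with random projections: bit j = sign(<x, r_j>) > 0."""
    return (x @ R) > 0

def cosine_from_codes(c1, c2):
    """Estimate cos(x, y) from the Hamming distance between binary codes:
    the expected fraction of differing bits equals angle(x, y) / pi."""
    n_bits = c1.shape[-1]
    hamming = np.count_nonzero(c1 != c2, axis=-1)
    return np.cos(np.pi * hamming / n_bits)

d, n_bits = 64, 256
R = rng.normal(size=(d, n_bits))     # random projection directions

x = rng.normal(size=d)
y = rng.normal(size=d)
true_cos = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
est_cos = cosine_from_codes(lsh_encode(x, R), lsh_encode(y, R))
print(f"true cosine {true_cos:.3f}  LSH estimate {est_cos:.3f}")
```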
However, in the context of fastText, we are specifically interested in approximating the maximum inner product involved in a softmax layer. Several approaches derived from LSH have been recently proposed to achieve this goal, such as Asymmetric LSH by Shrivastava & Li (2014), subsequently discussed by Neyshabur & Srebro (2015). In our work, since we are not constrained to purely binary codes, we resort to a more traditional encoding by employing a magnitude/direction parametrization of our vectors. Therefore we only need to encode/compress a unitary d-dimensional vector, which fits the aforementioned LSH and PQ methods well.

Neural network compression models. Recently, several research efforts have been conducted to compress the parameters of architectures involved in computer vision, namely for state-of-the-art Convolutional Neural Networks (CNNs) (Han et al., 2016; Lin et al., 2015). Some use vector quantization (Gong et al., 2014) while others binarize the network (Courbariaux et al., 2016). Denil et al. (2013) show that such classification models are easily compressed because they are over-parametrized, which concurs with early observations by LeCun et al. (1990).

Footnote 2: In the literature, LSH refers to multiple distinct strategies related to the Johnson-Lindenstrauss lemma. For instance, LSH sometimes refers to a partitioning technique with random projections allowing for sublinear search via cell probes; see for instance the E2LSH variant of Datar et al. (2004).

Some of these works aim at reducing both the model size and the speed. In our case, since the fastText classifier upon which our proposal is built is already very efficient, we are primarily interested in reducing the size of the model while keeping a comparable classification efficiency.

3 PROPOSED APPROACH

3.1 TEXT CLASSIFICATION

In the context of text classification, linear classifiers (Joulin et al., 2016) remain competitive with more sophisticated, deeper models, and are much faster to train. On top of standard tricks commonly used in linear text classification (Agarwal et al., 2014; Wang & Manning, 2012; Weinberger et al., 2009), Joulin et al. (2016) use a low-rank constraint to reduce the computation burden while sharing information between different classes. This is especially useful in the case of a large output space, where rare classes may have only a few training examples. In this paper, we focus on a similar model, that is, one which minimizes the softmax loss $\ell$ over $N$ documents:

$$\sum_{n=1}^{N} \ell(y_n, BAx_n), \qquad (1)$$

where $x_n$ is a bag of one-hot vectors and $y_n$ the label of the $n$-th document. In the case of a large vocabulary and a large output space, the matrices $A$ and $B$ are big and can require gigabytes of memory. Below, we describe how we reduce this memory usage.

3.2 BOTTOM-UP PRODUCT QUANTIZATION

Product quantization is a popular method for compressed-domain approximate nearest neighbor search (Jegou et al., 2011). As a compression technique, it approximates a real-valued vector by finding the closest vector in a pre-defined structured set of centroids, referred to as a codebook. This codebook is not enumerated, since it is extremely large. Instead it is implicitly defined by its structure: a $d$-dimensional vector $x \in \mathbb{R}^d$ is approximated as

$$\hat{x} = \sum_{i=1}^{k} q_i(x), \qquad (2)$$

where the different subquantizers $q_i: x \mapsto q_i(x)$ are complementary in the sense that their respective centroids lie in distinct orthogonal subspaces, i.e., $\forall i \neq j,\ \forall x, y,\ \langle q_i(x) \,|\, q_j(y) \rangle = 0$.
In the originalPQ, the subspaces are aligned with the natural axis, while OPQ learns a rotation, which amounts toalleviating this constraint and to not depend on the original coordinate system. Another way to seethis is to consider that PQ splits a given vector xintoksubvectors xi,i= 1: : : k , each of dimensiond=k:x= [x1: : : xi: : : xk], and quantizes each sub-vector using a distinct k-means quantizer. Eachsubvector xiis thus mapped to the closest centroid amongst 2bcentroids, where bis the number ofbits required to store the quantization index of the subquantizer, typically b= 8. The reconstructedvector can take 2kbdistinct reproduction values, and is stored in kbbits.PQ estimates the inner product in the compressed domain asx>y^x>y=kXi=1qi(xi)>yi: (3)This is a straightforward extension of the square L2 distance estimation of Jegou et al. (2011). Inpractice, the vector estimate ^xis trivially reconstructed from the codes, i.e., from the quantizationindexes, by concatenating these centroids.The two parameters involved in PQ, namely the number of subquantizers kand the number of bits bper quantization index, are typically set to k2[2; d=2], andb= 8to ensure byte-alignment.Discussion. PQ offers several interesting properties in our context of text classification. Firstly,the training is very fast because the subquantizers have a small number of centroids, i.e., 256 cen-troids for b= 8. Secondly, at test time it allows the reconstruction of the vectors with almost no3Under review as a conference paper at ICLR 2017computational and memory overhead. Thirdly, it has been successfully applied in computer vision,offering much better performance than binary codes, which makes it a natural candidate to compressrelatively shallow models. As observed by S ́anchez & Perronnin (2011), using PQ just before thelast layer incurs a very limited loss in accuracy when combined with a support vector machine.In the context of text classification, the norms of the vectors are widely spread, typically with a ratioof 1000 between the max and the min. Therefore kmeans performs poorly because it optimizes anabsolute error objective, so it maps all low-norm vectors to 0. A simple solution is to separate thenorm and the angle of the vectors and to quantize them separately. This allows a quantization withno loss of performance, yet requires an extra bbits per vector.Bottom-up strategy: re-training. The first works aiming at compressing CNN models like theone proposed by (Gong et al., 2014) used the reconstruction from off-the-shelf PQ, i.e., without anyre-training. However, as observed in Sablayrolles et al. (2016), when using quantization methodslike PQ, it is better to re-train the layers occurring after the quantization, so that the network canre-adjust itself to the quantization. There is a strong argument arguing for this re-training strategy:the square magnitude of vectors is reduced, on average, by the average quantization error for anyquantizer satisfying the Lloyd conditions; see Jegou et al. (2011) for details.This suggests a bottom-up learning strategy where we first quantize the input matrix, then retrainand quantize the output matrix (the input matrix being frozen). Experiments in section 4 show thatit is worth adopting this strategy.Memory savings with PQ. In practice, the bottom-up PQ strategy offers a compression factor of10 without any noticeable loss of performance. 
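As a concrete sketch of the quantizer described in this section, the toy NumPy code below trains k subquantizers with a naive k-means on unit-normalized vectors and stores the norm separately (the NPQ variant used here). The codebook sizes, data, and function names are illustrative assumptions, not the fastText implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, n_centroids, n_iter=10):
    """Very small k-means, enough to train one subquantizer codebook."""
    C = X[rng.choice(len(X), n_centroids, replace=False)]
    for _ in range(n_iter):
        assign = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(n_centroids):
            if np.any(assign == j):
                C[j] = X[assign == j].mean(0)
    return C

def train_npq(X, k, n_centroids=16):
    """Train k subquantizers, one per subvector, on the unit-norm vectors."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return [kmeans(sub, n_centroids) for sub in np.split(Xn, k, axis=1)]

def encode_npq(x, codebooks):
    norm = np.linalg.norm(x)
    sub = np.split(x / norm, len(codebooks))
    codes = [int(np.argmin(((C - s) ** 2).sum(1))) for s, C in zip(sub, codebooks)]
    return norm, codes               # the norm is stored separately (extra bits)

def decode_npq(norm, codes, codebooks):
    return norm * np.concatenate([C[c] for c, C in zip(codes, codebooks)])

d, k = 8, 4                          # toy sizes: k sub-vectors of dimension d/k
X = rng.normal(size=(2000, d))
books = train_npq(X, k)
norm, codes = encode_npq(X[0], books)
x_hat = decode_npq(norm, codes, books)
print("reconstruction error:", np.linalg.norm(X[0] - x_hat))
```

In the real setting with b = 8 bits per sub-code and k = d/2 subquantizers, each embedding costs about d/2 bytes plus one byte for the norm instead of 4d bytes (assuming 32-bit floats for the uncompressed embeddings), which is roughly the factor-10 saving quoted above.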
Without re-training, we notice a drop in accuracybetween 0:1%and0:5%, depending on the dataset and setting; see Section 4 and the appendix.3.3 F URTHER TEXT SPECIFIC TRICKSThe memory usage strongly depends on the size of the vocabulary, which can be large in manytext classification tasks. While it is clear that a large part of the vocabulary is useless or redundant,directly reducing the vocabulary to the most frequent words is not satisfactory: most of the frequentwords, like “the” or “is” are not discriminative, in contrast to some rare words, e.g., in the context oftag prediction. In this section, we discuss a few heuristics to reduce the space taken by the dictionary.They lead to major memory reduction, in extreme cases by a factor 100. We experimentally showthat this drastic reduction is complementary with the PQ compression method, meaning that thecombination of both strategies reduces the model size by a factor up to 1000 for some datasets.Pruning the vocabulary. Discovering which word or n-gram must be kept to preserve the overallperformance is a feature selection problem. While many approaches have been proposed to selectgroups of variables during training (Bach et al., 2012; Meier et al., 2008), we are interested inselecting a fixed subset of Kwords and ngrams from a pre-trained model. This can be achieved byselecting the Kembeddings that preserve as much of the model as possible, which can be reducedto selecting the Kwords and ngrams associated with the highest norms.While this approach offers major memory savings, it has one drawback occurring in some particularcases: some documents may not contained any of the Kbest features, leading to a significant dropin performance. It is thus important to keep the Kbest features under the condition that they coverthe whole training set. More formally, the problem is to find a subset Sin the feature setVthatmaximizes the sum of their norms wsunder the constraint that all the documents in the training setDare covered:maxSVXs2Sws s.t.jSj K; P 1S1D;where Pis a matrix such that Pds= 1 if the s-th feature is in the d-th document, and 0otherwise.This problem is directly related to set covering problems that are NP-hard (Feige, 1998). Standardgreedy approaches require the storing of an inverted index or to do multiple passes over the dataset,which is prohibitive on very large dataset (Chierichetti et al., 2010). This problem can be cast asan instance of online submodular maximization with a rank constraint (Badanidiyuru et al., 2014;4Under review as a conference paper at ICLR 20172 4 894.094.595.095.596.096.5accuracySogou2 4 8number of bytes69.570.070.571.071.572.072.5YahooFull PQ OPQ LSH, norm PQ, norm OPQ, norm2 4 862.062.462.863.263.6Yelp fullFigure 1: Accuracy as a function of the memory per vector/embedding on 3datasets from Zhanget al. (2015). Note, an extra byte is required when we encode the norm explicitly (”norm”).Bateni et al., 2010). In our case, we use a simple online parallelizable greedy approach: For eachdocument, we verify if it is already covered by a retained feature and, if not, we add the feature withthe highest norm to our set of retained features. If the number of features is below k, we add thefeatures with the highest norm that have not yet been picked.Hashing trick & Bloom filter. On small models, the dictionary can take a significant portion ofthe memory. Instead of saving it, we extend the hashing trick used in Joulin et al. (2016) to bothwords and n-grams. 
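The online greedy coverage heuristic described above can be sketched as follows; the documents, feature norms, and function names are synthetic and illustrative, not taken from the fastText codebase.

```python
import numpy as np

rng = np.random.default_rng(0)

def greedy_coverage_pruning(docs, norms, K):
    """Online greedy pruning: keep the highest-norm feature of every document
    that is not yet covered, then fill the remaining budget (up to K) with the
    highest-norm features overall. K is assumed large enough to cover the
    training set, as in the paper's setting."""
    kept = set()
    for feats in docs:                                    # single pass over the documents
        if feats and not kept.intersection(feats):        # document not covered yet
            kept.add(max(feats, key=lambda f: norms[f]))  # cover it with its best feature
    for f in np.argsort(-norms):                          # fill up to K by decreasing norm
        if len(kept) >= K:
            break
        kept.add(int(f))
    return kept

# Synthetic data: 1,000 documents over a vocabulary of 5,000 features.
V, D, K = 5000, 1000, 800
norms = rng.gamma(shape=1.0, scale=1.0, size=V)           # wide spread of embedding norms
docs = [set(rng.choice(V, size=int(rng.integers(3, 30))).tolist()) for _ in range(D)]

kept = greedy_coverage_pruning(docs, norms, K)
coverage = np.mean([bool(kept & d) for d in docs])
print(f"kept {len(kept)} features (budget {K}), train-set coverage {coverage:.1%}")
```

Note that the hashing trick just mentioned is complementary to this pruning: it removes the need to store the dictionary strings for whichever features survive.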
This strategy is also used in V owpal Wabbit (Agarwal et al., 2014) in the contextof online training. This allows us to save around 1-2Mb with almost no overhead at test time (justthe cost of computing the hashing function).Pruning the vocabulary while using the hashing trick requires keeping a list of the indices of theKremaining buckets. At test time, a binary search over the list of indices is required. It has acomplexity of O(log(K))and a memory overhead of a few hundreds of kilobytes. Using Bloomfilters instead reduces the complexity O(1)at test time and saves a few hundred kilobytes. However,in practice, it degrades performance.4 E XPERIMENTSThis section evaluates the quality of our model compression pipeline and compare it to other com-pression methods on different text classification problems, and to other compact text classifiers.Evaluation protocol and datasets. Our experimental pipeline is as follows: we train a modelusing fastText with the default setting unless specified otherwise. That is 2M buckets, a learningrate of 0:1and10training epochs. The dimensionality dof the embeddings is set to powers of 2toavoid border effects that could make the interpretation of the results more difficult. As baselines, weuse Locality-Sensitive Hashing (LSH) (Charikar, 2002), PQ (Jegou et al., 2011) and OPQ (Ge et al.,2013) (the non-parametric variant). Note that we use an improved version of LSH where randomorthogonal matrices are used instead of random matrix projection J ́egou et al. (2008). In a firstseries of experiments, we use the 8datasets and evaluation protocol of Zhang et al. (2015). Thesedatasets contain few million documents and have at most 10classes. We also explore the limit ofquantization on a dataset with an extremely large output space, that is a tag dataset extracted fromthe YFCC100M collection (Thomee et al., 2016)3, referred to as FlickrTag in the rest of this paper.5Under review as a conference paper at ICLR 2017-2-10AG Amazon full-2-10Amazon polarity DBPedia-2-10Sogou Yahoo100kB 1MB 10MB 100MB-2-10Yelp full100kB 1MB 10MB 100MBYelp polarityFull PQ Pruned Zhang et al. (2015) Xiao & Cho (2016)Figure 2: Loss of accuracy as a function of the model size. We compare the compressed model withdifferent level of pruning with NPQ and the full fastText model. We also compare with Zhanget al. (2015) and Xiao & Cho (2016). Note that the size is in log scale.4.1 S MALL DATASETSCompression techniques. We compare three popular methods used for similarity estimation withcompact codes: LSH, PQ and OPQ on the datasets released by Zhang et al. (2015). Figure 1 showsthe accuracy as a function of the number of bytes used per embedding, which corresponds to thenumber kof subvectors in the case of PQ and OPQ. See more results in the appendix. As discussedin Section 2, LSH reproduces the cosine similarity and is therefore not adapted to un-normalizeddata. Therefore we only report results with normalization. Once normalized, PQ and OPQ arealmost lossless even when using only k= 4subquantizers per embedding (equivalently, bytes). Weobserve in practice that using k=d=2,i.e., half of the components of the embeddings, works well inpractice. In the rest of the paper and if not stated otherwise, we focus on this setting. The differencebetween the normalized versions of PQ and OPQ is limited and depends on the dataset. Thereforewe adopt the normalized PQ (NPQ) for the rest of this study, since it is faster to train.word Entropy Norm word Entropy Norm. 
1 354 mediocre 1399 1, 2 176 disappointing 454 2the 3 179 so-so 2809 3and 4 1639 lacks 1244 4i 5 2374 worthless 1757 5a 6 970 dreadful 4358 6to 7 1775 drm 6395 7it 8 1956 poorly 716 8of 9 2815 uninspired 4245 9this 10 3275 worst 402 10Table 1: Best ranked words w.r.t. entropy ( left) and norm ( right ) on the Amazon full review dataset.We give the rank for both criteria. The norm ranking filters out words carrying little information.3Data available at https://research.facebook.com/research/fasttext/6Under review as a conference paper at ICLR 2017Dataset full 64KiB 32KiB 16KiBAG 65M 92.1 91.4 90.6 89.1Amazon full 108M 60.0 58.8 56.0 52.9Amazon pol. 113M 94.5 93.3 92.1 89.3DBPedia 87M 98.4 98.2 98.1 97.4Sogou 73M 96.4 96.4 96.3 95.5Yahoo 122M 72.1 70.0 69.0 69.2Yelp full 78M 63.8 63.2 62.4 58.7Yelp pol. 77M 95.7 95.3 94.9 93.2Average diff. [ %] 0 -0.8 -1.7 -3.5Table 2: Performance on very small models. We use a quantization with k= 1, hashing and anextreme pruning. The last row shows the average drop of performance for different size.Pruning. Figure 2 shows the performance of our model with different sizes. We fix k=d=2anduse different pruning thresholds. NPQ offers a compression rate of 10compared to the full model.As the pruning becomes more agressive, the overall compression can increase up up to 1;000with little drop of performance and no additional overhead at test time. In fact, using a smallerdictionary makes the model faster at test time. We also compare with character-level ConvolutionalNeural Networks (CNN) (Zhang et al., 2015; Xiao & Cho, 2016). They are attractive models fortext classification because they achieve similar performance with less memory usage than linearmodels (Xiao & Cho, 2016). Even though fastText with the default setting uses more memory,NPQ is already on par with CNNs’ memory usage. Note that CNNs are not quantized, and it wouldbe worth seeing how much they can be quantized with no drop of performance. Such a study isbeyond the scope of this paper. Our pruning is based on the norm of the embeddings accordingto the guidelines of Section 3.3. Table 1 compares the ranking obtained with norms to the rankingobtained using entropy, which is commonly used in unsupervised settings Stolcke (2000).Extreme compression. Finally, in Table 2, we explore the limit of quantized model by lookingat the performance obtained for models under 64KiB. Surprisingly, even at 64KiB and 32KiB, thedrop of performance is only around 0:8%and1:7%despite a compression rate of 1;0004;000.4.2 L ARGE DATASET : FLICKR TAGIn this section, we explore the limit of compression algorithms on very large datasets. Similarto Joulin et al. (2016), we consider a hashtag prediction dataset containing 312;116labels. We setthe minimum count for words at 10, leading to a dictionary of 1;427;667words. We take 10Mbuckets for n-grams and a hierarchical softmax. We refer to this dataset as FlickrTag.Output encoding. We are interested in understanding how the performance degrades if the classi-fier is also quantized ( i.e., the matrix Bin Eq. 1) and when the pruning is at the limit of the minimumnumber of features required to cover the full dataset.Model k norm retrain Acc. Sizefull (uncompressed) 45.4 12 GiBInput 128 45.0 1.7 GiBInput 128 x 45.3 1.8 GiBInput 128 x x 45.5 1.8 GiBInput+Output 128 x 45.2 1.5 GiBInput+Output 128 x x 45.4 1.5 GiBTable 3: FlickrTag: Influence of quantizing the output matrix on performance. We use PQ forquantization with an optional normalization. 
We also retrain the output matrix after quantizing theinput one. The ”norm” refers to the separate encoding of the magnitude and angle, while ”retrain”refers to the re-training bottom-up PQ method described in Section 3.2.7Under review as a conference paper at ICLR 2017Table 3 shows that quantizing both the “input” matrix ( i.e.,Ain Eq. 1) and the “output” matrix ( i.e.,B) does not degrade the performance compared to the full model. We use embeddings with d= 256dimensions and use k=d=2subquantizers. We do not use any text specific tricks, which leads toa compression factor of 8. Note that even if the output matrix is not retrained over the embeddings,the performance is only 0:2%away from the full model. As shown in the Appendix, using lesssubquantizers significantly decreases the performance for a small memory gain.Model full Entropy pruning Norm pruning Max-Cover pruning#embeddings 11.5M 2M 1M 2M 1M 2M 1MMemory 12GiB 297MiB 174MiB 305MiB 179MiB 305MiB 179MiBCoverage [ %] 88.4 70.5 70.5 73.2 61.9 88.4 88.4Accuracy 45.4 32.1 30.5 41.6 35.8 45.5 43.9Table 4: FlickrTag: Comparison of entropy pruning, norm pruning and max-cover pruning methods.We show the coverage of the test set for each method.Pruning. Table 4 shows how the performance evolves with pruning. We measure this effect on topof a fully quantized model. The full model misses 11:6%of the test set because of missing words(some documents are either only composed of hashtags or have only rare words). There are 312;116labels and thus it seems reasonable to keep embeddings in the order of the million. A naive pruningwith1M features misses about 3040% of the test set, leading to a significant drop of performance.On the other hand, even though the max-coverage pruning approach was set on the train set, it doesnot suffer from any coverage loss on the test set. This leads to a smaller drop of performance. If thepruning is too aggressive, however, the coverage decreases significantly.5 F UTURE WORKIt may be possible to obtain further reduction of the model size in the future. One idea is to conditionthe size of the vectors (both for the input features and the labels) based on their frequency (Chenet al., 2015; Grave et al., 2016). For example, it is probably not worth representing the rare labelsby full 256-dimensional vectors in the case of the FlickrTag dataset. Thus, conditioning the vectorsize on the frequency and norm seems like an interesting direction to explore in the future.We may also consider combining the entropy and norm pruning criteria: instead of keeping thefeatures in the model based just on the frequency or the norm, we can use both to keep a good set offeatures. This could help to keep features that are both frequent and discriminative, and thereby toreduce the coverage problem that we have observed.Additionally, instead of pruning out the less useful features, we can decompose them into smallerunits (Mikolov et al., 2012). For example, this can be achieved by splitting every non-discriminativeword into a sequence of character trigrams. This could help in cases where training and test examplesare very short (for example just a single word).6 C ONCLUSIONIn this paper, we have presented several simple techniques to reduce, by several orders of magnitude,the memory complexity of certain text classifiers without sacrificing accuracy nor speed. 
This isachieved by applying discriminative pruning which aims to keep only important features in thetrained model, and by performing quantization of the weight matrices and hashing of the dictionary.We will publish the code as an extension of the fastText library. We hope that our work willserve as a baseline to the research community, where there is an increasing interest for comparingthe performance of various deep learning text classifiers for a given number of parameters. Overall,compared to recent work based on convolutional neural networks, fastText.zip is often moreaccurate, while requiring several orders of magnitude less time to train on common CPUs, andincurring a fraction of the memory complexity.8Under review as a conference paper at ICLR 2017REFERENCESAlekh Agarwal, Olivier Chapelle, Miroslav Dud ́ık, and John Langford. A reliable effective terascalelinear learning system. Journal of Machine Learning Research , 15(1):1111–1133, 2014.Francis Bach, Rodolphe Jenatton, Julien Mairal, and Guillaume Obozinski. Optimization withsparsity-inducing penalties. Foundations and Trends Rin Machine Learning , 4(1):1–106, 2012.Ashwinkumar Badanidiyuru, Baharan Mirzasoleiman, Amin Karbasi, and Andreas Krause. Stream-ing submodular maximization: Massive data summarization on the fly. In SIGKDD , pp. 671–680.ACM, 2014.Mohammad Hossein Bateni, Mohammad Taghi Hajiaghayi, and Morteza Zadimoghaddam. Sub-modular secretary problem and extensions. In Approximation, Randomization, and CombinatorialOptimization. Algorithms and Techniques , pp. 39–52. Springer, 2010.Moses S. Charikar. Similarity estimation techniques from rounding algorithms. In STOC , pp. 380–388, May 2002.Welin Chen, David Grangier, and Michael Auli. Strategies for training large vocabulary neurallanguage models. arXiv preprint arXiv:1512.04906 , 2015.Flavio Chierichetti, Ravi Kumar, and Andrew Tomkins. Max-cover in map-reduce. In InternationalConference on World Wide Web , 2010.Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarizedneural networks: Training neural networks with weights and activations constrained to +1 or -1.arXiv preprint arXiv:1602.02830 , 2016.M. Datar, N. Immorlica, P. Indyk, and V .S. Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In Proceedings of the Symposium on Computational Geometry , pp. 253–262,2004.Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman.Indexing by latent semantic analysis. Journal of the American society for information science ,1990.Misha Denil, Babak Shakibi, Laurent Dinh, Marc-Aurelio Ranzato, and Nando et all de Freitas.Predicting parameters in deep learning. In NIPS , pp. 2148–2156, 2013.Uriel Feige. A threshold of ln n for approximating set cover. JACM , 45(4):634–652, 1998.Tiezheng Ge, Kaiming He, Qifa Ke, and Jian Sun. Optimized product quantization for approximatenearest neighbor search. In CVPR , June 2013.Yunchao Gong and Svetlana Lazebnik. Iterative quantization: A procrustean approach to learningbinary codes. In CVPR , June 2011.Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional net-works using vector quantization. arXiv preprint arXiv:1412.6115 , 2014.Edouard Grave, Armand Joulin, Moustapha Ciss ́e, David Grangier, and Herv ́e J ́egou. Efficientsoftmax approximation for gpus. arXiv preprint arXiv:1609.04309 , 2016.Song Han, Huizi Mao, and William J Dally. 
Deep compression: Compressing deep neural networkswith pruning, trained quantization and huffman coding. In ICLR , 2016.Herv ́e J ́egou, Matthijs Douze, and Cordelia Schmid. Hamming embedding and weak geometricconsistency for large scale image search. In ECCV , October 2008.Herv ́e Jegou, Matthijs Douze, and Cordelia Schmid. Product quantization for nearest neighborsearch. IEEE Trans. PAMI , January 2011.Thorsten Joachims. Text categorization with support vector machines: Learning with many relevantfeatures . Springer, 1998.9Under review as a conference paper at ICLR 2017Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efficienttext classification. arXiv preprint arXiv:1607.01759 , 2016.Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. NIPS , 2:598–605, 1990.Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. Neural networks withfew multiplications. arXiv preprint arXiv:1510.03009 , 2015.Andrew McCallum and Kamal Nigam. A comparison of event models for naive bayes text classifi-cation. In AAAI workshop on learning for text categorization , 1998.Lukas Meier, Sara Van De Geer, and Peter B ̈uhlmann. The group lasso for logistic regression.Journal of the Royal Statistical Society: Series B (Statistical Methodology) , 70(1):53–71, 2008.Tomas Mikolov. Statistical language models based on neural networks. In PhD thesis . VUT Brno,2012.Tomas Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and J Cernocky.Subword language modeling with neural networks. preprint , 2012.Behnam Neyshabur and Nathan Srebro. On symmetric and asymmetric lshs for inner product search.InICML , pp. 1926–1934, 2015.Mohammad Norouzi and David Fleet. Cartesian k-means. In CVPR , June 2013.Bo Pang and Lillian Lee. Opinion mining and sentiment analysis. Foundations and trends in infor-mation retrieval , 2008.Alexandre Sablayrolles, Matthijs Douze, Herv ́e J ́egou, and Nicolas Usunier. How should we evalu-ate supervised hashing? arXiv preprint arXiv:1609.06753 , 2016.Jorge S ́anchez and Florent Perronnin. High-dimensional signature compression for large-scale im-age classification. In CVPR , 2011.Anshumali Shrivastava and Ping Li. Asymmetric LSH for sublinear time maximum inner productsearch. In NIPS , pp. 2321–2329, 2014.Andreas Stolcke. Entropy-based pruning of backoff language models. arXiv preprint cs/0006025 ,2000.David Talbot and Thorsten Brants. Randomized language models via perfect hash functions. InACL, 2008.Bart Thomee, David A Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland,Damian Borth, and Li-Jia Li. Yfcc100m: The new data in multimedia research. In Communica-tions of the ACM , 2016.Jingdong Wang, Heng Tao Shen, Jingkuan Song, and Jianqiu Ji. Hashing for similarity search: Asurvey. arXiv preprint arXiv:1408.2927 , 2014.Jun Wang, Wei Liu, Sanjiv Kumar, and Shih-Fu Chang. Learning to hash for indexing big data - Asurvey. CoRR , abs/1509.05472, 2015.Sida Wang and Christopher D Manning. Baselines and bigrams: Simple, good sentiment and topicclassification. In ACL, 2012.Kilian Q Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. Featurehashing for large scale multitask learning. In ICML , 2009.Yair Weiss, Antonio Torralba, and Rob Fergus. Spectral hashing. In NIPS , December 2009.Yijun Xiao and Kyunghyun Cho. Efficient character-level document classification by combiningconvolution and recurrent layers. arXiv preprint arXiv:1602.00367 , 2016.Xiang Zhang, Junbo Zhao, and Yann LeCun. 
Character-level convolutional networks for text clas-sification. In NIPS , 2015.10Under review as a conference paper at ICLR 2017APPENDIXIn the appendix, we show some additional results. The model used in these experiments only had1M ngram buckets. In Table 5, we show a thorough comparison of LSH, PQ and OPQ on 8differentdatasets. Table 7 summarizes the comparison with CNNs in terms of accuracy and size. Table 8show a thorough comparison of the hashing trick and the Bloom filters.Quant. k norm AG Amz. f. Amz. p. DBP Sogou Yah. Yelp f. Yelp p.full 92.1 36M 59.8 97M 94.5 104M 98.4 67M 96.3 47M 72 120M 63.7 56M 95.7 53Mfull,nodict 92.1 34M 59.9 78M 94.5 83M 98.4 56M 96.3 42M 72.2 91M 63.6 48M 95.6 46MLSH 8 88.7 8.5M 51.3 20M 90.3 21M 92.7 14M 94.2 11M 54.8 23M 56.7 12M 92.2 12MPQ 8 91.7 8.5M 59.3 20M 94.4 21M 97.4 14M 96.1 11M 71.3 23M 62.8 12M 95.4 12MOPQ 8 91.9 8.5M 59.3 20M 94.4 21M 96.9 14M 95.8 11M 71.4 23M 62.5 12M 95.4 12MLSH 8 x 91.9 9.5M 59.4 22M 94.5 24M 97.8 16M 96.2 12M 71.6 26M 63.4 14M 95.6 13MPQ 8 x 92.0 9.5M 59.8 22M 94.5 24M 98.4 16M 96.3 12M 72.1 26M 63.7 14M 95.6 13MOPQ 8 x 92.1 9.5M 59.9 22M 94.5 24M 98.4 16M 96.3 12M 72.2 26M 63.6 14M 95.6 13MLSH 4 88.3 4.3M 50.5 9.7M 88.9 11M 91.6 7.0M 94.3 5.3M 54.6 12M 56.5 6.0M 92.9 5.7MPQ 4 91.6 4.3M 59.2 9.7M 94.4 11M 96.3 7.0M 96.1 5.3M 71.0 12M 62.2 6.0M 95.4 5.7MOPQ 4 91.7 4.3M 59.0 9.7M 94.4 11M 96.9 7.0M 95.6 5.3M 71.2 12M 62.6 6.0M 95.4 5.7MLSH 4 x 92.1 5.3M 59.2 13M 94.4 13M 97.7 8.8M 96.2 6.6M 71.1 15M 63.1 7.4M 95.5 7.2MPQ 4 x 92.1 5.3M 59.8 13M 94.5 13M 98.4 8.8M 96.3 6.6M 72.0 15M 63.6 7.5M 95.6 7.2MOPQ 4 x 92.2 5.3M 59.8 13M 94.5 13M 98.3 8.8M 96.3 6.6M 72.1 15M 63.7 7.5M 95.6 7.2MLSH 2 87.7 2.2M 50.1 4.9M 88.9 5.2M 90.6 3.5M 93.9 2.7M 51.4 5.7M 56.6 3.0M 91.3 2.9MPQ 2 91.1 2.2M 58.7 4.9M 94.4 5.2M 87.1 3.6M 95.3 2.7M 69.5 5.7M 62.1 3.0M 95.4 2.9MOPQ 2 91.4 2.2M 58.2 4.9M 94.3 5.2M 91.6 3.6M 94.2 2.7M 69.6 5.7M 62.1 3.0M 95.4 2.9MLSH 2 x 91.8 3.2M 58.6 7.3M 94.3 7.8M 97.1 5.3M 96.1 4.0M 69.7 8.6M 62.7 4.5M 95.5 4.3MPQ 2 x 91.9 3.2M 59.6 7.3M 94.5 7.8M 98.1 5.3M 96.3 4.0M 71.3 8.6M 63.4 4.5M 95.6 4.3MOPQ 2 x 92.1 3.2M 59.5 7.3M 94.5 7.8M 98.1 5.3M 96.2 4.0M 71.5 8.6M 63.4 4.5M 95.6 4.3MTable 5: Comparison between standard quantization methods. The original model has a dimension-ality of 8and2M buckets. Note that all of the methods are without dictionary.k co AG Amz. f. Amz. p. DBP Sogou Yah. Yelp f. 
Yelp p.full, nodict 92.1 34M 59.8 78M 94.5 83M 98.4 56M 96.3 42M 72.2 91M 63.7 48M 95.6 46M8 full 92.0 9.5M 59.8 22M 94.5 24M 98.4 16M 96.3 12M 72.1 26M 63.7 14M 95.6 13M4 full 92.1 5.3M 59.8 13M 94.5 13M 98.4 8.8M 96.3 6.6M 72 15M 63.6 7.5M 95.6 7.2M2 full 91.9 3.2M 59.6 7.3M 94.5 7.8M 98.1 5.3M 96.3 4.0M 71.3 8.6M 63.4 4.5M 95.6 4.3M8 200K 92.0 2.5M 59.7 2.5M 94.3 2.5M 98.5 2.5M 96.6 2.5M 71.8 2.5M 63.3 2.5M 95.6 2.5M8 100K 91.9 1.3M 59.5 1.3M 94.3 1.3M 98.5 1.3M 96.6 1.3M 71.6 1.3M 63.4 1.3M 95.6 1.3M8 50K 91.7 645K 59.7 645K 94.3 644K 98.5 645K 96.6 645K 71.5 645K 63.2 645K 95.6 644K8 10K 91.3 137K 58.6 137K 93.2 137K 98.5 137K 96.5 137K 71.3 137K 63.3 137K 95.4 137K4 200K 92.0 1.8M 59.7 1.8M 94.3 1.8M 98.5 1.8M 96.6 1.8M 71.7 1.8M 63.3 1.8M 95.6 1.8M4 100K 91.9 889K 59.5 889K 94.4 889K 98.5 889K 96.6 889K 71.7 889K 63.4 889K 95.6 889K4 50K 91.7 449K 59.6 449K 94.3 449K 98.5 450K 96.6 449K 71.4 450K 63.2 449K 95.5 449K4 10K 91.5 98K 58.6 98K 93.2 98K 98.5 98K 96.5 98K 71.2 98K 63.3 98K 95.4 98K2 200K 91.9 1.4M 59.6 1.4M 94.3 1.4M 98.4 1.4M 96.5 1.4M 71.5 1.4M 63.2 1.4M 95.5 1.4M2 100K 91.6 693K 59.5 693K 94.3 693K 98.4 694K 96.6 693K 71.1 694K 63.2 693K 95.6 693K2 50K 91.6 352K 59.6 352K 94.3 352K 98.4 352K 96.5 352K 71.1 352K 63.2 352K 95.6 352K2 10K 91.3 78K 58.5 78K 93.2 78K 98.4 79K 96.5 78K 70.8 78K 63.2 78K 95.3 78KTable 6: Comparison with different quantization and level of pruning. “co” is the cut-off parameterof the pruning.11Under review as a conference paper at ICLR 2017Dataset Zhang et al. (2015) Xiao & Cho (2016) fastText +PQ,k=d=2AG 90.2 108M 91.4 80M 91.9 889KAmz. f. 59.5 10.8M 59.2 1.6M 59.6 449KAmz. p. 94.5 10.8M 94.1 1.6M 94.3 449KDBP 98.3 108M 98.6 1.2M 98.5 98KSogou 95.1 108M 95.2 1.6M 96.5 98KYah. 70.5 108M 71.4 80M 71.7 889KYelp f. 61.6 108M 61.8 1.4M 63.3 98KYelp p. 94.8 108M 94.5 1.2M 95.5 449KTable 7: Comparison between CNNs and fastText with and without quantization. The numbersfor Zhang et al. (2015) are reported from Xiao & Cho (2016). Note that for the CNNs, we reportthe size of the model under the assumption that they use float32 storage. For fastText (+PQ) wereport the memory used in RAM at test time.Quant. Bloom co AG Amz. f. Amz. p. DBP Sogou Yah. Yelp f. Yelp p.full,nodict 92.1 34M 59.8 78M 94.5 83M 98.4 56M 96.3 42M 72.2 91M 63.7 48M 95.6 46MNPQ 200K 91.9 1.4M 59.6 1.4M 94.3 1.4M 98.4 1.4M 96.5 1.4M 71.5 1.4M 63.2 1.4M 95.5 1.4MNPQ x 200K 92.2 830K 59.3 830K 94.1 830K 98.4 830K 96.5 830K 70.7 830K 63.0 830K 95.5 830KNPQ 100K 91.6 693K 59.5 693K 94.3 693K 98.4 694K 96.6 693K 71.1 694K 63.2 693K 95.6 693KNPQ x 100K 91.8 420K 59.1 420K 93.9 420K 98.4 420K 96.5 420K 70.6 420K 62.8 420K 95.3 420KNPQ 50K 91.6 352K 59.6 352K 94.3 352K 98.4 352K 96.5 352K 71.1 352K 63.2 352K 95.6 352KNPQ x 50K 91.5 215K 58.8 215K 93.6 215K 98.3 215K 96.5 215K 70.1 215K 62.7 215K 95.1 215KNPQ 10K 91.3 78K 58.5 78K 93.2 78K 98.4 79K 96.5 78K 70.8 78K 63.2 78K 95.3 78KNPQ x 10K 90.8 51K 56.8 51K 91.7 51K 98.1 51K 96.1 51K 68.7 51K 61.7 51K 94.5 51KTable 8: Comparison with and without Bloom filters. For NPQ, we set d= 8andk= 2.12Under review as a conference paper at ICLR 2017Model k norm retrain Acc. 
Sizefull 45.4 12GInput 128 45.0 1.7GInput 128 x 45.3 1.8GInput 128 x x 45.5 1.8GInput+Output 128 x 45.2 1.5GInput+Output 128 x x 45.4 1.5GInput+Output, co=2M 128 x x 45.5 305MInput+Output, n co=1M 128 x x 43.9 179MInput 64 44.0 1.1GInput 64 x 44.7 1.1GInput 64 x 44.9 1.1GInput+Output 64 x 44.6 784MInput+Output 64 x x 44.8 784MInput+Output, co=2M 64 x 42.5 183MInput+Output, co=1M 64 x 39.9 118MInput+Output, co=2M 64 x x 45.0 183MInput+Output, co=1M 64 x x 43.4 118MInput 32 40.5 690MInput 32 x 42.4 701MInput 32 x x 42.9 701MInput+Output 32 x 42.3 435MInput+Output 32 x x 42.8 435MInput+Output, co=2M 32 x 35.0 122MInput+Output, co=1M 32 x 32.6 88MInput+Output, co=2M 32 x x 43.3 122MInput+Output, co=1M 32 x x 41.6 88MTable 9: FlickrTag: Comparison for a large dataset of (i) different quantization methods and param-eters, (ii) with or without re-training.13
B1Mcr6c4l
SJc1hL5ee
ICLR.cc/2017/conference/-/paper324/official/review
{"title": "Review", "rating": "5: Marginally below acceptance threshold", "review": "The paper presents a few tricks to compress a wide and shallow text classification model based on n-gram features. These tricks include (1) using (optimized) product quantization to compress embedding weights (2) pruning some of the vocabulary elements (3) hashing to reduce the storage of the vocabulary (this is a minor component of the paper). The paper focuses on models with very large vocabularies and shows a reduction in the size of the models at a relatively minor reduction of the accuracy.\n\nThe problem of compressing neural models is important and interesting. The methods section of the paper is well written with good high level comments and references. However, the machine learning contributions of the paper are marginal to me. The experiments are not too convincing mainly focusing on benchmarks that are not commonly used. The implications of the paper on the state-of-the-art RNN text classification models is unclear.\n\nThe use of (optimized) product quantization for approximating inner product is not particularly novel. Previous work also considered doing this. Most of the reduction in the model sizes comes from pruning vocabulary elements. The method proposed for pruning vocabulary elements is simply based on the assumption that embeddings with larger L2 norm are more important. A coverage heuristic is taken into account too. From a machine learning point of view, the proper baseline to solve this problem is to have a set of (relaxed) binary coefficients for each embedding vector and learn the coefficients jointly with the weights. An L1 regularizer on the coefficients can be used to encourage sparsity. From a practical point of view, I believe an important baseline is missing: what if one simply uses fewer vocabulary elements (e.g based on subword units - see https://arxiv.org/pdf/1508.07909.pdf) and retrain a smaller models?\n\nGiven the lack of novelty and the missing baselines, I believe the paper in its current form is not ready for publication at ICLR.\n\nMore comments:\n- The title does not make it clear that the paper focuses on wide and shallow text classification models. Please revise the title.\n- The paper cites an ArXiv manuscript by Carreira-Perpinan and Alizadeh (2016) several times, which has the same title as the submitted paper. Please make the paper self-contained and include any supplementary material in the appendix.\n- In Fig 2 does the square mark PQ or OPQ? The paper does not distinguish OPQ and PQ properly at multiple places especially in the experiments.\n- The paper argues the wide and shallow models are the state of the art in small datasets. Is this really correct? What about transfer learning?\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
FastText.zip: Compressing text classification models
["Armand Joulin", "Edouard Grave", "Piotr Bojanowski", "Matthijs Douze", "Herve Jegou", "Tomas Mikolov"]
We consider the problem of producing compact architectures for text classification, such that the full model fits in a limited amount of memory. After considering different solutions inspired by the hashing literature, we propose a method built upon product quantization to store the word embeddings. While the original technique leads to a loss in accuracy, we adapt this method to circumvent the quantization artifacts. As a result, our approach produces a text classifier, derived from the fastText approach, which at test time requires only a fraction of the memory compared to the original one, without noticeably sacrificing the quality in terms of classification accuracy. Our experiments carried out on several benchmarks show that our approach typically requires two orders of magnitude less memory than fastText while being only slightly inferior with respect to accuracy. As a result, it outperforms the state of the art by a good margin in terms of the compromise between memory usage and accuracy.
["Natural language processing", "Supervised Learning", "Applications"]
https://openreview.net/forum?id=SJc1hL5ee
https://openreview.net/pdf?id=SJc1hL5ee
https://openreview.net/forum?id=SJc1hL5ee&noteId=B1Mcr6c4l
Under review as a conference paper at ICLR 2017FASTTEXT.ZIP:COMPRESSING TEXT CLASSIFICATION MODELSArmand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Herv ́e J ́egou & Tomas MikolovFacebook AI Researchfajoulin,egrave,bojanowski,matthijs,rvj,tmikolov g@fb.comABSTRACTWe consider the problem of producing compact architectures for text classifica-tion, such that the full model fits in a limited amount of memory. After consid-ering different solutions inspired by the hashing literature, we propose a methodbuilt upon product quantization to store word embeddings. While the originaltechnique leads to a loss in accuracy, we adapt this method to circumvent quan-tization artefacts. Combined with simple approaches specifically adapted to textclassification, our approach derived from fastText requires, at test time, onlya fraction of the memory compared to the original FastText, without noticeablysacrificing quality in terms of classification accuracy. Our experiments carried outon several benchmarks show that our approach typically requires two orders ofmagnitude less memory than fastText while being only slightly inferior withrespect to accuracy. As a result, it outperforms the state of the art by a good marginin terms of the compromise between memory usage and accuracy.1 I NTRODUCTIONText classification is an important problem in Natural Language Processing (NLP). Real world use-cases include spam filtering or e-mail categorization. It is a core component in more complex sys-tems such as search and ranking. Recently, deep learning techniques based on neural networkshave achieved state of the art results in various NLP applications. One of the main successes of deeplearning is due to the effectiveness of recurrent networks for language modeling and their applicationto speech recognition and machine translation (Mikolov, 2012). However, in other cases includingseveral text classification problems, it has been shown that deep networks do not convincingly beatthe prior state of the art techniques (Wang & Manning, 2012; Joulin et al., 2016).In spite of being (typically) orders of magnitude slower to train than traditional techniques basedon n-grams, neural networks are often regarded as a promising alternative due to compact modelsizes, in particular for character based models. This is important for applications that need to run onsystems with limited memory such as smartphones.This paper specifically addresses the compromise between classification accuracy and the modelsize. We extend our previous work implemented in the fastText library1. It is based on n-gramfeatures, dimensionality reduction, and a fast approximation of the softmax classifier (Joulin et al.,2016). We show that a few key ingredients, namely feature pruning, quantization, hashing, and re-training, allow us to produce text classification models with tiny size, often less than 100kB whentrained on several popular datasets, without noticeably sacrificing accuracy or speed.We plan to publish the code and scripts required to reproduce our results as an extension of thefastText library, thereby providing strong reproducible baselines for text classifiers that optimizethe compromise between the model size and accuracy. We hope that this will help the engineeringcommunity to improve existing applications by using more efficient models.This paper is organized as follows. Section 2 introduces related work, Section 3 describes our textclassification model and explains how we drastically reduce the model size. 
Section 4 shows the effectiveness of our approach in experiments on multiple text classification benchmarks.

1 https://github.com/facebookresearch/fastText

2 RELATED WORK

Models for text classification. Text classification is a problem that has its roots in many applications such as web search, information retrieval and document classification (Deerwester et al., 1990; Pang & Lee, 2008). Linear classifiers often obtain state-of-the-art performance while being scalable (Agarwal et al., 2014; Joachims, 1998; Joulin et al., 2016; McCallum & Nigam, 1998). They are particularly interesting when associated with the right features (Wang & Manning, 2012). They usually require storing embeddings for words and n-grams, which makes them memory inefficient.

Compression of language models. Our work is related to the compression of statistical language models. Classical approaches include feature pruning based on entropy (Stolcke, 2000) and quantization. Pruning aims to keep only the most important n-grams in the model, leaving out those with a probability lower than a specified threshold. Further, the individual n-grams can be compressed by quantizing the probability value, and by storing the n-gram itself more efficiently than as a sequence of characters. Various strategies have been developed, for example using tree structures or hash functions, and are discussed in Talbot & Brants (2008).

Compression for similarity estimation and search. There is a large body of literature on how to compress a set of vectors into compact codes, such that the comparison of two codes approximates a target similarity in the original space. The typical use-case of these methods considers an indexed dataset of compressed vectors, and a query for which we want to find the nearest neighbors in the indexed set. One of the most popular is Locality-sensitive hashing (LSH) by Charikar (2002), which is a binarization technique based on random projections that approximates the cosine similarity between two vectors through a monotonous function of the Hamming distance between the two corresponding binary codes. In our paper, LSH refers to this binarization strategy. (In the literature, LSH refers to multiple distinct strategies related to the Johnson-Lindenstrauss lemma; for instance, LSH sometimes refers to a partitioning technique with random projections allowing for sublinear search via cell probes, see the E2LSH variant of Datar et al. (2004).) Many subsequent works have improved this initial binarization technique, such as spectral hashing (Weiss et al., 2009), or Iterative Quantization (ITQ) (Gong & Lazebnik, 2011), which learns a rotation matrix minimizing the quantization loss of the binarization. We refer the reader to two recent surveys by Wang et al. (2014) and Wang et al. (2015) for an overview of the binary hashing literature.

Beyond these binarization strategies, more general quantization techniques derived from Jegou et al. (2011) offer better trade-offs between memory and the approximation of a distance estimator. The Product Quantization (PQ) method approximates the distances by calculating, in the compressed domain, the distance between the quantized approximations of the vectors. This method is statistically guaranteed to preserve the Euclidean distance between the vectors within an error bound directly related to the quantization error. The original PQ has been concurrently improved by Ge et al. (2013) and Norouzi & Fleet (2013), who learn an orthogonal transform minimizing the overall quantization loss. In our paper, we will consider the Optimized Product Quantization (OPQ) variant (Ge et al., 2013).
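To make the binarization variant of LSH used in this paper concrete, here is a minimal NumPy sketch written for this note (the function names and the toy setup are ours, not code from the paper): vectors are encoded by the signs of random orthogonal projections, and the cosine similarity is estimated from the Hamming distance between the codes.

```python
import numpy as np

def lsh_codes(X, n_bits, seed=0):
    """Binary codes from signs of random orthogonal projections (assumes n_bits <= dimension)."""
    d = X.shape[1]
    rng = np.random.default_rng(seed)
    # The improved variant mentioned later in the paper: a random *orthogonal* matrix
    # obtained by QR decomposition, instead of a plain Gaussian projection matrix.
    Q, _ = np.linalg.qr(rng.standard_normal((d, n_bits)))
    return X @ Q > 0

def cosine_from_hamming(code_a, code_b):
    """Estimate the cosine between the original vectors from the Hamming distance of the codes."""
    n_bits = code_a.shape[-1]
    hamming = np.count_nonzero(code_a != code_b)
    # For sign random projections, P(bit differs) = angle / pi (Charikar, 2002).
    return np.cos(np.pi * hamming / n_bits)

# Toy usage on two unit-norm vectors.
rng = np.random.default_rng(1)
x, y = rng.standard_normal((2, 64))
x, y = x / np.linalg.norm(x), y / np.linalg.norm(y)
codes = lsh_codes(np.stack([x, y]), n_bits=64)
print("true cosine :", round(float(x @ y), 3))
print("LSH estimate:", round(float(cosine_from_hamming(codes[0], codes[1])), 3))
```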
Softmax approximation. The aforementioned works approximate either the Euclidean distance or the cosine similarity (both being equivalent in the case of unit-norm vectors). However, in the context of fastText, we are specifically interested in approximating the maximum inner product involved in a softmax layer. Several approaches derived from LSH have recently been proposed to achieve this goal, such as Asymmetric LSH by Shrivastava & Li (2014), subsequently discussed by Neyshabur & Srebro (2015). In our work, since we are not constrained to purely binary codes, we resort to a more traditional encoding by employing a magnitude/direction parametrization of our vectors. Therefore we only need to encode/compress a unitary d-dimensional vector, which fits the aforementioned LSH and PQ methods well.

Neural network compression models. Recently, several research efforts have been conducted to compress the parameters of architectures involved in computer vision, namely for state-of-the-art Convolutional Neural Networks (CNNs) (Han et al., 2016; Lin et al., 2015). Some use vector quantization (Gong et al., 2014) while others binarize the network (Courbariaux et al., 2016). Denil et al. (2013) show that such classification models are easily compressed because they are over-parametrized, which concurs with early observations by LeCun et al. (1990). Some of these works aim at reducing both the model size and the speed. In our case, since the fastText classifier upon which our proposal is built is already very efficient, we are primarily interested in reducing the size of the model while keeping a comparable classification efficiency.

3 PROPOSED APPROACH

3.1 TEXT CLASSIFICATION

In the context of text classification, linear classifiers (Joulin et al., 2016) remain competitive with more sophisticated, deeper models, and are much faster to train. On top of standard tricks commonly used in linear text classification (Agarwal et al., 2014; Wang & Manning, 2012; Weinberger et al., 2009), Joulin et al. (2016) use a low-rank constraint to reduce the computational burden while sharing information between different classes. This is especially useful in the case of a large output space, where rare classes may have only a few training examples. In this paper, we focus on a similar model, namely one that minimizes the softmax loss \ell over the N documents:

\sum_{n=1}^{N} \ell(y_n; B A x_n),   (1)

where x_n is a bag of one-hot vectors and y_n the label of the n-th document. In the case of a large vocabulary and a large output space, the matrices A and B are big and can require gigabytes of memory. Below, we describe how we reduce this memory usage.
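As a concrete reading of Eq. (1), the sketch below is a toy dense re-implementation of the model of Section 3.1: a document is a bag of word and n-gram indices, A x_n is computed as the mean of the corresponding embedding rows of A, and B maps this hidden vector to class scores trained with the softmax loss. This is an illustration written for this note under those assumptions, not the fastText code: the class name and hyperparameters are ours, and fastText additionally relies on the hashing trick for n-grams and, for large output spaces, a hierarchical softmax.

```python
import numpy as np

def softmax(scores):
    scores = scores - scores.max()
    exp = np.exp(scores)
    return exp / exp.sum()

class ToyLinearTextClassifier:
    """Dense toy version of Eq. (1): scores = B A x_n, trained with the softmax loss."""

    def __init__(self, vocab_size, dim, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.uniform(-1 / dim, 1 / dim, size=(vocab_size, dim))  # word/n-gram embeddings
        self.B = np.zeros((n_classes, dim))                              # output (classifier) matrix

    def hidden(self, features):
        # x_n is a normalized bag of one-hot vectors, so A x_n is the mean of the embedding rows.
        return self.A[features].mean(axis=0)

    def step(self, features, label, lr=0.1):
        h = self.hidden(features)
        p = softmax(self.B @ h)
        grad = p.copy()
        grad[label] -= 1.0                                            # d(softmax loss)/d(scores)
        self.A[features] -= lr * (self.B.T @ grad) / len(features)    # toy update, ignores repeated indices
        self.B -= lr * np.outer(grad, h)
        return float(-np.log(p[label] + 1e-12))                       # loss on this document

# Toy usage: two classes, each document given as a list of word/n-gram indices.
model = ToyLinearTextClassifier(vocab_size=1000, dim=8, n_classes=2)
data = [([1, 7, 42], 0), ([3, 7, 99], 1)]
for _ in range(100):
    for feats, label in data:
        model.step(feats, label)
print([int(np.argmax(model.B @ model.hidden(f))) for f, _ in data])   # should print [0, 1]
```

The memory issue addressed in the rest of the section is visible here: A alone stores vocab_size x dim floats, and B grows with the size of the output space.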
3.2 BOTTOM-UP PRODUCT QUANTIZATION

Product quantization is a popular method for compressed-domain approximate nearest neighbor search (Jegou et al., 2011). As a compression technique, it approximates a real-valued vector by finding the closest vector in a pre-defined structured set of centroids, referred to as a codebook. This codebook is not enumerated, since it is extremely large. Instead, it is implicitly defined by its structure: a d-dimensional vector x \in R^d is approximated as

\hat{x} = \sum_{i=1}^{k} q_i(x),   (2)

where the different subquantizers q_i : x \mapsto q_i(x) are complementary in the sense that their respective centroids lie in distinct orthogonal subspaces, i.e., \forall i \neq j, \forall x, y, \langle q_i(x) | q_j(y) \rangle = 0. In the original PQ, the subspaces are aligned with the natural axes, while OPQ learns a rotation, which amounts to alleviating this constraint so as not to depend on the original coordinate system. Another way to see this is to consider that PQ splits a given vector x into k subvectors x^i, i = 1, ..., k, each of dimension d/k: x = [x^1 ... x^i ... x^k], and quantizes each subvector with a distinct k-means quantizer. Each subvector x^i is thus mapped to the closest centroid amongst 2^b centroids, where b is the number of bits required to store the quantization index of the subquantizer, typically b = 8. The reconstructed vector can take 2^{kb} distinct reproduction values and is stored in kb bits.

PQ estimates the inner product in the compressed domain as

x^\top y \approx \hat{x}^\top y = \sum_{i=1}^{k} q_i(x^i)^\top y^i.   (3)

This is a straightforward extension of the squared L2 distance estimation of Jegou et al. (2011). In practice, the vector estimate \hat{x} is trivially reconstructed from the codes, i.e., from the quantization indexes, by concatenating these centroids.

The two parameters involved in PQ, namely the number of subquantizers k and the number of bits b per quantization index, are typically set to k \in [2, d/2] and b = 8 to ensure byte alignment.

Discussion. PQ offers several interesting properties in our context of text classification. Firstly, the training is very fast because the subquantizers have a small number of centroids, i.e., 256 centroids for b = 8. Secondly, at test time it allows the reconstruction of the vectors with almost no computational and memory overhead. Thirdly, it has been successfully applied in computer vision, offering much better performance than binary codes, which makes it a natural candidate to compress relatively shallow models. As observed by Sánchez & Perronnin (2011), using PQ just before the last layer incurs a very limited loss in accuracy when combined with a support vector machine.

In the context of text classification, the norms of the vectors are widely spread, typically with a ratio of 1000 between the max and the min. Therefore k-means performs poorly because it optimizes an absolute error objective, so it maps all low-norm vectors to 0. A simple solution is to separate the norm and the angle of the vectors and to quantize them separately. This allows a quantization with no loss of performance, yet requires an extra b bits per vector.

Bottom-up strategy: re-training. The first works aiming at compressing CNN models, like the one proposed by Gong et al. (2014), used the reconstruction from off-the-shelf PQ, i.e., without any re-training. However, as observed in Sablayrolles et al. (2016), when using quantization methods like PQ, it is better to re-train the layers occurring after the quantization, so that the network can re-adjust itself to the quantization. There is a strong argument for this re-training strategy: the squared magnitude of the vectors is reduced, on average, by the average quantization error for any quantizer satisfying the Lloyd conditions; see Jegou et al. (2011) for details. This suggests a bottom-up learning strategy where we first quantize the input matrix, then retrain and quantize the output matrix (the input matrix being frozen). Experiments in Section 4 show that it is worth adopting this strategy.
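The following NumPy sketch, written for this note, illustrates the quantizer of Eqs. (2)-(3) together with the norm/direction split described above: each embedding is cut into k subvectors, each subvector is coded on b bits with its own k-means codebook, and inner products with a query are estimated directly from the codes. The class and function names are ours and the tiny k-means is deliberately simplistic; this is not the implementation used in the experiments.

```python
import numpy as np

def tiny_kmeans(X, n_centroids, n_iter=20, seed=0):
    """Very small k-means, enough for a demo (each real codebook has 2^b = 256 centroids)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), n_centroids, replace=False)].copy()
    for _ in range(n_iter):
        assign = ((X[:, None, :] - centroids[None]) ** 2).sum(-1).argmin(1)
        for j in range(n_centroids):
            if np.any(assign == j):
                centroids[j] = X[assign == j].mean(0)
    return centroids

class ToyProductQuantizer:
    """PQ with k subquantizers: each d/k-dimensional subvector is coded on b bits (Eq. 2)."""

    def __init__(self, d, k=4, b=8):
        self.k, self.ds, self.n_centroids = k, d // k, 2 ** b
        self.codebooks = []

    def _sub(self, X, i):
        return X[:, i * self.ds:(i + 1) * self.ds]

    def train(self, X):
        self.codebooks = [tiny_kmeans(self._sub(X, i), min(self.n_centroids, len(X)))
                          for i in range(self.k)]

    def encode(self, X):
        codes = np.empty((len(X), self.k), dtype=np.uint8)   # one byte per subquantizer
        for i, C in enumerate(self.codebooks):
            codes[:, i] = ((self._sub(X, i)[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
        return codes

    def decode(self, codes):
        """Reconstruct x_hat by concatenating the selected centroids."""
        return np.hstack([C[codes[:, i]] for i, C in enumerate(self.codebooks)])

    def inner_product(self, codes, y):
        """Eq. (3): <x, y> is approximated by sum_i <q_i(x^i), y^i>, computed from the codes."""
        return sum(self.codebooks[i][codes[:, i]] @ y[i * self.ds:(i + 1) * self.ds]
                   for i in range(self.k))

# Toy usage, with the norm/direction split used for text embeddings (the normalized variant):
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 16))
norms = np.linalg.norm(X, axis=1, keepdims=True)   # norms would be coded separately (extra b bits);
pq = ToyProductQuantizer(d=16, k=4, b=8)           # kept here as plain floats for simplicity
pq.train(X / norms)
codes = pq.encode(X / norms)
y = rng.standard_normal(16)
print("exact inner product :", round(float(X[0] @ y), 3))
print("PQ-based estimate   :", round(float(norms[0, 0] * pq.inner_product(codes[:1], y)[0]), 3))
```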
Memory savings with PQ. In practice, the bottom-up PQ strategy offers a compression factor of 10 without any noticeable loss of performance. Without re-training, we notice a drop in accuracy between 0.1% and 0.5%, depending on the dataset and setting; see Section 4 and the appendix.

3.3 FURTHER TEXT-SPECIFIC TRICKS

The memory usage strongly depends on the size of the vocabulary, which can be large in many text classification tasks. While it is clear that a large part of the vocabulary is useless or redundant, directly reducing the vocabulary to the most frequent words is not satisfactory: most of the frequent words, like "the" or "is", are not discriminative, in contrast to some rare words, e.g., in the context of tag prediction. In this section, we discuss a few heuristics to reduce the space taken by the dictionary. They lead to major memory reductions, in extreme cases by a factor of 100. We experimentally show that this drastic reduction is complementary with the PQ compression method, meaning that the combination of both strategies reduces the model size by a factor of up to 1000 for some datasets.

[Figure 1: Accuracy as a function of the memory per vector/embedding (in bytes) on 3 datasets from Zhang et al. (2015): Sogou, Yahoo and Yelp full. Curves: Full, PQ, OPQ, and LSH/PQ/OPQ with explicit norm encoding ("norm"); an extra byte is required when the norm is encoded explicitly.]

Pruning the vocabulary. Discovering which words or n-grams must be kept to preserve the overall performance is a feature selection problem. While many approaches have been proposed to select groups of variables during training (Bach et al., 2012; Meier et al., 2008), we are interested in selecting a fixed subset of K words and n-grams from a pre-trained model. This can be achieved by selecting the K embeddings that preserve as much of the model as possible, which reduces to selecting the K words and n-grams associated with the highest norms.

While this approach offers major memory savings, it has one drawback occurring in some particular cases: some documents may not contain any of the K best features, leading to a significant drop in performance. It is thus important to keep the K best features under the condition that they cover the whole training set. More formally, the problem is to find a subset S of the feature set V that maximizes the sum of the norms w_s of its features, under the constraint that all the documents in the training set D are covered:

\max_{S \subseteq V} \sum_{s \in S} w_s   s.t.   |S| \le K,   P \mathbf{1}_S \ge \mathbf{1}_D,

where P is a matrix such that P_{ds} = 1 if the s-th feature is in the d-th document, and 0 otherwise. This problem is directly related to set covering problems that are NP-hard (Feige, 1998). Standard greedy approaches require storing an inverted index or doing multiple passes over the dataset, which is prohibitive on very large datasets (Chierichetti et al., 2010). This problem can be cast as an instance of online submodular maximization with a rank constraint (Badanidiyuru et al., 2014; Bateni et al., 2010). In our case, we use a simple online parallelizable greedy approach: for each document, we verify whether it is already covered by a retained feature and, if not, we add the feature with the highest norm to our set of retained features. If the number of retained features is below K, we add the features with the highest norm that have not yet been picked.
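As an illustration of this pruning step, the sketch below implements the greedy heuristic just described in a single-pass sequential form (the paper runs it online and in parallel): documents are given as lists of feature ids, w_s is the norm of the embedding of feature s, features are first retained so that every training document is covered, and the set is then topped up with the highest-norm features until K of them are kept. The data layout and function name are our own choices for this note, not the fastText code; note that the sketch guarantees coverage of the training set but not the bound |S| <= K when the two conflict.

```python
import numpy as np

def coverage_prune(docs, norms, K):
    """Greedy max-coverage pruning: keep features covering every document, then top up by norm.

    docs  : list of lists of feature ids (words and n-grams) for the training documents
    norms : array such that norms[s] is the norm w_s of the embedding of feature s
    """
    kept = set()
    for features in docs:                                   # one pass over the training set
        if not features or kept.intersection(features):
            continue                                        # document already covered
        kept.add(max(features, key=lambda s: norms[s]))     # add its highest-norm feature
    for s in np.argsort(-norms):                            # top up with the best remaining features
        if len(kept) >= K:
            break
        kept.add(int(s))
    return kept

# Toy usage: feature 3 has a tiny norm but is the only feature of the last document.
docs = [[0, 1], [1, 2], [3]]
norms = np.array([0.9, 2.0, 1.5, 0.1])
print(sorted(coverage_prune(docs, norms, K=2)))             # -> [1, 3]
```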
Hashing trick & Bloom filter. On small models, the dictionary can take a significant portion of the memory. Instead of saving it, we extend the hashing trick used in Joulin et al. (2016) to both words and n-grams. This strategy is also used in Vowpal Wabbit (Agarwal et al., 2014) in the context of online training. This allows us to save around 1-2 MB with almost no overhead at test time (just the cost of computing the hashing function).

Pruning the vocabulary while using the hashing trick requires keeping a list of the indices of the K remaining buckets. At test time, a binary search over this list of indices is required. It has a complexity of O(log K) and a memory overhead of a few hundred kilobytes. Using Bloom filters instead reduces the complexity to O(1) at test time and saves a few hundred kilobytes. However, in practice, it degrades performance.

4 EXPERIMENTS

This section evaluates the quality of our model compression pipeline and compares it to other compression methods on different text classification problems, and to other compact text classifiers.

Evaluation protocol and datasets. Our experimental pipeline is as follows: we train a model using fastText with the default setting unless specified otherwise, that is, 2M buckets, a learning rate of 0.1 and 10 training epochs. The dimensionality d of the embeddings is set to powers of 2 to avoid border effects that could make the interpretation of the results more difficult. As baselines, we use Locality-Sensitive Hashing (LSH) (Charikar, 2002), PQ (Jegou et al., 2011) and OPQ (Ge et al., 2013) (the non-parametric variant). Note that we use an improved version of LSH where random orthogonal matrices are used instead of random matrix projections (Jégou et al., 2008). In a first series of experiments, we use the 8 datasets and evaluation protocol of Zhang et al. (2015). These datasets contain a few million documents and have at most 10 classes. We also explore the limit of quantization on a dataset with an extremely large output space, namely a tag dataset extracted from the YFCC100M collection (Thomee et al., 2016), referred to as FlickrTag in the rest of this paper.

[Figure 2: Loss of accuracy as a function of the model size (log scale), with one panel per dataset (AG, Amazon full, Amazon polarity, DBPedia, Sogou, Yahoo, Yelp full, Yelp polarity). We compare the compressed model with different levels of pruning with NPQ against the full fastText model, Zhang et al. (2015) and Xiao & Cho (2016).]

4.1 SMALL DATASETS

Compression techniques. We compare three popular methods used for similarity estimation with compact codes: LSH, PQ and OPQ, on the datasets released by Zhang et al. (2015). Figure 1 shows the accuracy as a function of the number of bytes used per embedding, which corresponds to the number k of subvectors in the case of PQ and OPQ. See more results in the appendix. As discussed in Section 2, LSH reproduces the cosine similarity and is therefore not adapted to un-normalized data; we therefore only report results with normalization. Once normalized, PQ and OPQ are almost lossless even when using only k = 4 subquantizers per embedding (equivalently, bytes). We observe that using k = d/2, i.e., half of the components of the embeddings, works well in practice. In the rest of the paper, if not stated otherwise, we focus on this setting. The difference between the normalized versions of PQ and OPQ is limited and depends on the dataset. We therefore adopt the normalized PQ (NPQ) for the rest of this study, since it is faster to train.

word Entropy Norm word Entropy Norm.
1 354 mediocre 1399 1, 2 176 disappointing 454 2the 3 179 so-so 2809 3and 4 1639 lacks 1244 4i 5 2374 worthless 1757 5a 6 970 dreadful 4358 6to 7 1775 drm 6395 7it 8 1956 poorly 716 8of 9 2815 uninspired 4245 9this 10 3275 worst 402 10Table 1: Best ranked words w.r.t. entropy ( left) and norm ( right ) on the Amazon full review dataset.We give the rank for both criteria. The norm ranking filters out words carrying little information.3Data available at https://research.facebook.com/research/fasttext/6Under review as a conference paper at ICLR 2017Dataset full 64KiB 32KiB 16KiBAG 65M 92.1 91.4 90.6 89.1Amazon full 108M 60.0 58.8 56.0 52.9Amazon pol. 113M 94.5 93.3 92.1 89.3DBPedia 87M 98.4 98.2 98.1 97.4Sogou 73M 96.4 96.4 96.3 95.5Yahoo 122M 72.1 70.0 69.0 69.2Yelp full 78M 63.8 63.2 62.4 58.7Yelp pol. 77M 95.7 95.3 94.9 93.2Average diff. [ %] 0 -0.8 -1.7 -3.5Table 2: Performance on very small models. We use a quantization with k= 1, hashing and anextreme pruning. The last row shows the average drop of performance for different size.Pruning. Figure 2 shows the performance of our model with different sizes. We fix k=d=2anduse different pruning thresholds. NPQ offers a compression rate of 10compared to the full model.As the pruning becomes more agressive, the overall compression can increase up up to 1;000with little drop of performance and no additional overhead at test time. In fact, using a smallerdictionary makes the model faster at test time. We also compare with character-level ConvolutionalNeural Networks (CNN) (Zhang et al., 2015; Xiao & Cho, 2016). They are attractive models fortext classification because they achieve similar performance with less memory usage than linearmodels (Xiao & Cho, 2016). Even though fastText with the default setting uses more memory,NPQ is already on par with CNNs’ memory usage. Note that CNNs are not quantized, and it wouldbe worth seeing how much they can be quantized with no drop of performance. Such a study isbeyond the scope of this paper. Our pruning is based on the norm of the embeddings accordingto the guidelines of Section 3.3. Table 1 compares the ranking obtained with norms to the rankingobtained using entropy, which is commonly used in unsupervised settings Stolcke (2000).Extreme compression. Finally, in Table 2, we explore the limit of quantized model by lookingat the performance obtained for models under 64KiB. Surprisingly, even at 64KiB and 32KiB, thedrop of performance is only around 0:8%and1:7%despite a compression rate of 1;0004;000.4.2 L ARGE DATASET : FLICKR TAGIn this section, we explore the limit of compression algorithms on very large datasets. Similarto Joulin et al. (2016), we consider a hashtag prediction dataset containing 312;116labels. We setthe minimum count for words at 10, leading to a dictionary of 1;427;667words. We take 10Mbuckets for n-grams and a hierarchical softmax. We refer to this dataset as FlickrTag.Output encoding. We are interested in understanding how the performance degrades if the classi-fier is also quantized ( i.e., the matrix Bin Eq. 1) and when the pruning is at the limit of the minimumnumber of features required to cover the full dataset.Model k norm retrain Acc. Sizefull (uncompressed) 45.4 12 GiBInput 128 45.0 1.7 GiBInput 128 x 45.3 1.8 GiBInput 128 x x 45.5 1.8 GiBInput+Output 128 x 45.2 1.5 GiBInput+Output 128 x x 45.4 1.5 GiBTable 3: FlickrTag: Influence of quantizing the output matrix on performance. We use PQ forquantization with an optional normalization. 
We also retrain the output matrix after quantizing theinput one. The ”norm” refers to the separate encoding of the magnitude and angle, while ”retrain”refers to the re-training bottom-up PQ method described in Section 3.2.7Under review as a conference paper at ICLR 2017Table 3 shows that quantizing both the “input” matrix ( i.e.,Ain Eq. 1) and the “output” matrix ( i.e.,B) does not degrade the performance compared to the full model. We use embeddings with d= 256dimensions and use k=d=2subquantizers. We do not use any text specific tricks, which leads toa compression factor of 8. Note that even if the output matrix is not retrained over the embeddings,the performance is only 0:2%away from the full model. As shown in the Appendix, using lesssubquantizers significantly decreases the performance for a small memory gain.Model full Entropy pruning Norm pruning Max-Cover pruning#embeddings 11.5M 2M 1M 2M 1M 2M 1MMemory 12GiB 297MiB 174MiB 305MiB 179MiB 305MiB 179MiBCoverage [ %] 88.4 70.5 70.5 73.2 61.9 88.4 88.4Accuracy 45.4 32.1 30.5 41.6 35.8 45.5 43.9Table 4: FlickrTag: Comparison of entropy pruning, norm pruning and max-cover pruning methods.We show the coverage of the test set for each method.Pruning. Table 4 shows how the performance evolves with pruning. We measure this effect on topof a fully quantized model. The full model misses 11:6%of the test set because of missing words(some documents are either only composed of hashtags or have only rare words). There are 312;116labels and thus it seems reasonable to keep embeddings in the order of the million. A naive pruningwith1M features misses about 3040% of the test set, leading to a significant drop of performance.On the other hand, even though the max-coverage pruning approach was set on the train set, it doesnot suffer from any coverage loss on the test set. This leads to a smaller drop of performance. If thepruning is too aggressive, however, the coverage decreases significantly.5 F UTURE WORKIt may be possible to obtain further reduction of the model size in the future. One idea is to conditionthe size of the vectors (both for the input features and the labels) based on their frequency (Chenet al., 2015; Grave et al., 2016). For example, it is probably not worth representing the rare labelsby full 256-dimensional vectors in the case of the FlickrTag dataset. Thus, conditioning the vectorsize on the frequency and norm seems like an interesting direction to explore in the future.We may also consider combining the entropy and norm pruning criteria: instead of keeping thefeatures in the model based just on the frequency or the norm, we can use both to keep a good set offeatures. This could help to keep features that are both frequent and discriminative, and thereby toreduce the coverage problem that we have observed.Additionally, instead of pruning out the less useful features, we can decompose them into smallerunits (Mikolov et al., 2012). For example, this can be achieved by splitting every non-discriminativeword into a sequence of character trigrams. This could help in cases where training and test examplesare very short (for example just a single word).6 C ONCLUSIONIn this paper, we have presented several simple techniques to reduce, by several orders of magnitude,the memory complexity of certain text classifiers without sacrificing accuracy nor speed. 
This isachieved by applying discriminative pruning which aims to keep only important features in thetrained model, and by performing quantization of the weight matrices and hashing of the dictionary.We will publish the code as an extension of the fastText library. We hope that our work willserve as a baseline to the research community, where there is an increasing interest for comparingthe performance of various deep learning text classifiers for a given number of parameters. Overall,compared to recent work based on convolutional neural networks, fastText.zip is often moreaccurate, while requiring several orders of magnitude less time to train on common CPUs, andincurring a fraction of the memory complexity.8Under review as a conference paper at ICLR 2017REFERENCESAlekh Agarwal, Olivier Chapelle, Miroslav Dud ́ık, and John Langford. A reliable effective terascalelinear learning system. Journal of Machine Learning Research , 15(1):1111–1133, 2014.Francis Bach, Rodolphe Jenatton, Julien Mairal, and Guillaume Obozinski. Optimization withsparsity-inducing penalties. Foundations and Trends Rin Machine Learning , 4(1):1–106, 2012.Ashwinkumar Badanidiyuru, Baharan Mirzasoleiman, Amin Karbasi, and Andreas Krause. Stream-ing submodular maximization: Massive data summarization on the fly. In SIGKDD , pp. 671–680.ACM, 2014.Mohammad Hossein Bateni, Mohammad Taghi Hajiaghayi, and Morteza Zadimoghaddam. Sub-modular secretary problem and extensions. In Approximation, Randomization, and CombinatorialOptimization. Algorithms and Techniques , pp. 39–52. Springer, 2010.Moses S. Charikar. Similarity estimation techniques from rounding algorithms. In STOC , pp. 380–388, May 2002.Welin Chen, David Grangier, and Michael Auli. Strategies for training large vocabulary neurallanguage models. arXiv preprint arXiv:1512.04906 , 2015.Flavio Chierichetti, Ravi Kumar, and Andrew Tomkins. Max-cover in map-reduce. In InternationalConference on World Wide Web , 2010.Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarizedneural networks: Training neural networks with weights and activations constrained to +1 or -1.arXiv preprint arXiv:1602.02830 , 2016.M. Datar, N. Immorlica, P. Indyk, and V .S. Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In Proceedings of the Symposium on Computational Geometry , pp. 253–262,2004.Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman.Indexing by latent semantic analysis. Journal of the American society for information science ,1990.Misha Denil, Babak Shakibi, Laurent Dinh, Marc-Aurelio Ranzato, and Nando et all de Freitas.Predicting parameters in deep learning. In NIPS , pp. 2148–2156, 2013.Uriel Feige. A threshold of ln n for approximating set cover. JACM , 45(4):634–652, 1998.Tiezheng Ge, Kaiming He, Qifa Ke, and Jian Sun. Optimized product quantization for approximatenearest neighbor search. In CVPR , June 2013.Yunchao Gong and Svetlana Lazebnik. Iterative quantization: A procrustean approach to learningbinary codes. In CVPR , June 2011.Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional net-works using vector quantization. arXiv preprint arXiv:1412.6115 , 2014.Edouard Grave, Armand Joulin, Moustapha Ciss ́e, David Grangier, and Herv ́e J ́egou. Efficientsoftmax approximation for gpus. arXiv preprint arXiv:1609.04309 , 2016.Song Han, Huizi Mao, and William J Dally. 
Deep compression: Compressing deep neural networkswith pruning, trained quantization and huffman coding. In ICLR , 2016.Herv ́e J ́egou, Matthijs Douze, and Cordelia Schmid. Hamming embedding and weak geometricconsistency for large scale image search. In ECCV , October 2008.Herv ́e Jegou, Matthijs Douze, and Cordelia Schmid. Product quantization for nearest neighborsearch. IEEE Trans. PAMI , January 2011.Thorsten Joachims. Text categorization with support vector machines: Learning with many relevantfeatures . Springer, 1998.9Under review as a conference paper at ICLR 2017Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efficienttext classification. arXiv preprint arXiv:1607.01759 , 2016.Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. NIPS , 2:598–605, 1990.Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. Neural networks withfew multiplications. arXiv preprint arXiv:1510.03009 , 2015.Andrew McCallum and Kamal Nigam. A comparison of event models for naive bayes text classifi-cation. In AAAI workshop on learning for text categorization , 1998.Lukas Meier, Sara Van De Geer, and Peter B ̈uhlmann. The group lasso for logistic regression.Journal of the Royal Statistical Society: Series B (Statistical Methodology) , 70(1):53–71, 2008.Tomas Mikolov. Statistical language models based on neural networks. In PhD thesis . VUT Brno,2012.Tomas Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and J Cernocky.Subword language modeling with neural networks. preprint , 2012.Behnam Neyshabur and Nathan Srebro. On symmetric and asymmetric lshs for inner product search.InICML , pp. 1926–1934, 2015.Mohammad Norouzi and David Fleet. Cartesian k-means. In CVPR , June 2013.Bo Pang and Lillian Lee. Opinion mining and sentiment analysis. Foundations and trends in infor-mation retrieval , 2008.Alexandre Sablayrolles, Matthijs Douze, Herv ́e J ́egou, and Nicolas Usunier. How should we evalu-ate supervised hashing? arXiv preprint arXiv:1609.06753 , 2016.Jorge S ́anchez and Florent Perronnin. High-dimensional signature compression for large-scale im-age classification. In CVPR , 2011.Anshumali Shrivastava and Ping Li. Asymmetric LSH for sublinear time maximum inner productsearch. In NIPS , pp. 2321–2329, 2014.Andreas Stolcke. Entropy-based pruning of backoff language models. arXiv preprint cs/0006025 ,2000.David Talbot and Thorsten Brants. Randomized language models via perfect hash functions. InACL, 2008.Bart Thomee, David A Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland,Damian Borth, and Li-Jia Li. Yfcc100m: The new data in multimedia research. In Communica-tions of the ACM , 2016.Jingdong Wang, Heng Tao Shen, Jingkuan Song, and Jianqiu Ji. Hashing for similarity search: Asurvey. arXiv preprint arXiv:1408.2927 , 2014.Jun Wang, Wei Liu, Sanjiv Kumar, and Shih-Fu Chang. Learning to hash for indexing big data - Asurvey. CoRR , abs/1509.05472, 2015.Sida Wang and Christopher D Manning. Baselines and bigrams: Simple, good sentiment and topicclassification. In ACL, 2012.Kilian Q Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. Featurehashing for large scale multitask learning. In ICML , 2009.Yair Weiss, Antonio Torralba, and Rob Fergus. Spectral hashing. In NIPS , December 2009.Yijun Xiao and Kyunghyun Cho. Efficient character-level document classification by combiningconvolution and recurrent layers. arXiv preprint arXiv:1602.00367 , 2016.Xiang Zhang, Junbo Zhao, and Yann LeCun. 
Character-level convolutional networks for text clas-sification. In NIPS , 2015.10Under review as a conference paper at ICLR 2017APPENDIXIn the appendix, we show some additional results. The model used in these experiments only had1M ngram buckets. In Table 5, we show a thorough comparison of LSH, PQ and OPQ on 8differentdatasets. Table 7 summarizes the comparison with CNNs in terms of accuracy and size. Table 8show a thorough comparison of the hashing trick and the Bloom filters.Quant. k norm AG Amz. f. Amz. p. DBP Sogou Yah. Yelp f. Yelp p.full 92.1 36M 59.8 97M 94.5 104M 98.4 67M 96.3 47M 72 120M 63.7 56M 95.7 53Mfull,nodict 92.1 34M 59.9 78M 94.5 83M 98.4 56M 96.3 42M 72.2 91M 63.6 48M 95.6 46MLSH 8 88.7 8.5M 51.3 20M 90.3 21M 92.7 14M 94.2 11M 54.8 23M 56.7 12M 92.2 12MPQ 8 91.7 8.5M 59.3 20M 94.4 21M 97.4 14M 96.1 11M 71.3 23M 62.8 12M 95.4 12MOPQ 8 91.9 8.5M 59.3 20M 94.4 21M 96.9 14M 95.8 11M 71.4 23M 62.5 12M 95.4 12MLSH 8 x 91.9 9.5M 59.4 22M 94.5 24M 97.8 16M 96.2 12M 71.6 26M 63.4 14M 95.6 13MPQ 8 x 92.0 9.5M 59.8 22M 94.5 24M 98.4 16M 96.3 12M 72.1 26M 63.7 14M 95.6 13MOPQ 8 x 92.1 9.5M 59.9 22M 94.5 24M 98.4 16M 96.3 12M 72.2 26M 63.6 14M 95.6 13MLSH 4 88.3 4.3M 50.5 9.7M 88.9 11M 91.6 7.0M 94.3 5.3M 54.6 12M 56.5 6.0M 92.9 5.7MPQ 4 91.6 4.3M 59.2 9.7M 94.4 11M 96.3 7.0M 96.1 5.3M 71.0 12M 62.2 6.0M 95.4 5.7MOPQ 4 91.7 4.3M 59.0 9.7M 94.4 11M 96.9 7.0M 95.6 5.3M 71.2 12M 62.6 6.0M 95.4 5.7MLSH 4 x 92.1 5.3M 59.2 13M 94.4 13M 97.7 8.8M 96.2 6.6M 71.1 15M 63.1 7.4M 95.5 7.2MPQ 4 x 92.1 5.3M 59.8 13M 94.5 13M 98.4 8.8M 96.3 6.6M 72.0 15M 63.6 7.5M 95.6 7.2MOPQ 4 x 92.2 5.3M 59.8 13M 94.5 13M 98.3 8.8M 96.3 6.6M 72.1 15M 63.7 7.5M 95.6 7.2MLSH 2 87.7 2.2M 50.1 4.9M 88.9 5.2M 90.6 3.5M 93.9 2.7M 51.4 5.7M 56.6 3.0M 91.3 2.9MPQ 2 91.1 2.2M 58.7 4.9M 94.4 5.2M 87.1 3.6M 95.3 2.7M 69.5 5.7M 62.1 3.0M 95.4 2.9MOPQ 2 91.4 2.2M 58.2 4.9M 94.3 5.2M 91.6 3.6M 94.2 2.7M 69.6 5.7M 62.1 3.0M 95.4 2.9MLSH 2 x 91.8 3.2M 58.6 7.3M 94.3 7.8M 97.1 5.3M 96.1 4.0M 69.7 8.6M 62.7 4.5M 95.5 4.3MPQ 2 x 91.9 3.2M 59.6 7.3M 94.5 7.8M 98.1 5.3M 96.3 4.0M 71.3 8.6M 63.4 4.5M 95.6 4.3MOPQ 2 x 92.1 3.2M 59.5 7.3M 94.5 7.8M 98.1 5.3M 96.2 4.0M 71.5 8.6M 63.4 4.5M 95.6 4.3MTable 5: Comparison between standard quantization methods. The original model has a dimension-ality of 8and2M buckets. Note that all of the methods are without dictionary.k co AG Amz. f. Amz. p. DBP Sogou Yah. Yelp f. 
Yelp p.full, nodict 92.1 34M 59.8 78M 94.5 83M 98.4 56M 96.3 42M 72.2 91M 63.7 48M 95.6 46M8 full 92.0 9.5M 59.8 22M 94.5 24M 98.4 16M 96.3 12M 72.1 26M 63.7 14M 95.6 13M4 full 92.1 5.3M 59.8 13M 94.5 13M 98.4 8.8M 96.3 6.6M 72 15M 63.6 7.5M 95.6 7.2M2 full 91.9 3.2M 59.6 7.3M 94.5 7.8M 98.1 5.3M 96.3 4.0M 71.3 8.6M 63.4 4.5M 95.6 4.3M8 200K 92.0 2.5M 59.7 2.5M 94.3 2.5M 98.5 2.5M 96.6 2.5M 71.8 2.5M 63.3 2.5M 95.6 2.5M8 100K 91.9 1.3M 59.5 1.3M 94.3 1.3M 98.5 1.3M 96.6 1.3M 71.6 1.3M 63.4 1.3M 95.6 1.3M8 50K 91.7 645K 59.7 645K 94.3 644K 98.5 645K 96.6 645K 71.5 645K 63.2 645K 95.6 644K8 10K 91.3 137K 58.6 137K 93.2 137K 98.5 137K 96.5 137K 71.3 137K 63.3 137K 95.4 137K4 200K 92.0 1.8M 59.7 1.8M 94.3 1.8M 98.5 1.8M 96.6 1.8M 71.7 1.8M 63.3 1.8M 95.6 1.8M4 100K 91.9 889K 59.5 889K 94.4 889K 98.5 889K 96.6 889K 71.7 889K 63.4 889K 95.6 889K4 50K 91.7 449K 59.6 449K 94.3 449K 98.5 450K 96.6 449K 71.4 450K 63.2 449K 95.5 449K4 10K 91.5 98K 58.6 98K 93.2 98K 98.5 98K 96.5 98K 71.2 98K 63.3 98K 95.4 98K2 200K 91.9 1.4M 59.6 1.4M 94.3 1.4M 98.4 1.4M 96.5 1.4M 71.5 1.4M 63.2 1.4M 95.5 1.4M2 100K 91.6 693K 59.5 693K 94.3 693K 98.4 694K 96.6 693K 71.1 694K 63.2 693K 95.6 693K2 50K 91.6 352K 59.6 352K 94.3 352K 98.4 352K 96.5 352K 71.1 352K 63.2 352K 95.6 352K2 10K 91.3 78K 58.5 78K 93.2 78K 98.4 79K 96.5 78K 70.8 78K 63.2 78K 95.3 78KTable 6: Comparison with different quantization and level of pruning. “co” is the cut-off parameterof the pruning.11Under review as a conference paper at ICLR 2017Dataset Zhang et al. (2015) Xiao & Cho (2016) fastText +PQ,k=d=2AG 90.2 108M 91.4 80M 91.9 889KAmz. f. 59.5 10.8M 59.2 1.6M 59.6 449KAmz. p. 94.5 10.8M 94.1 1.6M 94.3 449KDBP 98.3 108M 98.6 1.2M 98.5 98KSogou 95.1 108M 95.2 1.6M 96.5 98KYah. 70.5 108M 71.4 80M 71.7 889KYelp f. 61.6 108M 61.8 1.4M 63.3 98KYelp p. 94.8 108M 94.5 1.2M 95.5 449KTable 7: Comparison between CNNs and fastText with and without quantization. The numbersfor Zhang et al. (2015) are reported from Xiao & Cho (2016). Note that for the CNNs, we reportthe size of the model under the assumption that they use float32 storage. For fastText (+PQ) wereport the memory used in RAM at test time.Quant. Bloom co AG Amz. f. Amz. p. DBP Sogou Yah. Yelp f. Yelp p.full,nodict 92.1 34M 59.8 78M 94.5 83M 98.4 56M 96.3 42M 72.2 91M 63.7 48M 95.6 46MNPQ 200K 91.9 1.4M 59.6 1.4M 94.3 1.4M 98.4 1.4M 96.5 1.4M 71.5 1.4M 63.2 1.4M 95.5 1.4MNPQ x 200K 92.2 830K 59.3 830K 94.1 830K 98.4 830K 96.5 830K 70.7 830K 63.0 830K 95.5 830KNPQ 100K 91.6 693K 59.5 693K 94.3 693K 98.4 694K 96.6 693K 71.1 694K 63.2 693K 95.6 693KNPQ x 100K 91.8 420K 59.1 420K 93.9 420K 98.4 420K 96.5 420K 70.6 420K 62.8 420K 95.3 420KNPQ 50K 91.6 352K 59.6 352K 94.3 352K 98.4 352K 96.5 352K 71.1 352K 63.2 352K 95.6 352KNPQ x 50K 91.5 215K 58.8 215K 93.6 215K 98.3 215K 96.5 215K 70.1 215K 62.7 215K 95.1 215KNPQ 10K 91.3 78K 58.5 78K 93.2 78K 98.4 79K 96.5 78K 70.8 78K 63.2 78K 95.3 78KNPQ x 10K 90.8 51K 56.8 51K 91.7 51K 98.1 51K 96.1 51K 68.7 51K 61.7 51K 94.5 51KTable 8: Comparison with and without Bloom filters. For NPQ, we set d= 8andk= 2.12Under review as a conference paper at ICLR 2017Model k norm retrain Acc. 
Sizefull 45.4 12GInput 128 45.0 1.7GInput 128 x 45.3 1.8GInput 128 x x 45.5 1.8GInput+Output 128 x 45.2 1.5GInput+Output 128 x x 45.4 1.5GInput+Output, co=2M 128 x x 45.5 305MInput+Output, n co=1M 128 x x 43.9 179MInput 64 44.0 1.1GInput 64 x 44.7 1.1GInput 64 x 44.9 1.1GInput+Output 64 x 44.6 784MInput+Output 64 x x 44.8 784MInput+Output, co=2M 64 x 42.5 183MInput+Output, co=1M 64 x 39.9 118MInput+Output, co=2M 64 x x 45.0 183MInput+Output, co=1M 64 x x 43.4 118MInput 32 40.5 690MInput 32 x 42.4 701MInput 32 x x 42.9 701MInput+Output 32 x 42.3 435MInput+Output 32 x x 42.8 435MInput+Output, co=2M 32 x 35.0 122MInput+Output, co=1M 32 x 32.6 88MInput+Output, co=2M 32 x x 43.3 122MInput+Output, co=1M 32 x x 41.6 88MTable 9: FlickrTag: Comparison for a large dataset of (i) different quantization methods and param-eters, (ii) with or without re-training.13
H1PiU1UVg
SJc1hL5ee
ICLR.cc/2017/conference/-/paper324/official/review
{"title": "Effective if simple combination of existing techniques for text-classifier compression", "rating": "6: Marginally above acceptance threshold", "review": "The paper proposes a series of tricks for compressing fast (linear) text classification models. The paper is clearly written, and the results are quite strong. The main compression is achieved via product quantization, a technique which has been explored in other applications within the neural network model compression literature. In addition to the Gong et al. work which was cited, it would be worth mentioning Quantized Convolutional Neural Networks for Mobile Devices (CVPR 2016, https://arxiv.org/pdf/1512.06473v3.pdf), which similarly incorporates fine tuning to mitigate losses due to quantization error.\n\nAs such, one criticism of the paper is that it is a more-or-less straightforward application of techniques that have already been shown to be effective elsewhere in the model compression literature, and so isn't particularly surprising or deep from a technical perspective. However, this is as far as I am aware the first work applying these techniques to text classification, and the results are strong enough that I think it will be of interest to those working on models for text-based tasks.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
FastText.zip: Compressing text classification models
["Armand Joulin", "Edouard Grave", "Piotr Bojanowski", "Matthijs Douze", "Herve Jegou", "Tomas Mikolov"]
We consider the problem of producing compact architectures for text classification, such that the full model fits in a limited amount of memory. After considering different solutions inspired by the hashing literature, we propose a method built upon product quantization to store the word embeddings. While the original technique leads to a loss in accuracy, we adapt this method to circumvent the quantization artifacts. As a result, our approach produces a text classifier, derived from the fastText approach, which at test time requires only a fraction of the memory compared to the original one, without noticeably sacrificing the quality in terms of classification accuracy. Our experiments carried out on several benchmarks show that our approach typically requires two orders of magnitude less memory than fastText while being only slightly inferior with respect to accuracy. As a result, it outperforms the state of the art by a good margin in terms of the compromise between memory usage and accuracy.
["Natural language processing", "Supervised Learning", "Applications"]
https://openreview.net/forum?id=SJc1hL5ee
https://openreview.net/pdf?id=SJc1hL5ee
https://openreview.net/forum?id=SJc1hL5ee&noteId=H1PiU1UVg
Under review as a conference paper at ICLR 2017FASTTEXT.ZIP:COMPRESSING TEXT CLASSIFICATION MODELSArmand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Herv ́e J ́egou & Tomas MikolovFacebook AI Researchfajoulin,egrave,bojanowski,matthijs,rvj,tmikolov g@fb.comABSTRACTWe consider the problem of producing compact architectures for text classifica-tion, such that the full model fits in a limited amount of memory. After consid-ering different solutions inspired by the hashing literature, we propose a methodbuilt upon product quantization to store word embeddings. While the originaltechnique leads to a loss in accuracy, we adapt this method to circumvent quan-tization artefacts. Combined with simple approaches specifically adapted to textclassification, our approach derived from fastText requires, at test time, onlya fraction of the memory compared to the original FastText, without noticeablysacrificing quality in terms of classification accuracy. Our experiments carried outon several benchmarks show that our approach typically requires two orders ofmagnitude less memory than fastText while being only slightly inferior withrespect to accuracy. As a result, it outperforms the state of the art by a good marginin terms of the compromise between memory usage and accuracy.1 I NTRODUCTIONText classification is an important problem in Natural Language Processing (NLP). Real world use-cases include spam filtering or e-mail categorization. It is a core component in more complex sys-tems such as search and ranking. Recently, deep learning techniques based on neural networkshave achieved state of the art results in various NLP applications. One of the main successes of deeplearning is due to the effectiveness of recurrent networks for language modeling and their applicationto speech recognition and machine translation (Mikolov, 2012). However, in other cases includingseveral text classification problems, it has been shown that deep networks do not convincingly beatthe prior state of the art techniques (Wang & Manning, 2012; Joulin et al., 2016).In spite of being (typically) orders of magnitude slower to train than traditional techniques basedon n-grams, neural networks are often regarded as a promising alternative due to compact modelsizes, in particular for character based models. This is important for applications that need to run onsystems with limited memory such as smartphones.This paper specifically addresses the compromise between classification accuracy and the modelsize. We extend our previous work implemented in the fastText library1. It is based on n-gramfeatures, dimensionality reduction, and a fast approximation of the softmax classifier (Joulin et al.,2016). We show that a few key ingredients, namely feature pruning, quantization, hashing, and re-training, allow us to produce text classification models with tiny size, often less than 100kB whentrained on several popular datasets, without noticeably sacrificing accuracy or speed.We plan to publish the code and scripts required to reproduce our results as an extension of thefastText library, thereby providing strong reproducible baselines for text classifiers that optimizethe compromise between the model size and accuracy. We hope that this will help the engineeringcommunity to improve existing applications by using more efficient models.This paper is organized as follows. Section 2 introduces related work, Section 3 describes our textclassification model and explains how we drastically reduce the model size. 
Section 4 shows theeffectiveness of our approach in experiments on multiple text classification benchmarks.1https://github.com/facebookresearch/fastText1Under review as a conference paper at ICLR 20172 R ELATED WORKModels for text classification. Text classification is a problem that has its roots in many applica-tions such as web search, information retrieval and document classification (Deerwester et al., 1990;Pang & Lee, 2008). Linear classifiers often obtain state-of-the-art performance while being scal-able (Agarwal et al., 2014; Joachims, 1998; Joulin et al., 2016; McCallum & Nigam, 1998). Theyare particularly interesting when associated with the right features (Wang & Manning, 2012). Theyusually require storing embeddings for words and n-grams, which makes them memory inefficient.Compression of language models. Our work is related to compression of statistical languagemodels. Classical approaches include feature pruning based on entropy (Stolcke, 2000) and quanti-zation. Pruning aims to keep only the most important n-grams in the model, leaving out those withprobability lower than a specified threshold. Further, the individual n-grams can be compressed byquantizing the probability value, and by storing the n-gram itself more efficiently than as a sequenceof characters. Various strategies have been developed, for example using tree structures or hashfunctions, and are discussed in (Talbot & Brants, 2008).Compression for similarity estimation and search. There is a large body of literature on howto compress a set of vectors into compact codes, such that the comparison of two codes approxi-mates a target similarity in the original space. The typical use-case of these methods considers anindexed dataset of compressed vectors, and a query for which we want to find the nearest neigh-bors in the indexed set. One of the most popular is Locality-sensitive hashing (LSH) by Charikar(2002), which is a binarization technique based on random projections that approximates the cosinesimilarity between two vectors through a monotonous function of the Hamming distance betweenthe two corresponding binary codes. In our paper, LSH refers to this binarization strategy2. Manysubsequent works have improved this initial binarization technique, such as spectal hashing (Weisset al., 2009), or Iterative Quantization (ITQ) (Gong & Lazebnik, 2011), which learns a rotation ma-trix minimizing the quantization loss of the binarization. We refer the reader to two recent surveysby Wang et al. (2014) and Wang et al. (2015) for an overview of the binary hashing literature.Beyond these binarization strategies, more general quantization techniques derived from Jegou et al.(2011) offer better trade-offs between memory and the approximation of a distance estimator. TheProduct Quantization (PQ) method approximates the distances by calculating, in the compressed do-main, the distance between their quantized approximations. This method is statistically guaranteedto preserve the Euclidean distance between the vectors within an error bound directly related to thequantization error. The original PQ has been concurrently improved by Ge et al. (2013) and Norouzi& Fleet (2013), who learn an orthogonal transform minimizing the overall quantization loss. In ourpaper, we will consider the Optimized Product Quantization (OPQ) variant (Ge et al., 2013).Softmax approximation The aforementioned works approximate either the Euclidean distanceor the cosine similarity (both being equivalent in the case of unit-norm vectors). 
However, in the context of fastText, we are specifically interested in approximating the maximum inner product involved in a softmax layer. Several approaches derived from LSH have been recently proposed to achieve this goal, such as Asymmetric LSH by Shrivastava & Li (2014), subsequently discussed by Neyshabur & Srebro (2015). In our work, since we are not constrained to purely binary codes, we resort to a more traditional encoding by employing a magnitude/direction parametrization of our vectors. Therefore we only need to encode/compress a unitary d-dimensional vector, which fits the aforementioned LSH and PQ methods well.

Neural network compression models. Recently, several research efforts have been conducted to compress the parameters of architectures involved in computer vision, namely for state-of-the-art Convolutional Neural Networks (CNNs) (Han et al., 2016; Lin et al., 2015). Some use vector quantization (Gong et al., 2014) while others binarize the network (Courbariaux et al., 2016). Denil et al. (2013) show that such classification models are easily compressed because they are over-parametrized, which concurs with early observations by LeCun et al. (1990). [Footnote 2: In the literature, LSH refers to multiple distinct strategies related to the Johnson-Lindenstrauss lemma. For instance, LSH sometimes refers to a partitioning technique with random projections allowing for sublinear search via cell probes, see for instance the E2LSH variant of Datar et al. (2004).] Some of these works aim at reducing both the model size and the speed. In our case, since the fastText classifier upon which our proposal is built is already very efficient, we are primarily interested in reducing the size of the model while keeping a comparable classification efficiency.

3 PROPOSED APPROACH

3.1 TEXT CLASSIFICATION

In the context of text classification, linear classifiers (Joulin et al., 2016) remain competitive with more sophisticated, deeper models, and are much faster to train. On top of standard tricks commonly used in linear text classification (Agarwal et al., 2014; Wang & Manning, 2012; Weinberger et al., 2009), Joulin et al. (2016) use a low rank constraint to reduce the computation burden while sharing information between different classes. This is especially useful in the case of a large output space, where rare classes may have only a few training examples. In this paper, we focus on a similar model, that is, one which minimizes the softmax loss $\ell$ over $N$ documents:
$$\sum_{n=1}^{N} \ell(y_n, BAx_n), \qquad (1)$$
where $x_n$ is a bag of one-hot vectors and $y_n$ the label of the $n$-th document. In the case of a large vocabulary and a large output space, the matrices $A$ and $B$ are big and can require gigabytes of memory. Below, we describe how we reduce this memory usage.

3.2 BOTTOM-UP PRODUCT QUANTIZATION

Product quantization is a popular method for compressed-domain approximate nearest neighbor search (Jegou et al., 2011). As a compression technique, it approximates a real-valued vector by finding the closest vector in a pre-defined structured set of centroids, referred to as a codebook. This codebook is not enumerated, since it is extremely large. Instead it is implicitly defined by its structure: a $d$-dimensional vector $x \in \mathbb{R}^d$ is approximated as
$$\hat{x} = \sum_{i=1}^{k} q_i(x), \qquad (2)$$
where the different subquantizers $q_i : x \mapsto q_i(x)$ are complementary in the sense that their respective centroids lie in distinct orthogonal subspaces, i.e., $\forall i \neq j,\ \forall x, y:\ \langle q_i(x) \mid q_j(y) \rangle = 0$. In the original PQ, the subspaces are aligned with the natural axis, while OPQ learns a rotation, which amounts to alleviating this constraint and to not depending on the original coordinate system. Another way to see this is to consider that PQ splits a given vector $x$ into $k$ subvectors $x_i$, $i = 1 \ldots k$, each of dimension $d/k$: $x = [x_1 \ldots x_i \ldots x_k]$, and quantizes each sub-vector using a distinct k-means quantizer. Each subvector $x_i$ is thus mapped to the closest centroid amongst $2^b$ centroids, where $b$ is the number of bits required to store the quantization index of the subquantizer, typically $b = 8$. The reconstructed vector can take $2^{kb}$ distinct reproduction values, and is stored in $kb$ bits.

PQ estimates the inner product in the compressed domain as
$$x^\top y \approx \hat{x}^\top y = \sum_{i=1}^{k} q_i(x_i)^\top y_i. \qquad (3)$$
This is a straightforward extension of the square L2 distance estimation of Jegou et al. (2011). In practice, the vector estimate $\hat{x}$ is trivially reconstructed from the codes, i.e., from the quantization indexes, by concatenating these centroids. The two parameters involved in PQ, namely the number of subquantizers $k$ and the number of bits $b$ per quantization index, are typically set to $k \in [2, d/2]$ and $b = 8$ to ensure byte-alignment.

Discussion. PQ offers several interesting properties in our context of text classification. Firstly, the training is very fast because the subquantizers have a small number of centroids, i.e., 256 centroids for $b = 8$. Secondly, at test time it allows the reconstruction of the vectors with almost no computational and memory overhead. Thirdly, it has been successfully applied in computer vision, offering much better performance than binary codes, which makes it a natural candidate to compress relatively shallow models. As observed by Sánchez & Perronnin (2011), using PQ just before the last layer incurs a very limited loss in accuracy when combined with a support vector machine.

In the context of text classification, the norms of the vectors are widely spread, typically with a ratio of 1000 between the max and the min. Therefore k-means performs poorly because it optimizes an absolute error objective, so it maps all low-norm vectors to 0. A simple solution is to separate the norm and the angle of the vectors and to quantize them separately. This allows a quantization with no loss of performance, yet requires an extra $b$ bits per vector.

Bottom-up strategy: re-training. The first works aiming at compressing CNN models, like the one proposed by Gong et al. (2014), used the reconstruction from off-the-shelf PQ, i.e., without any re-training. However, as observed in Sablayrolles et al. (2016), when using quantization methods like PQ, it is better to re-train the layers occurring after the quantization, so that the network can re-adjust itself to the quantization. There is a strong argument for this re-training strategy: the square magnitude of vectors is reduced, on average, by the average quantization error for any quantizer satisfying the Lloyd conditions; see Jegou et al. (2011) for details. This suggests a bottom-up learning strategy where we first quantize the input matrix, then retrain and quantize the output matrix (the input matrix being frozen). Experiments in Section 4 show that it is worth adopting this strategy.
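To make the quantization and compressed-domain inner product of Eqs. (2)-(3) concrete, here is a minimal NumPy sketch of plain PQ. It is illustrative only and not the fastText.zip implementation: the helper names, the naive k-means initialization, and the assumptions that $d$ is divisible by $k$ and that there are at least $2^b$ training vectors are ours.

```python
import numpy as np

def train_pq(X, k=4, b=8, n_iter=20, seed=0):
    """Learn one k-means codebook per subvector block (X: n x d, d divisible by k)."""
    rng = np.random.RandomState(seed)
    n, d = X.shape
    m = d // k                                       # dimension of each subvector
    codebooks = []
    for i in range(k):
        block = X[:, i * m:(i + 1) * m]
        # initialize the 2**b centroids from random training subvectors (needs n >= 2**b)
        C = block[rng.choice(n, 2 ** b, replace=False)].copy()
        for _ in range(n_iter):                      # naive Lloyd iterations
            d2 = ((block[:, None, :] - C[None, :, :]) ** 2).sum(-1)
            assign = d2.argmin(axis=1)
            for c in range(2 ** b):
                pts = block[assign == c]
                if len(pts):
                    C[c] = pts.mean(axis=0)
        codebooks.append(C)
    return codebooks

def encode(x, codebooks):
    """Compress x into k centroid indices (b bits each), one per subvector."""
    m = x.shape[0] // len(codebooks)
    return [int(((C - x[i * m:(i + 1) * m]) ** 2).sum(axis=1).argmin())
            for i, C in enumerate(codebooks)]

def decode(code, codebooks):
    """Reconstruct x_hat by concatenating the selected centroids, as in Eq. (2)."""
    return np.concatenate([codebooks[i][c] for i, c in enumerate(code)])

def approx_dot(code, codebooks, y):
    """Estimate <x, y> in the compressed domain, as in Eq. (3)."""
    m = y.shape[0] // len(codebooks)
    return sum(codebooks[i][code[i]] @ y[i * m:(i + 1) * m]
               for i in range(len(codebooks)))
```

For the magnitude/direction variant discussed above, one would run these routines on x / ||x|| and quantize the scalar norm separately, which is the extra b bits per vector mentioned in the text.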
Memory savings with PQ. In practice, the bottom-up PQ strategy offers a compression factor of 10 without any noticeable loss of performance. Without re-training, we notice a drop in accuracy between 0.1% and 0.5%, depending on the dataset and setting; see Section 4 and the appendix.

3.3 FURTHER TEXT SPECIFIC TRICKS

The memory usage strongly depends on the size of the vocabulary, which can be large in many text classification tasks. While it is clear that a large part of the vocabulary is useless or redundant, directly reducing the vocabulary to the most frequent words is not satisfactory: most of the frequent words, like "the" or "is", are not discriminative, in contrast to some rare words, e.g., in the context of tag prediction. In this section, we discuss a few heuristics to reduce the space taken by the dictionary. They lead to major memory reduction, in extreme cases by a factor 100. We experimentally show that this drastic reduction is complementary with the PQ compression method, meaning that the combination of both strategies reduces the model size by a factor up to 1000 for some datasets.

Pruning the vocabulary. Discovering which word or n-gram must be kept to preserve the overall performance is a feature selection problem. While many approaches have been proposed to select groups of variables during training (Bach et al., 2012; Meier et al., 2008), we are interested in selecting a fixed subset of K words and n-grams from a pre-trained model. This can be achieved by selecting the K embeddings that preserve as much of the model as possible, which can be reduced to selecting the K words and n-grams associated with the highest norms.

While this approach offers major memory savings, it has one drawback occurring in some particular cases: some documents may not contain any of the K best features, leading to a significant drop in performance. It is thus important to keep the K best features under the condition that they cover the whole training set. More formally, the problem is to find a subset $S$ of the feature set $V$ that maximizes the sum of the norms $w_s$ under the constraint that all the documents in the training set $D$ are covered:
$$\max_{S \subseteq V} \sum_{s \in S} w_s \quad \text{s.t.} \quad |S| \leq K, \quad P\,\mathbf{1}_S \geq \mathbf{1}_D,$$
where $P$ is a matrix such that $P_{ds} = 1$ if the $s$-th feature is in the $d$-th document, and 0 otherwise. This problem is directly related to set covering problems that are NP-hard (Feige, 1998). Standard greedy approaches require storing an inverted index or doing multiple passes over the dataset, which is prohibitive on very large datasets (Chierichetti et al., 2010). This problem can be cast as an instance of online submodular maximization with a rank constraint (Badanidiyuru et al., 2014; Bateni et al., 2010). In our case, we use a simple online parallelizable greedy approach: for each document, we verify if it is already covered by a retained feature and, if not, we add the feature with the highest norm to our set of retained features. If the number of features is below K, we add the features with the highest norm that have not yet been picked (see the short sketch below).

[Figure 1: Accuracy as a function of the memory per vector/embedding on 3 datasets from Zhang et al. (2015) (panels: Sogou, Yahoo, Yelp full; x-axis: number of bytes; curves: Full, PQ, OPQ, LSH+norm, PQ+norm, OPQ+norm). Note, an extra byte is required when we encode the norm explicitly ("norm").]

Hashing trick & Bloom filter. On small models, the dictionary can take a significant portion of the memory. Instead of saving it, we extend the hashing trick used in Joulin et al. (2016) to both words and n-grams.
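The coverage-constrained pruning heuristic described in the pruning paragraph above fits in a few lines. The sketch below is a plain-Python illustration under our own naming conventions (documents given as sets of feature ids, feature norms in a dict); it is not the fastText.zip code and it ignores the parallelization mentioned in the text.

```python
def prune_features(documents, norms, K):
    """Keep up to K high-norm features while covering every training document.

    documents: iterable of sets of feature ids (one set per training document)
    norms:     dict mapping feature id -> norm of its embedding
    """
    kept = set()
    for doc in documents:                # single online pass over the training set
        if not doc or doc & kept:        # empty document, or already covered
            continue
        kept.add(max(doc, key=lambda f: norms[f]))
    # fill the remaining budget with the highest-norm features not yet picked
    for f in sorted(norms, key=norms.get, reverse=True):
        if len(kept) >= K:
            break
        kept.add(f)
    return kept
```

As in the text, coverage takes priority: the first pass guarantees that every training document keeps at least one feature, and the second pass spends whatever budget remains on the largest-norm features.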
This strategy is also used in V owpal Wabbit (Agarwal et al., 2014) in the contextof online training. This allows us to save around 1-2Mb with almost no overhead at test time (justthe cost of computing the hashing function).Pruning the vocabulary while using the hashing trick requires keeping a list of the indices of theKremaining buckets. At test time, a binary search over the list of indices is required. It has acomplexity of O(log(K))and a memory overhead of a few hundreds of kilobytes. Using Bloomfilters instead reduces the complexity O(1)at test time and saves a few hundred kilobytes. However,in practice, it degrades performance.4 E XPERIMENTSThis section evaluates the quality of our model compression pipeline and compare it to other com-pression methods on different text classification problems, and to other compact text classifiers.Evaluation protocol and datasets. Our experimental pipeline is as follows: we train a modelusing fastText with the default setting unless specified otherwise. That is 2M buckets, a learningrate of 0:1and10training epochs. The dimensionality dof the embeddings is set to powers of 2toavoid border effects that could make the interpretation of the results more difficult. As baselines, weuse Locality-Sensitive Hashing (LSH) (Charikar, 2002), PQ (Jegou et al., 2011) and OPQ (Ge et al.,2013) (the non-parametric variant). Note that we use an improved version of LSH where randomorthogonal matrices are used instead of random matrix projection J ́egou et al. (2008). In a firstseries of experiments, we use the 8datasets and evaluation protocol of Zhang et al. (2015). Thesedatasets contain few million documents and have at most 10classes. We also explore the limit ofquantization on a dataset with an extremely large output space, that is a tag dataset extracted fromthe YFCC100M collection (Thomee et al., 2016)3, referred to as FlickrTag in the rest of this paper.5Under review as a conference paper at ICLR 2017-2-10AG Amazon full-2-10Amazon polarity DBPedia-2-10Sogou Yahoo100kB 1MB 10MB 100MB-2-10Yelp full100kB 1MB 10MB 100MBYelp polarityFull PQ Pruned Zhang et al. (2015) Xiao & Cho (2016)Figure 2: Loss of accuracy as a function of the model size. We compare the compressed model withdifferent level of pruning with NPQ and the full fastText model. We also compare with Zhanget al. (2015) and Xiao & Cho (2016). Note that the size is in log scale.4.1 S MALL DATASETSCompression techniques. We compare three popular methods used for similarity estimation withcompact codes: LSH, PQ and OPQ on the datasets released by Zhang et al. (2015). Figure 1 showsthe accuracy as a function of the number of bytes used per embedding, which corresponds to thenumber kof subvectors in the case of PQ and OPQ. See more results in the appendix. As discussedin Section 2, LSH reproduces the cosine similarity and is therefore not adapted to un-normalizeddata. Therefore we only report results with normalization. Once normalized, PQ and OPQ arealmost lossless even when using only k= 4subquantizers per embedding (equivalently, bytes). Weobserve in practice that using k=d=2,i.e., half of the components of the embeddings, works well inpractice. In the rest of the paper and if not stated otherwise, we focus on this setting. The differencebetween the normalized versions of PQ and OPQ is limited and depends on the dataset. Thereforewe adopt the normalized PQ (NPQ) for the rest of this study, since it is faster to train.word Entropy Norm word Entropy Norm. 
1 354 mediocre 1399 1, 2 176 disappointing 454 2the 3 179 so-so 2809 3and 4 1639 lacks 1244 4i 5 2374 worthless 1757 5a 6 970 dreadful 4358 6to 7 1775 drm 6395 7it 8 1956 poorly 716 8of 9 2815 uninspired 4245 9this 10 3275 worst 402 10Table 1: Best ranked words w.r.t. entropy ( left) and norm ( right ) on the Amazon full review dataset.We give the rank for both criteria. The norm ranking filters out words carrying little information.3Data available at https://research.facebook.com/research/fasttext/6Under review as a conference paper at ICLR 2017Dataset full 64KiB 32KiB 16KiBAG 65M 92.1 91.4 90.6 89.1Amazon full 108M 60.0 58.8 56.0 52.9Amazon pol. 113M 94.5 93.3 92.1 89.3DBPedia 87M 98.4 98.2 98.1 97.4Sogou 73M 96.4 96.4 96.3 95.5Yahoo 122M 72.1 70.0 69.0 69.2Yelp full 78M 63.8 63.2 62.4 58.7Yelp pol. 77M 95.7 95.3 94.9 93.2Average diff. [ %] 0 -0.8 -1.7 -3.5Table 2: Performance on very small models. We use a quantization with k= 1, hashing and anextreme pruning. The last row shows the average drop of performance for different size.Pruning. Figure 2 shows the performance of our model with different sizes. We fix k=d=2anduse different pruning thresholds. NPQ offers a compression rate of 10compared to the full model.As the pruning becomes more agressive, the overall compression can increase up up to 1;000with little drop of performance and no additional overhead at test time. In fact, using a smallerdictionary makes the model faster at test time. We also compare with character-level ConvolutionalNeural Networks (CNN) (Zhang et al., 2015; Xiao & Cho, 2016). They are attractive models fortext classification because they achieve similar performance with less memory usage than linearmodels (Xiao & Cho, 2016). Even though fastText with the default setting uses more memory,NPQ is already on par with CNNs’ memory usage. Note that CNNs are not quantized, and it wouldbe worth seeing how much they can be quantized with no drop of performance. Such a study isbeyond the scope of this paper. Our pruning is based on the norm of the embeddings accordingto the guidelines of Section 3.3. Table 1 compares the ranking obtained with norms to the rankingobtained using entropy, which is commonly used in unsupervised settings Stolcke (2000).Extreme compression. Finally, in Table 2, we explore the limit of quantized model by lookingat the performance obtained for models under 64KiB. Surprisingly, even at 64KiB and 32KiB, thedrop of performance is only around 0:8%and1:7%despite a compression rate of 1;0004;000.4.2 L ARGE DATASET : FLICKR TAGIn this section, we explore the limit of compression algorithms on very large datasets. Similarto Joulin et al. (2016), we consider a hashtag prediction dataset containing 312;116labels. We setthe minimum count for words at 10, leading to a dictionary of 1;427;667words. We take 10Mbuckets for n-grams and a hierarchical softmax. We refer to this dataset as FlickrTag.Output encoding. We are interested in understanding how the performance degrades if the classi-fier is also quantized ( i.e., the matrix Bin Eq. 1) and when the pruning is at the limit of the minimumnumber of features required to cover the full dataset.Model k norm retrain Acc. Sizefull (uncompressed) 45.4 12 GiBInput 128 45.0 1.7 GiBInput 128 x 45.3 1.8 GiBInput 128 x x 45.5 1.8 GiBInput+Output 128 x 45.2 1.5 GiBInput+Output 128 x x 45.4 1.5 GiBTable 3: FlickrTag: Influence of quantizing the output matrix on performance. We use PQ forquantization with an optional normalization. 
We also retrain the output matrix after quantizing theinput one. The ”norm” refers to the separate encoding of the magnitude and angle, while ”retrain”refers to the re-training bottom-up PQ method described in Section 3.2.7Under review as a conference paper at ICLR 2017Table 3 shows that quantizing both the “input” matrix ( i.e.,Ain Eq. 1) and the “output” matrix ( i.e.,B) does not degrade the performance compared to the full model. We use embeddings with d= 256dimensions and use k=d=2subquantizers. We do not use any text specific tricks, which leads toa compression factor of 8. Note that even if the output matrix is not retrained over the embeddings,the performance is only 0:2%away from the full model. As shown in the Appendix, using lesssubquantizers significantly decreases the performance for a small memory gain.Model full Entropy pruning Norm pruning Max-Cover pruning#embeddings 11.5M 2M 1M 2M 1M 2M 1MMemory 12GiB 297MiB 174MiB 305MiB 179MiB 305MiB 179MiBCoverage [ %] 88.4 70.5 70.5 73.2 61.9 88.4 88.4Accuracy 45.4 32.1 30.5 41.6 35.8 45.5 43.9Table 4: FlickrTag: Comparison of entropy pruning, norm pruning and max-cover pruning methods.We show the coverage of the test set for each method.Pruning. Table 4 shows how the performance evolves with pruning. We measure this effect on topof a fully quantized model. The full model misses 11:6%of the test set because of missing words(some documents are either only composed of hashtags or have only rare words). There are 312;116labels and thus it seems reasonable to keep embeddings in the order of the million. A naive pruningwith1M features misses about 3040% of the test set, leading to a significant drop of performance.On the other hand, even though the max-coverage pruning approach was set on the train set, it doesnot suffer from any coverage loss on the test set. This leads to a smaller drop of performance. If thepruning is too aggressive, however, the coverage decreases significantly.5 F UTURE WORKIt may be possible to obtain further reduction of the model size in the future. One idea is to conditionthe size of the vectors (both for the input features and the labels) based on their frequency (Chenet al., 2015; Grave et al., 2016). For example, it is probably not worth representing the rare labelsby full 256-dimensional vectors in the case of the FlickrTag dataset. Thus, conditioning the vectorsize on the frequency and norm seems like an interesting direction to explore in the future.We may also consider combining the entropy and norm pruning criteria: instead of keeping thefeatures in the model based just on the frequency or the norm, we can use both to keep a good set offeatures. This could help to keep features that are both frequent and discriminative, and thereby toreduce the coverage problem that we have observed.Additionally, instead of pruning out the less useful features, we can decompose them into smallerunits (Mikolov et al., 2012). For example, this can be achieved by splitting every non-discriminativeword into a sequence of character trigrams. This could help in cases where training and test examplesare very short (for example just a single word).6 C ONCLUSIONIn this paper, we have presented several simple techniques to reduce, by several orders of magnitude,the memory complexity of certain text classifiers without sacrificing accuracy nor speed. 
This isachieved by applying discriminative pruning which aims to keep only important features in thetrained model, and by performing quantization of the weight matrices and hashing of the dictionary.We will publish the code as an extension of the fastText library. We hope that our work willserve as a baseline to the research community, where there is an increasing interest for comparingthe performance of various deep learning text classifiers for a given number of parameters. Overall,compared to recent work based on convolutional neural networks, fastText.zip is often moreaccurate, while requiring several orders of magnitude less time to train on common CPUs, andincurring a fraction of the memory complexity.8Under review as a conference paper at ICLR 2017REFERENCESAlekh Agarwal, Olivier Chapelle, Miroslav Dud ́ık, and John Langford. A reliable effective terascalelinear learning system. Journal of Machine Learning Research , 15(1):1111–1133, 2014.Francis Bach, Rodolphe Jenatton, Julien Mairal, and Guillaume Obozinski. Optimization withsparsity-inducing penalties. Foundations and Trends Rin Machine Learning , 4(1):1–106, 2012.Ashwinkumar Badanidiyuru, Baharan Mirzasoleiman, Amin Karbasi, and Andreas Krause. Stream-ing submodular maximization: Massive data summarization on the fly. In SIGKDD , pp. 671–680.ACM, 2014.Mohammad Hossein Bateni, Mohammad Taghi Hajiaghayi, and Morteza Zadimoghaddam. Sub-modular secretary problem and extensions. In Approximation, Randomization, and CombinatorialOptimization. Algorithms and Techniques , pp. 39–52. Springer, 2010.Moses S. Charikar. Similarity estimation techniques from rounding algorithms. In STOC , pp. 380–388, May 2002.Welin Chen, David Grangier, and Michael Auli. Strategies for training large vocabulary neurallanguage models. arXiv preprint arXiv:1512.04906 , 2015.Flavio Chierichetti, Ravi Kumar, and Andrew Tomkins. Max-cover in map-reduce. In InternationalConference on World Wide Web , 2010.Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarizedneural networks: Training neural networks with weights and activations constrained to +1 or -1.arXiv preprint arXiv:1602.02830 , 2016.M. Datar, N. Immorlica, P. Indyk, and V .S. Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In Proceedings of the Symposium on Computational Geometry , pp. 253–262,2004.Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman.Indexing by latent semantic analysis. Journal of the American society for information science ,1990.Misha Denil, Babak Shakibi, Laurent Dinh, Marc-Aurelio Ranzato, and Nando et all de Freitas.Predicting parameters in deep learning. In NIPS , pp. 2148–2156, 2013.Uriel Feige. A threshold of ln n for approximating set cover. JACM , 45(4):634–652, 1998.Tiezheng Ge, Kaiming He, Qifa Ke, and Jian Sun. Optimized product quantization for approximatenearest neighbor search. In CVPR , June 2013.Yunchao Gong and Svetlana Lazebnik. Iterative quantization: A procrustean approach to learningbinary codes. In CVPR , June 2011.Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional net-works using vector quantization. arXiv preprint arXiv:1412.6115 , 2014.Edouard Grave, Armand Joulin, Moustapha Ciss ́e, David Grangier, and Herv ́e J ́egou. Efficientsoftmax approximation for gpus. arXiv preprint arXiv:1609.04309 , 2016.Song Han, Huizi Mao, and William J Dally. 
Deep compression: Compressing deep neural networkswith pruning, trained quantization and huffman coding. In ICLR , 2016.Herv ́e J ́egou, Matthijs Douze, and Cordelia Schmid. Hamming embedding and weak geometricconsistency for large scale image search. In ECCV , October 2008.Herv ́e Jegou, Matthijs Douze, and Cordelia Schmid. Product quantization for nearest neighborsearch. IEEE Trans. PAMI , January 2011.Thorsten Joachims. Text categorization with support vector machines: Learning with many relevantfeatures . Springer, 1998.9Under review as a conference paper at ICLR 2017Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efficienttext classification. arXiv preprint arXiv:1607.01759 , 2016.Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. NIPS , 2:598–605, 1990.Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. Neural networks withfew multiplications. arXiv preprint arXiv:1510.03009 , 2015.Andrew McCallum and Kamal Nigam. A comparison of event models for naive bayes text classifi-cation. In AAAI workshop on learning for text categorization , 1998.Lukas Meier, Sara Van De Geer, and Peter B ̈uhlmann. The group lasso for logistic regression.Journal of the Royal Statistical Society: Series B (Statistical Methodology) , 70(1):53–71, 2008.Tomas Mikolov. Statistical language models based on neural networks. In PhD thesis . VUT Brno,2012.Tomas Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and J Cernocky.Subword language modeling with neural networks. preprint , 2012.Behnam Neyshabur and Nathan Srebro. On symmetric and asymmetric lshs for inner product search.InICML , pp. 1926–1934, 2015.Mohammad Norouzi and David Fleet. Cartesian k-means. In CVPR , June 2013.Bo Pang and Lillian Lee. Opinion mining and sentiment analysis. Foundations and trends in infor-mation retrieval , 2008.Alexandre Sablayrolles, Matthijs Douze, Herv ́e J ́egou, and Nicolas Usunier. How should we evalu-ate supervised hashing? arXiv preprint arXiv:1609.06753 , 2016.Jorge S ́anchez and Florent Perronnin. High-dimensional signature compression for large-scale im-age classification. In CVPR , 2011.Anshumali Shrivastava and Ping Li. Asymmetric LSH for sublinear time maximum inner productsearch. In NIPS , pp. 2321–2329, 2014.Andreas Stolcke. Entropy-based pruning of backoff language models. arXiv preprint cs/0006025 ,2000.David Talbot and Thorsten Brants. Randomized language models via perfect hash functions. InACL, 2008.Bart Thomee, David A Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland,Damian Borth, and Li-Jia Li. Yfcc100m: The new data in multimedia research. In Communica-tions of the ACM , 2016.Jingdong Wang, Heng Tao Shen, Jingkuan Song, and Jianqiu Ji. Hashing for similarity search: Asurvey. arXiv preprint arXiv:1408.2927 , 2014.Jun Wang, Wei Liu, Sanjiv Kumar, and Shih-Fu Chang. Learning to hash for indexing big data - Asurvey. CoRR , abs/1509.05472, 2015.Sida Wang and Christopher D Manning. Baselines and bigrams: Simple, good sentiment and topicclassification. In ACL, 2012.Kilian Q Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. Featurehashing for large scale multitask learning. In ICML , 2009.Yair Weiss, Antonio Torralba, and Rob Fergus. Spectral hashing. In NIPS , December 2009.Yijun Xiao and Kyunghyun Cho. Efficient character-level document classification by combiningconvolution and recurrent layers. arXiv preprint arXiv:1602.00367 , 2016.Xiang Zhang, Junbo Zhao, and Yann LeCun. 
Character-level convolutional networks for text clas-sification. In NIPS , 2015.10Under review as a conference paper at ICLR 2017APPENDIXIn the appendix, we show some additional results. The model used in these experiments only had1M ngram buckets. In Table 5, we show a thorough comparison of LSH, PQ and OPQ on 8differentdatasets. Table 7 summarizes the comparison with CNNs in terms of accuracy and size. Table 8show a thorough comparison of the hashing trick and the Bloom filters.Quant. k norm AG Amz. f. Amz. p. DBP Sogou Yah. Yelp f. Yelp p.full 92.1 36M 59.8 97M 94.5 104M 98.4 67M 96.3 47M 72 120M 63.7 56M 95.7 53Mfull,nodict 92.1 34M 59.9 78M 94.5 83M 98.4 56M 96.3 42M 72.2 91M 63.6 48M 95.6 46MLSH 8 88.7 8.5M 51.3 20M 90.3 21M 92.7 14M 94.2 11M 54.8 23M 56.7 12M 92.2 12MPQ 8 91.7 8.5M 59.3 20M 94.4 21M 97.4 14M 96.1 11M 71.3 23M 62.8 12M 95.4 12MOPQ 8 91.9 8.5M 59.3 20M 94.4 21M 96.9 14M 95.8 11M 71.4 23M 62.5 12M 95.4 12MLSH 8 x 91.9 9.5M 59.4 22M 94.5 24M 97.8 16M 96.2 12M 71.6 26M 63.4 14M 95.6 13MPQ 8 x 92.0 9.5M 59.8 22M 94.5 24M 98.4 16M 96.3 12M 72.1 26M 63.7 14M 95.6 13MOPQ 8 x 92.1 9.5M 59.9 22M 94.5 24M 98.4 16M 96.3 12M 72.2 26M 63.6 14M 95.6 13MLSH 4 88.3 4.3M 50.5 9.7M 88.9 11M 91.6 7.0M 94.3 5.3M 54.6 12M 56.5 6.0M 92.9 5.7MPQ 4 91.6 4.3M 59.2 9.7M 94.4 11M 96.3 7.0M 96.1 5.3M 71.0 12M 62.2 6.0M 95.4 5.7MOPQ 4 91.7 4.3M 59.0 9.7M 94.4 11M 96.9 7.0M 95.6 5.3M 71.2 12M 62.6 6.0M 95.4 5.7MLSH 4 x 92.1 5.3M 59.2 13M 94.4 13M 97.7 8.8M 96.2 6.6M 71.1 15M 63.1 7.4M 95.5 7.2MPQ 4 x 92.1 5.3M 59.8 13M 94.5 13M 98.4 8.8M 96.3 6.6M 72.0 15M 63.6 7.5M 95.6 7.2MOPQ 4 x 92.2 5.3M 59.8 13M 94.5 13M 98.3 8.8M 96.3 6.6M 72.1 15M 63.7 7.5M 95.6 7.2MLSH 2 87.7 2.2M 50.1 4.9M 88.9 5.2M 90.6 3.5M 93.9 2.7M 51.4 5.7M 56.6 3.0M 91.3 2.9MPQ 2 91.1 2.2M 58.7 4.9M 94.4 5.2M 87.1 3.6M 95.3 2.7M 69.5 5.7M 62.1 3.0M 95.4 2.9MOPQ 2 91.4 2.2M 58.2 4.9M 94.3 5.2M 91.6 3.6M 94.2 2.7M 69.6 5.7M 62.1 3.0M 95.4 2.9MLSH 2 x 91.8 3.2M 58.6 7.3M 94.3 7.8M 97.1 5.3M 96.1 4.0M 69.7 8.6M 62.7 4.5M 95.5 4.3MPQ 2 x 91.9 3.2M 59.6 7.3M 94.5 7.8M 98.1 5.3M 96.3 4.0M 71.3 8.6M 63.4 4.5M 95.6 4.3MOPQ 2 x 92.1 3.2M 59.5 7.3M 94.5 7.8M 98.1 5.3M 96.2 4.0M 71.5 8.6M 63.4 4.5M 95.6 4.3MTable 5: Comparison between standard quantization methods. The original model has a dimension-ality of 8and2M buckets. Note that all of the methods are without dictionary.k co AG Amz. f. Amz. p. DBP Sogou Yah. Yelp f. 
Yelp p.full, nodict 92.1 34M 59.8 78M 94.5 83M 98.4 56M 96.3 42M 72.2 91M 63.7 48M 95.6 46M8 full 92.0 9.5M 59.8 22M 94.5 24M 98.4 16M 96.3 12M 72.1 26M 63.7 14M 95.6 13M4 full 92.1 5.3M 59.8 13M 94.5 13M 98.4 8.8M 96.3 6.6M 72 15M 63.6 7.5M 95.6 7.2M2 full 91.9 3.2M 59.6 7.3M 94.5 7.8M 98.1 5.3M 96.3 4.0M 71.3 8.6M 63.4 4.5M 95.6 4.3M8 200K 92.0 2.5M 59.7 2.5M 94.3 2.5M 98.5 2.5M 96.6 2.5M 71.8 2.5M 63.3 2.5M 95.6 2.5M8 100K 91.9 1.3M 59.5 1.3M 94.3 1.3M 98.5 1.3M 96.6 1.3M 71.6 1.3M 63.4 1.3M 95.6 1.3M8 50K 91.7 645K 59.7 645K 94.3 644K 98.5 645K 96.6 645K 71.5 645K 63.2 645K 95.6 644K8 10K 91.3 137K 58.6 137K 93.2 137K 98.5 137K 96.5 137K 71.3 137K 63.3 137K 95.4 137K4 200K 92.0 1.8M 59.7 1.8M 94.3 1.8M 98.5 1.8M 96.6 1.8M 71.7 1.8M 63.3 1.8M 95.6 1.8M4 100K 91.9 889K 59.5 889K 94.4 889K 98.5 889K 96.6 889K 71.7 889K 63.4 889K 95.6 889K4 50K 91.7 449K 59.6 449K 94.3 449K 98.5 450K 96.6 449K 71.4 450K 63.2 449K 95.5 449K4 10K 91.5 98K 58.6 98K 93.2 98K 98.5 98K 96.5 98K 71.2 98K 63.3 98K 95.4 98K2 200K 91.9 1.4M 59.6 1.4M 94.3 1.4M 98.4 1.4M 96.5 1.4M 71.5 1.4M 63.2 1.4M 95.5 1.4M2 100K 91.6 693K 59.5 693K 94.3 693K 98.4 694K 96.6 693K 71.1 694K 63.2 693K 95.6 693K2 50K 91.6 352K 59.6 352K 94.3 352K 98.4 352K 96.5 352K 71.1 352K 63.2 352K 95.6 352K2 10K 91.3 78K 58.5 78K 93.2 78K 98.4 79K 96.5 78K 70.8 78K 63.2 78K 95.3 78KTable 6: Comparison with different quantization and level of pruning. “co” is the cut-off parameterof the pruning.11Under review as a conference paper at ICLR 2017Dataset Zhang et al. (2015) Xiao & Cho (2016) fastText +PQ,k=d=2AG 90.2 108M 91.4 80M 91.9 889KAmz. f. 59.5 10.8M 59.2 1.6M 59.6 449KAmz. p. 94.5 10.8M 94.1 1.6M 94.3 449KDBP 98.3 108M 98.6 1.2M 98.5 98KSogou 95.1 108M 95.2 1.6M 96.5 98KYah. 70.5 108M 71.4 80M 71.7 889KYelp f. 61.6 108M 61.8 1.4M 63.3 98KYelp p. 94.8 108M 94.5 1.2M 95.5 449KTable 7: Comparison between CNNs and fastText with and without quantization. The numbersfor Zhang et al. (2015) are reported from Xiao & Cho (2016). Note that for the CNNs, we reportthe size of the model under the assumption that they use float32 storage. For fastText (+PQ) wereport the memory used in RAM at test time.Quant. Bloom co AG Amz. f. Amz. p. DBP Sogou Yah. Yelp f. Yelp p.full,nodict 92.1 34M 59.8 78M 94.5 83M 98.4 56M 96.3 42M 72.2 91M 63.7 48M 95.6 46MNPQ 200K 91.9 1.4M 59.6 1.4M 94.3 1.4M 98.4 1.4M 96.5 1.4M 71.5 1.4M 63.2 1.4M 95.5 1.4MNPQ x 200K 92.2 830K 59.3 830K 94.1 830K 98.4 830K 96.5 830K 70.7 830K 63.0 830K 95.5 830KNPQ 100K 91.6 693K 59.5 693K 94.3 693K 98.4 694K 96.6 693K 71.1 694K 63.2 693K 95.6 693KNPQ x 100K 91.8 420K 59.1 420K 93.9 420K 98.4 420K 96.5 420K 70.6 420K 62.8 420K 95.3 420KNPQ 50K 91.6 352K 59.6 352K 94.3 352K 98.4 352K 96.5 352K 71.1 352K 63.2 352K 95.6 352KNPQ x 50K 91.5 215K 58.8 215K 93.6 215K 98.3 215K 96.5 215K 70.1 215K 62.7 215K 95.1 215KNPQ 10K 91.3 78K 58.5 78K 93.2 78K 98.4 79K 96.5 78K 70.8 78K 63.2 78K 95.3 78KNPQ x 10K 90.8 51K 56.8 51K 91.7 51K 98.1 51K 96.1 51K 68.7 51K 61.7 51K 94.5 51KTable 8: Comparison with and without Bloom filters. For NPQ, we set d= 8andk= 2.12Under review as a conference paper at ICLR 2017Model k norm retrain Acc. 
Sizefull 45.4 12GInput 128 45.0 1.7GInput 128 x 45.3 1.8GInput 128 x x 45.5 1.8GInput+Output 128 x 45.2 1.5GInput+Output 128 x x 45.4 1.5GInput+Output, co=2M 128 x x 45.5 305MInput+Output, n co=1M 128 x x 43.9 179MInput 64 44.0 1.1GInput 64 x 44.7 1.1GInput 64 x 44.9 1.1GInput+Output 64 x 44.6 784MInput+Output 64 x x 44.8 784MInput+Output, co=2M 64 x 42.5 183MInput+Output, co=1M 64 x 39.9 118MInput+Output, co=2M 64 x x 45.0 183MInput+Output, co=1M 64 x x 43.4 118MInput 32 40.5 690MInput 32 x 42.4 701MInput 32 x x 42.9 701MInput+Output 32 x 42.3 435MInput+Output 32 x x 42.8 435MInput+Output, co=2M 32 x 35.0 122MInput+Output, co=1M 32 x 32.6 88MInput+Output, co=2M 32 x x 43.3 122MInput+Output, co=1M 32 x x 41.6 88MTable 9: FlickrTag: Comparison for a large dataset of (i) different quantization methods and param-eters, (ii) with or without re-training.13
HkMNiGbNe
Bkbc-Vqeg
ICLR.cc/2017/conference/-/paper208/official/review
{"title": "Review: Learning Word-Like Units from Joint Audio-Visual Analysis", "rating": "5: Marginally below acceptance threshold", "review": "CONTRIBUTIONS \nThis paper introduces a method for learning semantic \"word-like\" units jointly from audio and visual data. The authors use a multimodal neural network architecture which accepts both image and audio (as spectrograms) inputs. Joint training allows one to embed both image and spoken language captions into a shared representation space. Audio-visual groundings are generated by measuring affinity between image patches and audio clips. This allows the model to relate specific visual regions to specific audio segments. Experiments cover image search (audio to image) and annotation (image to audio) tasks and acoustic word discovery.\n\n\nNOVELTY+SIGNIFICANCE\nAs correctly mentioned in Section 1.2, the computer vision and natural language communities have studied multimodal learning for use in image captioning and retrieval. With regards to multimodal learning, this paper offers incremental advancements since it primarily uses a novel combination of input modalities (audio and images).\n\nHowever, bidirectional image/audio retrieval has already been explored by the authors in prior work (Harwath et al, NIPS 2016). Apart from minor differences in data and CNN architecture, the training procedure in this submission is identical to this prior work. The novelty in this submission is therefore the procedure for using the trained model for associating image regions with audio subsequences.\n\nThe methods employed for this association are relatively straightforward combination of standard techniques with limited novelty. The trained model is used to compute alignment scores between densely sampled image regions and audio subsequences; from these alignment scores a number of heuristics are applied to associate clusters of image regions with clusters of audio subsequences.\n\n\nMISSING CITATION\nThere is a lot of work in this area spanning computer vision, natural language, and speech recognition. One key missing reference:\n\nNgiam, et al. \"Multimodal deep learning.\" ICML 2011\n\n\nPOSITIVE POINTS\n- Using more data and an improved CNN architecture, this paper improves on prior work for bidirectional image/audio retrieval\n- The presented method performs efficient acoustic pattern discovery\n- The audio-visual grounding combined with the image and acoustic cluster analysis is successful at discovering audio-visual cluster pairs\n\nNEGATIVE POINTS\n- Limited novelty, especially compared with Harwath et al, NIPS 2016\n- Although it gives good results, the clustering method has limited novelty and feels heuristic\n- The proposed method includes many hyperparameters (patch size, acoustic duration, VAD threshold, IoU threshold, number of k-means clusters, etc) and there is no discussion of how these were set or the sensitivity of the method to these choices\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning Word-Like Units from Joint Audio-Visual Analylsis
["David Harwath", "James R. Glass"]
Given a collection of images and spoken audio captions, we present a method for discovering word-like acoustic units in the continuous speech signal and grounding them to semantically relevant image regions. For example, our model is able to detect spoken instances of the words ``lighthouse'' within an utterance and associate them with image regions containing lighthouses. We do not use any form of conventional automatic speech recognition, nor do we use any text transcriptions or conventional linguistic annotations. Our model effectively implements a form of spoken language acquisition, in which the computer learns not only to recognize word categories by sound, but also to enrich the words it learns with semantics by grounding them in images.
["Speech", "Computer vision", "Deep learning", "Multi-modal learning", "Unsupervised Learning", "Semi-Supervised Learning"]
https://openreview.net/forum?id=Bkbc-Vqeg
https://openreview.net/pdf?id=Bkbc-Vqeg
https://openreview.net/forum?id=Bkbc-Vqeg&noteId=HkMNiGbNe
ByhTZ0rNx
Bkbc-Vqeg
ICLR.cc/2017/conference/-/paper208/official/review
{"title": "Review", "rating": "6: Marginally above acceptance threshold", "review": "This paper is a follow-up on the NIPS 2016 paper \"Unsupervised learning of spoken language with visual context\", and does exactly what that paper proposes in its future work section: \"to perform acoustic segmentation and clustering, effectively learning a lexicon of word-like units\" using the embeddings that their system learns. The analysis is very interesting and I really like where the authors are going with this.\n\nMy main concern is novelty. It feels like this work is a rather trivial follow-up on an existing model, which is fine, but then the analysis should be more satisfying: currently, it feels like the authors are just illustrating some of the things that the NIPS model (with some minor improvements) learns. For a more interesting analysis, I would have liked things like a comparison of different segmentation approaches (both in audio and in images), i.e., suppose we have access to the perfect segmentation in both modalities, what happens? It would also be interesting to look at what is learned with the grounded representation, and evaluate e.g. on multi-modal semantics tasks.\n\nApart from that, the paper is well written and I really like this research direction. It is very important to analyze what models learn, and this is a good example of the types of questions one should ask. I am afraid, however, that the model is not novel enough, nor the questions deep enough, to make this paper better than borderline for ICLR.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning Word-Like Units from Joint Audio-Visual Analysis
["David Harwath", "James R. Glass"]
Given a collection of images and spoken audio captions, we present a method for discovering word-like acoustic units in the continuous speech signal and grounding them to semantically relevant image regions. For example, our model is able to detect spoken instances of the words ``lighthouse'' within an utterance and associate them with image regions containing lighthouses. We do not use any form of conventional automatic speech recognition, nor do we use any text transcriptions or conventional linguistic annotations. Our model effectively implements a form of spoken language acquisition, in which the computer learns not only to recognize word categories by sound, but also to enrich the words it learns with semantics by grounding them in images.
["Speech", "Computer vision", "Deep learning", "Multi-modal learning", "Unsupervised Learning", "Semi-Supervised Learning"]
https://openreview.net/forum?id=Bkbc-Vqeg
https://openreview.net/pdf?id=Bkbc-Vqeg
https://openreview.net/forum?id=Bkbc-Vqeg&noteId=ByhTZ0rNx
Under review as a conference paper at ICLR 2017LEARNING WORD -LIKE UNITS FROM JOINT AUDIO -VISUAL ANALYSISDavid Harwath and James R. GlassComputer Science and Artificial Intelligence LaboratoryMassachusetts Institute of TechnologyCambridge, MA 02139, USAfdharwath,glass g@mit.eduABSTRACTGiven a collection of images and spoken audio captions, we present a method fordiscovering word-like acoustic units in the continuous speech signal and ground-ing them to semantically relevant image regions. For example, our model is ableto detect spoken instances of the words “lighthouse” within an utterance and as-sociate them with image regions containing lighthouses. We do not use any formof conventional automatic speech recognition, nor do we use any text transcrip-tions or conventional linguistic annotations. Our model effectively implements aform of spoken language acquisition, in which the computer learns not only torecognize word categories by sound, but also to enrich the words it learns withsemantics by grounding them in images.1 I NTRODUCTION1.1 P ROBLEM STATEMENT AND MOTIVATIONAutomatically discovering words and other elements of linguistic structure from continuous speechhas been a longstanding goal in computational linguists, cognitive science, and other speech pro-cessing fields. Practically all humans acquire language at a very early age, but this task has provento be an incredibly difficult problem for computers. While conventional automatic speech recogni-tion (ASR) systems have a long history and have recently made great strides thanks to the revival ofdeep neural networks (DNNs), their reliance on highly supervised training paradigms has essentiallyrestricted their application to the major languages of the world, accounting for a small fraction of themore than 7,000 human languages spoken worldwide (Lewis et al., 2016). The main reason for thislimitation is the fact that these supervised approaches require enormous amounts of very expensivehuman transcripts. Moreover, the use of the written word is a convenient but limiting convention,since there are many oral languages which do not even employ a writing system. In constrast, in-fants learn to communicate verbally before they are capable of reading and writing - so there is noinherent reason why spoken language systems need to be inseparably tied to text.The key contribution of this paper has two facets. First, we introduce a methodology capable of notonly discovering word-like units from continuous speech at the waveform level with no additionaltext transcriptions or conventional speech recognition apparatus. Instead, we jointly learn the se-mantics of those units via visual associations. Although we evaluate our algorithm on an Englishcorpus, it could conceivably run on any language without requiring any text or associated ASR ca-pability. Second, from a computational perspective, our method of speech pattern discovery runs inlinear time. Previous work has presented algorithms for performing acoustic pattern discovery incontinuous speech (Park & Glass, 2008; Jansen et al., 2010; Jansen & Van Durme, 2011) withoutthe use of transcriptions or another modality, but those algorithms are limited in their ability to scaleby their inherent O(n2)complexity, since they do an exhaustive comparison of the data against it-self. Our method leverages correlated information from a second modality - the visual domain - toguide the discovery of words and phrases. 
This enables our method to run in O(n)time, and wedemonstrate it scalability by discovering acoustic patterns in over 522 hours of audio data.1Under review as a conference paper at ICLR 20171.2 P REVIOUS WORKA sub-field within speech processing that has garnered much attention recently is unsupervisedspeech pattern discovery. Segmental Dynamic Time Warping (S-DTW) was introduced by Park &Glass (2008), which discovers repetitions of the same words and phrases in a collection of untran-scribed acoustic data. Many subsequent efforts extended these ideas(Jansen et al., 2010; Jansen &Van Durme, 2011; Dredze et al., 2010; Harwath et al., 2012; Zhang & Glass, 2009). Alternativeapproaches based on Bayesian nonparametric modeling (Lee & Glass, 2012; Ondel et al., 2016)employed a generative model to cluster acoustic segments into phoneme-like categories, and relatedworks aimed to segment and cluster either reference or learned phoneme-like tokens into word-likeand higher-level units (Johnson, 2008; Goldwater et al., 2009; Lee et al., 2015).In parallel, the computer vision and NLP communities have begun to leverage deep learning tocreate multimodal models of images and text. Many works have focused on generating annotationsor text captions for images (Socher & Li, 2010; Frome et al., 2013; Socher et al., 2014; Karpathyet al., 2014; Karpathy & Li, 2015; Vinyals et al., 2015; Fang et al., 2015; Johnson et al., 2016). Oneinteresting intersection between word induction from phoneme strings and multimodal modeling ofimages and text is that of Gelderloos & Chrupaa (2016), who uses images to segment words withincaptions at the phoneme string level. Several recent papers have taken these ideas beyond text,and attempted to relate images to spoken audio captions directly at the waveform level (Harwath &Glass, 2015; Harwath et al., 2016).While supervised object detection is a standard problem in the vision community, several recentworks have tackled the problem of weakly-supervised or unsupervised object localization (Bergamoet al., 2014; Cho et al., 2015; Zhou et al., 2015; Cinbis et al., 2016). Although the focus of thiswork is discovering acoustic patterns, in the process we jointly associate the acoustic patterns withclusters of image crops, which we demonstrate capture visual patterns as well.2 E XPERIMENTAL DATAWe employ a corpus of over 200,000 spoken captions for images taken from the Places205 dataset(Zhou et al., 2014), corresponding to over 522 hours of speech data. The captions were collected us-ing Amazon’s Mechanical Turk service, in which workers were shown images and asked to describethem verbally in a free-form manner. Our data collection scheme is described in detail in Harwathet al. (2016), but the experiments in this paper leverage nearly twice the amount of data. For trainingour multimodal neural network as well as the pattern discovery experiments, we use a subset of214,585 image/caption pairs, and we hold out a set of 1,000 pairs for evaluating the performanceof the multimodal network’s retrieval ability. Because we lack ground truth text transcripts for thedata, we used Google’s Speech Recognition public API to generate proxy transcripts which we usewhen analyzing our system. Note that the ASR was only used for analysis of the results, and wasnot involved in any of the learning.3 A UDIO -VISUAL EMBEDDING NEURAL NETWORKSWe first train a deep multimodal embedding network similar in spirit to the one described in Har-wath et al. (2016), but with a more sophisticated architecture. 
The model is trained to map entireimage frames and entire spoken captions into a shared embedding space; however, as we will show,the trained network can then be used to localize patterns corresponding to words and phrases withinthe spectrogram, as well as visual objects within the image by applying it to small sub-regions ofthe image and spectrogram. The model is comprised of two branches, one which takes as input im-ages, and the other which takes as input spectrograms. The image network is formed by taking theoff-the-shelf VGG 16 layer network (Simonyan & Zisserman, 2014) and replacing the softmax clas-sification layer with a linear transform which maps the 4096-dimensional activations of the secondfully connected layer into our 1024-dimensional multimodal embedding space. In our experiments,the weights of this projection layer are trained, but the layers taken from the VGG network belowit are kept fixed. The second branch of our network analyzes speech spectrograms as if they wereblack and white images. Our spectrograms are computed using 40 log Mel filterbanks with a 25msHamming window and a 10ms shift. Therefore, the input to this branch always has 1 color channel2Under review as a conference paper at ICLR 2017and is always 40 pixels high (corresponding to the 40 Mel filterbanks), but the width of the spec-trogram varies depending upon the duration of the spoken caption, with each pixel corresponding toapproximately 10 milliseconds worth of audio. The specific network architecture we use is shownbelow, where C denotes the number of convolutional channels, W is filter width, H is filter height,and S is pooling stride.1. Convolution with C=128, W=1, H=40, ReLU2. Convolution with C=256, W=11, H=1, ReLU, maxpool with W=3, H=1, S=23. Convolution with C=512, W=17, H=1, ReLU, maxpool with W=3, H=1, S=24. Convolution with C=512, W=17, H=1, ReLU, maxpool with W=3, H=1, S=25. Convolution with C=1024, W=17, H=1, ReLU6. Meanpool over entire caption width followed by L2 normalizationIn practice during training, we restrict the caption spectrograms to all be 1024 frames wide (i.e.,10sec of speech) by applying truncation or zero padding; this introduces computational savings andwas shown in Harwath et al. (2016) to only slightly degrade the performance. Additionally, both theimages and spectrograms are mean normalized before training. The overall multimodal network isformed by tying together the image and audio branches with a layer which takes both of their outputvectors and computes an inner product between them, representing the similarity score between agiven image/caption pair. We train the network to assign high scores to matching image/captionpairs, and lower scores to mismatched pairs. The objective function and training procedure we useis identical to that described in Harwath et al. (2016), but we briefly describe it here.Within a minibatch of Bimage/caption pairs, let Spj,j= 1;:::;B denote the similarity score ofthejthimage/caption pair as output by the neural network. Next, for each pair we randomly sampleone impostor caption and one impostor image from the same minibatch. Let Sijdenote the similarityscore between the jthcaption and its impostor image, and Scjbe the similarity score between thejthimage and its impostor caption. 
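As a concrete illustration of the speech branch and the inner-product similarity just described, the following is a minimal sketch assuming PyTorch (the paper does not name a framework). The kernel shapes and pooling strides follow the layer list above, while the padding scheme, the class and function names, and the omission of the VGG-based image branch are simplifications of our own rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioBranch(nn.Module):
    """Speech branch: maps a (B, 1, 40, T) log Mel spectrogram to a
    (B, 1024) L2-normalized embedding, mirroring the layer list above."""
    def __init__(self, embed_dim=1024):
        super().__init__()
        self.conv1 = nn.Conv2d(1,   128, kernel_size=(40, 1))               # H=40, W=1
        self.conv2 = nn.Conv2d(128, 256, kernel_size=(1, 11), padding=(0, 5))
        self.conv3 = nn.Conv2d(256, 512, kernel_size=(1, 17), padding=(0, 8))
        self.conv4 = nn.Conv2d(512, 512, kernel_size=(1, 17), padding=(0, 8))
        self.conv5 = nn.Conv2d(512, embed_dim, kernel_size=(1, 17), padding=(0, 8))

    def forward(self, spec):
        x = F.relu(self.conv1(spec))                                        # -> (B, 128, 1, T)
        x = F.max_pool2d(F.relu(self.conv2(x)), (1, 3), stride=(1, 2))      # pool over time only
        x = F.max_pool2d(F.relu(self.conv3(x)), (1, 3), stride=(1, 2))
        x = F.max_pool2d(F.relu(self.conv4(x)), (1, 3), stride=(1, 2))
        x = F.relu(self.conv5(x))                                           # -> (B, 1024, 1, T')
        x = x.mean(dim=3).squeeze(2)                                        # mean-pool over caption width
        return F.normalize(x, p=2, dim=1)                                   # L2 normalization

def similarity(image_emb, audio_emb):
    """Inner product between image and caption embeddings, one score per pair."""
    return (image_emb * audio_emb).sum(dim=1)
```

With the matched-pair similarity S_j^p and the impostor similarities S_j^i and S_j^c computed by this score function, the ranking objective follows.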
The total loss for the entire minibatch is then computed as

L(\theta) = \sum_{j=1}^{B} \left[ \max(0, S_j^c - S_j^p + 1) + \max(0, S_j^i - S_j^p + 1) \right]    (1)

We train the neural network with 50 epochs of stochastic gradient descent using a batch size B = 128, a momentum of 0.9, and a learning rate of 1e-5 which is set to geometrically decay by a factor between 2 and 5 every 5 to 10 epochs.

4 FINDING AND CLUSTERING AUDIO-VISUAL CAPTION GROUNDINGS

Although we have trained our multimodal network to compute embeddings at the granularity of entire images and entire caption spectrograms, we can easily apply it in a more localized fashion. In the case of images, we can simply take any arbitrary crop of an original image and resize it to 224x224 pixels. The audio network is even more trivial to apply locally, because it is entirely convolutional and the final mean pooling layer ensures that the output will be a 1024-dim vector no matter the extent of the input. The bigger question is where to locally apply the networks in order to discover meaningful acoustic and visual patterns.

Given an image and its corresponding spoken audio caption, we use the term grounding to refer to extracting meaningful segments from the caption and associating them with an appropriate sub-region of the image. For example, if an image depicted a person eating ice cream and its caption contained the spoken words "A person is enjoying some ice cream," an ideal set of groundings would entail the acoustic segment containing the word "person" linked to a bounding box around the person, and the segment containing the word "ice cream" linked to a box around the ice cream. We use a constrained brute force ranking scheme to evaluate all possible groundings (with a restricted granularity) between an image and its caption. Specifically, we divide the image into a grid, and extract all of the image crops whose boundaries sit on the grid lines. Because we are mainly interested in extracting regions of interest and not high precision object detection boxes, to keep the number of proposal regions under control we impose several restrictions. First, we use a 10x10 grid on each image regardless of its original size. Second, we define minimum and maximum aspect ratios as 2:3 and 3:2 so as not to introduce too much distortion and also to reduce the number of proposal boxes. Third, we define a minimum bounding width as 30% of the original image width, and similarly a minimum height as 30% of the original image height. In practice, this results in a few thousand proposal regions per image.

To extract proposal segments from the audio caption spectrogram, we similarly define a 1-dim grid along the time axis, and consider all possible start/end points at 10 frame (pixel) intervals. We impose minimum and maximum segment length constraints at 50 and 100 frames (pixels), implying that our discovered acoustic patterns are restricted to fall between 0.5 and 1 second in duration. The number of proposal segments will vary depending on the caption length, and typically number in the several thousands. Note that when learning groundings we consider the entire audio sequence, and do not incorporate the 10 sec duration constraint imposed during the first stage of learning.

Figure 1: An example of our grounding method. The left image displays a grid defining the allowed start and end coordinates for the bounding box proposals. The bottom spectrogram displays several audio region proposals drawn as the families of stacked red line segments.
The image on the right and spectrogram on the top display the final output of the grounding algorithm. The top spectrogram also displays the time-aligned text transcript of the caption, so as to demonstrate which words were captured by the groundings. In this example, the top 3 groundings have been kept, with the colors indicating the audio segment which is grounded to each bounding box.

Once we have extracted a set of proposed visual bounding boxes and acoustic segments for a given image/caption pair, we use our multimodal network to compute a similarity score between each unique image crop/acoustic segment pair. Each triplet of an image crop, acoustic segment, and similarity score constitutes a proposed grounding. A naive approach would be to simply keep the top N groundings from this list, but in practice we ran into two problems with this strategy. First, many proposed acoustic segments capture mostly silence due to pauses present in natural speech. We solve this issue by using a simple voice activity detector (VAD) which was trained on the TIMIT corpus (Garofolo et al., 1993). If the VAD estimates that 40% or more of any proposed acoustic segment is silence, we discard that entire grounding. The second problem we ran into is the fact that the top of the sorted grounding list is dominated by highly overlapping acoustic segments. This makes sense, because highly informative content words will show up in many different groundings with slightly perturbed start or end times. To alleviate this issue, when evaluating a grounding from the top of the proposal list we compare the interval intersection over union (IOU) of its acoustic segment against all acoustic segments already accepted for further consideration. If the IOU exceeds a threshold of 0.1, we discard the new grounding and continue moving down the list. We stop accumulating groundings once the scores fall to below 50% of the top score in the "keep" list, or when 10 groundings have been added to the "keep" list, whichever comes first. Figure 1 displays a pictorial example of our grounding procedure.

Once we have completed the grounding procedure, we are left with a small set of regions of interest in each image and caption spectrogram. We use the respective branches of our multimodal network to compute embedding vectors for each grounding's image crop and acoustic segment. We then employ k-means clustering separately on the collection of image embedding vectors as well as the collection of acoustic embedding vectors. The last step is to establish an affinity score between each image cluster I and each acoustic cluster A; we do so using the equation

\mathrm{Affinity}(I, A) = \sum_{i \in I} \sum_{a \in A} i^{\top} a \cdot \mathrm{Pair}(i, a)    (2)

where i is an image crop embedding vector, a is an acoustic segment embedding vector, and Pair(i, a) is equal to 1 when i and a belong to the same grounding pair, and 0 otherwise. After clustering, we are left with a set of acoustic pattern clusters, a set of visual pattern clusters, and a set of linkages describing which acoustic clusters are associated with which image clusters. In the next section, we investigate the properties of these clusters in more detail.

5 EXPERIMENTS AND ANALYSIS

We trained our multimodal network on a set of 214,585 image/caption pairs, and vetted it with an image search (given caption, find image) and annotation (given image, find caption) task similar to the one used in Harwath et al. (2016); Karpathy et al. (2014); Karpathy & Li (2015).
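Before turning to the results, the grounding selection and the cluster affinity of Equation (2) described above can be summarized in the sketch below. It is illustrative only and not the authors' code: the proposal data structures and the silence_fraction stand-in for the TIMIT-trained voice activity detector are hypothetical, and the k-means step is assumed to have already produced integer cluster assignments for the image and acoustic embeddings.

```python
import numpy as np

def interval_iou(a, b):
    """Intersection over union of two 1-D time intervals given as (start, end)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def select_groundings(proposals, silence_fraction, max_keep=10,
                      iou_thresh=0.1, silence_thresh=0.4, score_ratio=0.5):
    """proposals: (image_box, (t_start, t_end), score) triplets scored by the
    multimodal network; silence_fraction(seg) plays the role of the VAD."""
    keep = []
    for box, seg, score in sorted(proposals, key=lambda p: p[2], reverse=True):
        if keep and score < score_ratio * keep[0][2]:
            break                                  # fell below 50% of the top kept score
        if silence_fraction(seg) >= silence_thresh:
            continue                               # mostly silence: discard the grounding
        if any(interval_iou(seg, s) > iou_thresh for _, s, _ in keep):
            continue                               # overlaps an already-accepted segment
        keep.append((box, seg, score))
        if len(keep) == max_keep:
            break
    return keep

def cluster_affinity(img_embs, aud_embs, img_labels, aud_labels, k_img, k_aud):
    """Equation (2): for each grounding pair n, accumulate the inner product of
    img_embs[n] and aud_embs[n] into its (image cluster, audio cluster) cell."""
    affinity = np.zeros((k_img, k_aud))
    dots = np.sum(img_embs * aud_embs, axis=1)
    for n in range(len(dots)):
        affinity[img_labels[n], aud_labels[n]] += dots[n]
    return affinity
```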
The imageannotation and search recall scores on a 1,000 image/caption pair held-out test set are shown inTable 1, and are compared against the model architecture used in Harwath et al. (2016). We thenperformed the grounding and pattern clustering steps on the entire training dataset. This resulted ina total of 1,161,305 unique grounding pairs.In order to evaluate the acoustic pattern discovery and clustering, we wish to assign a label to eachcluster and cluster member, but this is not completely straightforward since each acoustic segmentmay capture part of a word, a whole word, multiple words, etc. Our strategy is to force-align theGoogle recognition hypothesis text to the audio, and then assign a label string to each acousticsegment based upon which words it overlaps in time. The alignments are created with the help of aKaldi (Povey et al., 2011) speech recognizer based on the standard WSJ recipe and trained using theGoogle ASR hypothesis as a proxy for the transcriptions. Any word whose duration is overlapped30% or more by the acoustic segment is included in the label string for the segment. We thenemploy a majority vote scheme to derive the overall cluster labels. When computing the purity of acluster, we count a cluster member as matching the cluster label as long as the overall cluster labelappears in the member’s label string. In other words, an acoustic segment overlapping the words “thelighthouse” would receive credit for matching the overall cluster label “lighthouse”. Several exampleclusters and a breakdown of the labels of their members are shown in Table 2. We investigated somesimple schemes for predicting highly pure clusters, and found that the empirical variance of thecluster members (average squared distance to the cluster centroid) was a good indicator. Figure 2displays a scatter plot of cluster purity weighted by the natural log of the cluster size against theempirical variance. Large, pure clusters are easily predicted by their low empirical variance, whilea high empirical variance is indicative of a garbage cluster.Ranking a set of k= 500 acoustic clusters by their variance, Table 3 displays some statistics for the50 lowest-variance clusters. We see that most of the clusters are very large and highly pure, and theirlabels reflect interesting object categories being identified by the neural network. We additionallycompute the coverage of each cluster by counting the total number of instances of the cluster labelanywhere in the training data, and then compute what fraction of those instances were capturedby the cluster. We notice many examples of high coverage clusters, e.g. the “skyscraper” clustercaptures 84% of all occurrences of the word “skyscraper” anywhere in the training data, while the“baseball” cluster captures 86% of all occurrences of the word “baseball”. This is quite impressivegiven the fact that no conventional speech recognition was employed, and neither the multimodalneural network nor the grounding algorithm had access to the text transcripts of the captions.To get an idea of the impact of the kparameter as well as a variance-based cluster pruning thresholdbased on Figure 2, we swept kfrom 250 to 2000 and computed a set of statistics shown in Table4. We compute the standard overall cluster purity evaluation metric in addition to the average cov-erage across clusters. The table shows the natural tradeoff between cluster purity and redundancy5Under review as a conference paper at ICLR 2017(indicated by the average cluster coverage) as kis increased. 
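The cluster labeling, purity, coverage, and empirical-variance computations described above reduce to a few lines. The sketch below is illustrative rather than the authors' code: it assumes the forced-alignment step has already produced one label string per cluster member, and it approximates coverage by counting members whose label string contains the cluster label.

```python
import numpy as np
from collections import Counter

def cluster_label_and_purity(member_label_strings):
    """member_label_strings: for one acoustic cluster, the label string of each
    member, i.e. the words its segment overlaps by at least 30% in time."""
    votes = Counter(w for s in member_label_strings for w in s.split())
    label = votes.most_common(1)[0][0]             # majority-vote cluster label
    # A member matches if the cluster label appears anywhere in its label string,
    # so "the lighthouse" counts as a match for the label "lighthouse".
    matches = sum(1 for s in member_label_strings if label in s.split())
    return label, matches / len(member_label_strings)

def cluster_coverage(label, member_label_strings, total_word_counts):
    """Fraction of all training-set occurrences of `label` captured by the cluster."""
    captured = sum(1 for s in member_label_strings if label in s.split())
    return captured / total_word_counts[label]

def empirical_variance(embeddings):
    """Average squared distance of the cluster members to their centroid."""
    centroid = embeddings.mean(axis=0)
    return float(np.mean(np.sum((embeddings - centroid) ** 2, axis=1)))
```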
In all cases, the variance-based clus-ter pruning greatly increases both the overall purity and average cluster coverage metrics. We alsonotice that more unique cluster labels are discovered with a larger k.Next, we examine the image clusters. Figure 3 displays the 9 most central image crops for a setof 10 different image clusters, along with the majority-vote label of each image cluster’s associatedaudio cluster. In all cases, we see that the image crops are highly relevant to their audio cluster label.We include many more example image clusters in Appendix A.Finally, we wish to examine the semantic embedding space in more depth. We took the top 150clusters from the same k= 500 clustering run described in Table 3 and performed t-SNE (van derMaaten & Hinton, 2008) analysis on the cluster centroid vectors. We projected each centroid downto 2 dimensions and plotted their majority-vote labels in Figure 4. Immediately we see that differentclusters which capture the same label closely neighbor one another, indicating that distances in theembedding space do indeed carry information discriminative across word types (and suggesting thata more sophisticated clustering algorithm than k-means would perform better). More interestingly,we see that semantic information is also reflected in these distances. The cluster centroids for “lake,”“river,” “body,” “water,” “waterfall,” “pond,” and “pool” all form a tight meta-cluster, as do “restau-rant,” “store,” “shop,” and “shelves,” as well as “children,” “girl,” “woman,” and “man.” Many othersemantic meta-clusters can be seen in Figure 4, suggesting that the embedding space is capturinginformation that is highly discriminative both acoustically andsemantically.Table 1: Results for image search and annotation on the Places audio caption data (214k trainingpairs, 1k testing pairs). Recall is shown for the top 1, 5, and 10 hits. The model we use in thispaper is compared against the meanpool variant of the model architecture presented in Harwathet al. (2016). For both training and testing, the captions were truncated/zero-padded to 10 seconds.Search AnnotationModel R@1 R@5 R@10 R@1 R@5 R@10(Harwath et al., 2016) 0.090 0.261 0.372 0.098 0.266 0.352This work 0.112 0.312 0.431 0.120 0.307 0.438Figure 2: Scatter plot of audio cluster purityweighted by log cluster size against clustervariance for k= 500 (least-squares line su-perimposed).Word Count Word Countocean 2150 castle 766(silence) 127 (silence) 70the ocean 72 capital 39blue ocean 29 large castle 24body ocean 22 castles 23oceans 16 (noise) 21ocean water 16 council 13(noise) 15 stone castle 12of ocean 14 capitol 10oceanside 14 old castle 10Table 2: Examples of the breakdown ofword/phrase identities of several acoustic clusters6 C ONCLUSIONS AND FUTURE WORKIn this paper, we have demonstrated that a neural network trained to associate images with the wave-forms representing their spoken audio captions can successfully be applied to discover and clusteracoustic patterns representing words or short phrases in untranscribed audio data. An analogousprocedure can be applied to visual images to discover visual patterns, and then the two modali-6Under review as a conference paper at ICLR 2017sky grass sunset ocean rivercastle couch wooden lighthouse trainFigure 3: The 9 most central image crops from several image clusters, along with the majority-votelabel of their most associated acoustic pattern clusterTable 3: Top 50 clusters with k= 500 sorted by increasing variance. 
Legend: jCcjis acousticcluster size,jCijis associated image cluster size, Pur. is acoustic cluster purity, 2is acousticcluster variance, and Cov. is acoustic cluster coverage. A dash (-) indicates a cluster whose majoritylabel is silence.Trans jCcj jCij Pur.2Cov. Trans jCcj jCij Pur.2Cov.- 1059 3480 0.70 0.26 - snow 4331 3480 0.85 0.26 0.45desert 1936 2896 0.82 0.27 0.67 kitchen 3200 2990 0.88 0.28 0.76restaurant 1921 2536 0.89 0.29 0.71 mountain 4571 2768 0.86 0.30 0.38black 4369 2387 0.64 0.30 0.17 skyscraper 843 3205 0.84 0.30 0.84bridge 1654 2025 0.84 0.30 0.25 tree 5303 3758 0.90 0.30 0.16castle 1298 2887 0.72 0.31 0.74 bridge 2779 2025 0.81 0.32 0.41- 2349 2165 0.31 0.33 - ocean 2913 3505 0.87 0.33 0.71table 3765 2165 0.94 0.33 0.23 windmill 1458 3752 0.71 0.33 0.76window 1890 2795 0.85 0.34 0.21 river 2643 3204 0.76 0.35 0.62water 5868 3204 0.90 0.35 0.27 beach 1897 2964 0.79 0.35 0.64flower 3906 2587 0.92 0.35 0.67 wall 3158 3636 0.84 0.35 0.23sky 4306 6055 0.76 0.36 0.34 street 2602 2385 0.86 0.36 0.49golf course 1678 3864 0.44 0.36 0.63 field 3896 3261 0.74 0.36 0.37tree 4098 3758 0.89 0.36 0.13 lighthouse 1254 1518 0.61 0.36 0.83forest 1752 3431 0.80 0.37 0.56 church 2503 3140 0.86 0.37 0.72people 3624 2275 0.91 0.37 0.14 baseball 2777 1929 0.66 0.37 0.86field 2603 3922 0.74 0.37 0.25 car 3442 2118 0.79 0.38 0.27people 4074 2286 0.92 0.38 0.17 shower 1271 2206 0.74 0.38 0.82people walking 918 2224 0.63 0.38 0.25 wooden 3095 2723 0.63 0.38 0.28mountain 3464 3239 0.88 0.38 0.29 tree 3676 2393 0.89 0.39 0.11- 1976 3158 0.28 0.39 - snow 2521 3480 0.79 0.39 0.24water 3102 2948 0.90 0.39 0.14 rock 2897 2967 0.76 0.39 0.26- 2918 3459 0.08 0.39 - night 3027 3185 0.44 0.39 0.59station 2063 2083 0.85 0.39 0.62 chair 2589 2288 0.89 0.39 0.22building 6791 3450 0.89 0.40 0.21 city 2951 3190 0.67 0.40 0.50ties can be linked, allowing the network to learn e.g. that spoken instances of the word “train” areassociated with image regions containing trains. This is done without the use of a conventional au-tomatic speech recognition system and zero text transcriptions, and therefore is completely agnosticto the language in which the captions are spoken. Further, this is done in O(n)time with respectto the number of image/caption pairs, whereas previous state-of-the-art acoustic pattern discoveryalgorithms which leveraged acoustic data alone run in O(n2)time. We demonstrate the success ofour methodology on a large-scale dataset of over 214,000 image/caption pairs, comprising over 522hours of spoken audio data. We have shown that the shared multimodal embedding space learnedby our model is discriminative not only across visual object categories, but also acoustically andse-mantically across spoken words. To the best of our knowledge, this paper contains by far the largestscale speech pattern discovery experiment ever performed, as well as the first ever successful effort7Under review as a conference paper at ICLR 2017Table 4: Clustering statistics of the acoustic clusters for various values of kand different settingsof the variance-based cluster pruning threshold. 
Legend: jCj= number of clusters remaining afterpruning,jXj= number of datapoints after pruning, Pur = purity, jLj= number of unique clusterlabels, AC = average cluster coverage2<0:9 2<0:65kjCj jXj PurjLj ACjCj jXj PurjLj AC250 249 1081514 .364 149 .423 128 548866 .575 108 .463500 499 1097225 .396 242 .332 278 623159 .591 196 .375750 749 1101151 .409 308 .406 434 668771 .585 255 .4501000 999 1103391 .411 373 .336 622 710081 .568 318 .3821500 1496 1104631 .429 464 .316 971 750162 .566 413 .3662000 1992 1106418 .431 540 .237 1354 790492 .546 484 .271Figure 4: t-SNE analysis of the 150 lowest-variance audio pattern cluster centroids for k= 500 .Displayed is the majority-vote transcription of the each audio cluster. All clusters shown containeda minimum of 583 members and an average of 2482, with an average purity of .668.to learn the semantics of the discovered acoustic patterns by grounding them to patterns which arejointly discovered in another modality (images).The future directions in which this research could be taken are incredibly fertile. Because our methodcreates a segmentation as well as an alignment between images and their spoken captions, a genera-tive model could be trained using these alignments. The model could provide a spoken caption for anarbitrary image, or even synthesize an image given a spoken description. Modeling improvementsare also possible, aimed at the goal of incorporating both visual and acoustic localization into theneural network itself. Additionally, by collecting a second dataset of captions for our images in a dif-ferent language, such as Spanish, our model could be extended to learn the acoustic correspondencesfor a given object category in both languages. This paves the way for creating a speech-to-speechtranslation model not only with absolutely zero need for any sort of text transcriptions, but also withzero need for directly parallel linguistic data or manual human translations.REFERENCESAlessandro Bergamo, Loris Bazzani, Dragomir Anguelov, and Lorenzo Torresani. Self-taught object localiza-tion with deep networks. CoRR , abs/1409.3964, 2014. URL http://arxiv.org/abs/1409.3964 .Minsu Cho, Suha Kwak, Cordelia Schmid, and Jean Ponce. Unsupervised object discovery and localization inthe wild: Part-based matching with bottom-up region proposals. In Proceedings of CVPR , 2015.8Under review as a conference paper at ICLR 2017Ramazan Cinbis, Jakob Verbeek, and Cordelia Schmid. Weakly supervised object localization with multi-foldmultiple instance learning. In IEEE Transactions on Pattern Analysis and Machine Intelligence , 2016.Mark Dredze, Aren Jansen, Glen Coppersmith, and Kenneth Church. NLP on spoken documents without ASR.InProceedings of EMNLP , 2010.Hao Fang, Saurabh Gupta, Forrest Iandola, Srivastava Rupesh, Li Deng, Piotr Dollar, Jianfeng Gao, XiaodongHe, Margaret Mitchell, Platt John C., C. Lawrence Zitnick, and Geoffrey Zweig. From captions to visualconcepts and back. In Proceedings of CVPR , 2015.Andrea Frome, Greg S. Corrado, Jonathon Shlens, Samy Bengio, Jeffrey Dean, Marc’Aurelio Ranzato, andTomas Mikolov. Devise: A deep visual-semantic embedding model. In Proceedings of the Neural Informa-tion Processing Society , 2013.John Garofolo, Lori Lamel, William Fisher, Jonathan Fiscus, David Pallet, Nancy Dahlgren, and Victor Zue.The TIMIT acoustic-phonetic continuous speech corpus, 1993.Lieke Gelderloos and Grzegorz Chrupaa. From phonemes to images: levels of representation in a recurrentneural model of visually-grounded language learning. 
In arXiv:1610.03342 , 2016.Sharon Goldwater, Thomas Griffiths, and Mark Johnson. A Bayesian framework for word segmentation: ex-ploring the effects of context. In Cognition, vol. 112 pp.21-54 , 2009.David Harwath and James Glass. Deep multimodal semantic embeddings for speech and images. In Proceed-ings of the IEEE Workshop on Automatic Speech Recognition and Understanding , 2015.David Harwath, Timothy J. Hazen, and James Glass. Zero resource spoken audio corpus analysis. In Proceed-ings of ICASSP , 2012.David Harwath, Antonio Torralba, and James R. Glass. Unsupervised learning of spoken language with visualcontext. In Proceedings of NIPS , 2016.Aren Jansen and Benjamin Van Durme. Efficient spoken term discovery using randomized algorithms. InProceedings of IEEE Workshop on Automatic Speech Recognition and Understanding , 2011.Aren Jansen, Kenneth Church, and Hynek Hermansky. Toward spoken term discovery at scale with zeroresources. In Proceedings of Interspeech , 2010.Justin Johnson, Andrej Karpathy, and Li Fei-Fei. Densecap: Fully convolutional localization networks fordense captioning. In Proceedings of CVPR , 2016.Mark Johnson. Unsupervised word segmentation for sesotho using adaptor grammars. In Proceedings of ACLSIG on Computational Morphology and Phonology , 2008.Andrej Karpathy and Fei-Fei Li. Deep visual-semantic alignments for generating image descriptions. InProceedings of CVPR , 2015.Andrej Karpathy, Armand Joulin, and Fei-Fei Li. Deep fragment embeddings for bidirectional image sentencemapping. In Proceedings of the Neural Information Processing Society , 2014.Chia-Ying Lee and James Glass. A nonparametric Bayesian approach to acoustic model discovery. In Proceed-ings of the 2012 meeting of the Association for Computational Linguistics , 2012.Chia-Ying Lee, Timothy J. O’Donnell, and James Glass. Unsupervised lexicon discovery from acoustic input.InTransactions of the Association for Computational Linguistics , 2015.M. Paul Lewis, Gary F. Simon, and Charles D. Fennig. Ethnologue: Languages of the World, Nineteenthedition . SIL International. Online version: http://www.ethnologue.com, 2016.Lucas Ondel, Lukas Burget, and Jan Cernocky. Variational inference for acoustic unit discovery. In 5th Work-shop on Spoken Language Technology for Under-resourced Language , 2016.Alex Park and James Glass. Unsupervised pattern discovery in speech. In IEEE Transactions on Audio, Speech,and Language Processing vol. 16, no.1, pp. 186-197 , 2008.Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Han-nemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, Jan Silovsky, Georg Stemmer, and Karel Vesely. TheKaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Under-standing , 2011.Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition.CoRR , abs/1409.1556, 2014.Richard Socher and Fei-Fei Li. Connecting modalities: Semi-supervised segmentation and annotation of im-ages using unaligned text corpora. In Proceedings of CVPR , 2010.Richard Socher, Andrej Karpathy, Quoc V . Le, Christopher D. Manning, and Andrew Y . Ng. Grounded com-positional semantics for finding and describing images with sentences. In Transactions of the Associationfor Computational Linguistics , 2014.Laurens van der Maaten and Geoffrey Hinton. Visualizing high-dimensional data using t-sne. In Journal ofMachine Learning Research , 2008.Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dimitru Erhan. 
Show and tell: A neural image captiongenerator. In Proceedings of CVPR , 2015.Yaodong Zhang and James Glass. Unsupervised spoken keyword spotting via segmental DTW on Gaussianposteriorgrams. In Proceedings ASRU , 2009.Bolei Zhou, Agata Lapedriza, Jianxiong Xiao, Antonio Torralba, and Aude Oliva. Learning deep features forscene recognition using places database. In Proceedings of the Neural Information Processing Society , 2014.Boloi Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Object detectors emerge indeep scene CNNs. In Proceedings of ICLR , 2015.9Under review as a conference paper at ICLR 2017A A PPENDIX : ADDITIONAL VISUALIZATIONS OF IMAGE PATTERNCLUSTERSbeach cliff pool desert fieldchair table staircase statue stonechurch forest mountain skyscraper treeswaterfall windmills window city bridgeflowers man wall archway baseballboat shelves cockpit girl childrenbuilding rock kitchen plant hallway10
BJN_Eab4e
Bkbc-Vqeg
ICLR.cc/2017/conference/-/paper208/official/review
{"title": "Learning word-like units from joint audio-visual analysis", "rating": "5: Marginally below acceptance threshold", "review": "This work proposes a joint classification of images and audio captions for the task of word like discovery of acoustic units that correlate to semantically visual objects. The general this is a very interesting direction of research as it allows for a richer representation of data: regularizing visual signal with audio and visa versa. This allows for training of visual models from video, etc. \n\nA major concern is the amount of novelty between this work and the author's previous publication at NIPs 2016. The authors claim a more sophisticated architecture and indeed show an improvement in recall. However, the improvements are marginal, and the added complexity to the architecture is a bit ad hoc. Clustering and grouping in section 4, is hacky. Instead of gridding the image, the authors could actually use an object detector (SSD, Yolo, FasterRCNN, etc.) to estimate accurate object proposals; rather than using k-means, a spectral clustering approach would alleviate the gaussian assumption of the distributions. In assigning visual hypotheses with acoustic segments, some form of bi-partite matching should be used.\n\nOverall, I really like this direction of research, and encourage the authors to continue developing algorithms that can train from such multimodal datasets. However, the work isn't quite novel enough from NIPs 2016.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Learning Word-Like Units from Joint Audio-Visual Analysis
["David Harwath", "James R. Glass"]
Given a collection of images and spoken audio captions, we present a method for discovering word-like acoustic units in the continuous speech signal and grounding them to semantically relevant image regions. For example, our model is able to detect spoken instances of the words ``lighthouse'' within an utterance and associate them with image regions containing lighthouses. We do not use any form of conventional automatic speech recognition, nor do we use any text transcriptions or conventional linguistic annotations. Our model effectively implements a form of spoken language acquisition, in which the computer learns not only to recognize word categories by sound, but also to enrich the words it learns with semantics by grounding them in images.
["Speech", "Computer vision", "Deep learning", "Multi-modal learning", "Unsupervised Learning", "Semi-Supervised Learning"]
https://openreview.net/forum?id=Bkbc-Vqeg
https://openreview.net/pdf?id=Bkbc-Vqeg
https://openreview.net/forum?id=Bkbc-Vqeg&noteId=BJN_Eab4e
Under review as a conference paper at ICLR 2017LEARNING WORD -LIKE UNITS FROM JOINT AUDIO -VISUAL ANALYSISDavid Harwath and James R. GlassComputer Science and Artificial Intelligence LaboratoryMassachusetts Institute of TechnologyCambridge, MA 02139, USAfdharwath,glass g@mit.eduABSTRACTGiven a collection of images and spoken audio captions, we present a method fordiscovering word-like acoustic units in the continuous speech signal and ground-ing them to semantically relevant image regions. For example, our model is ableto detect spoken instances of the words “lighthouse” within an utterance and as-sociate them with image regions containing lighthouses. We do not use any formof conventional automatic speech recognition, nor do we use any text transcrip-tions or conventional linguistic annotations. Our model effectively implements aform of spoken language acquisition, in which the computer learns not only torecognize word categories by sound, but also to enrich the words it learns withsemantics by grounding them in images.1 I NTRODUCTION1.1 P ROBLEM STATEMENT AND MOTIVATIONAutomatically discovering words and other elements of linguistic structure from continuous speechhas been a longstanding goal in computational linguists, cognitive science, and other speech pro-cessing fields. Practically all humans acquire language at a very early age, but this task has provento be an incredibly difficult problem for computers. While conventional automatic speech recogni-tion (ASR) systems have a long history and have recently made great strides thanks to the revival ofdeep neural networks (DNNs), their reliance on highly supervised training paradigms has essentiallyrestricted their application to the major languages of the world, accounting for a small fraction of themore than 7,000 human languages spoken worldwide (Lewis et al., 2016). The main reason for thislimitation is the fact that these supervised approaches require enormous amounts of very expensivehuman transcripts. Moreover, the use of the written word is a convenient but limiting convention,since there are many oral languages which do not even employ a writing system. In constrast, in-fants learn to communicate verbally before they are capable of reading and writing - so there is noinherent reason why spoken language systems need to be inseparably tied to text.The key contribution of this paper has two facets. First, we introduce a methodology capable of notonly discovering word-like units from continuous speech at the waveform level with no additionaltext transcriptions or conventional speech recognition apparatus. Instead, we jointly learn the se-mantics of those units via visual associations. Although we evaluate our algorithm on an Englishcorpus, it could conceivably run on any language without requiring any text or associated ASR ca-pability. Second, from a computational perspective, our method of speech pattern discovery runs inlinear time. Previous work has presented algorithms for performing acoustic pattern discovery incontinuous speech (Park & Glass, 2008; Jansen et al., 2010; Jansen & Van Durme, 2011) withoutthe use of transcriptions or another modality, but those algorithms are limited in their ability to scaleby their inherent O(n2)complexity, since they do an exhaustive comparison of the data against it-self. Our method leverages correlated information from a second modality - the visual domain - toguide the discovery of words and phrases. 
This enables our method to run in O(n)time, and wedemonstrate it scalability by discovering acoustic patterns in over 522 hours of audio data.1Under review as a conference paper at ICLR 20171.2 P REVIOUS WORKA sub-field within speech processing that has garnered much attention recently is unsupervisedspeech pattern discovery. Segmental Dynamic Time Warping (S-DTW) was introduced by Park &Glass (2008), which discovers repetitions of the same words and phrases in a collection of untran-scribed acoustic data. Many subsequent efforts extended these ideas(Jansen et al., 2010; Jansen &Van Durme, 2011; Dredze et al., 2010; Harwath et al., 2012; Zhang & Glass, 2009). Alternativeapproaches based on Bayesian nonparametric modeling (Lee & Glass, 2012; Ondel et al., 2016)employed a generative model to cluster acoustic segments into phoneme-like categories, and relatedworks aimed to segment and cluster either reference or learned phoneme-like tokens into word-likeand higher-level units (Johnson, 2008; Goldwater et al., 2009; Lee et al., 2015).In parallel, the computer vision and NLP communities have begun to leverage deep learning tocreate multimodal models of images and text. Many works have focused on generating annotationsor text captions for images (Socher & Li, 2010; Frome et al., 2013; Socher et al., 2014; Karpathyet al., 2014; Karpathy & Li, 2015; Vinyals et al., 2015; Fang et al., 2015; Johnson et al., 2016). Oneinteresting intersection between word induction from phoneme strings and multimodal modeling ofimages and text is that of Gelderloos & Chrupaa (2016), who uses images to segment words withincaptions at the phoneme string level. Several recent papers have taken these ideas beyond text,and attempted to relate images to spoken audio captions directly at the waveform level (Harwath &Glass, 2015; Harwath et al., 2016).While supervised object detection is a standard problem in the vision community, several recentworks have tackled the problem of weakly-supervised or unsupervised object localization (Bergamoet al., 2014; Cho et al., 2015; Zhou et al., 2015; Cinbis et al., 2016). Although the focus of thiswork is discovering acoustic patterns, in the process we jointly associate the acoustic patterns withclusters of image crops, which we demonstrate capture visual patterns as well.2 E XPERIMENTAL DATAWe employ a corpus of over 200,000 spoken captions for images taken from the Places205 dataset(Zhou et al., 2014), corresponding to over 522 hours of speech data. The captions were collected us-ing Amazon’s Mechanical Turk service, in which workers were shown images and asked to describethem verbally in a free-form manner. Our data collection scheme is described in detail in Harwathet al. (2016), but the experiments in this paper leverage nearly twice the amount of data. For trainingour multimodal neural network as well as the pattern discovery experiments, we use a subset of214,585 image/caption pairs, and we hold out a set of 1,000 pairs for evaluating the performanceof the multimodal network’s retrieval ability. Because we lack ground truth text transcripts for thedata, we used Google’s Speech Recognition public API to generate proxy transcripts which we usewhen analyzing our system. Note that the ASR was only used for analysis of the results, and wasnot involved in any of the learning.3 A UDIO -VISUAL EMBEDDING NEURAL NETWORKSWe first train a deep multimodal embedding network similar in spirit to the one described in Har-wath et al. (2016), but with a more sophisticated architecture. 
The model is trained to map entireimage frames and entire spoken captions into a shared embedding space; however, as we will show,the trained network can then be used to localize patterns corresponding to words and phrases withinthe spectrogram, as well as visual objects within the image by applying it to small sub-regions ofthe image and spectrogram. The model is comprised of two branches, one which takes as input im-ages, and the other which takes as input spectrograms. The image network is formed by taking theoff-the-shelf VGG 16 layer network (Simonyan & Zisserman, 2014) and replacing the softmax clas-sification layer with a linear transform which maps the 4096-dimensional activations of the secondfully connected layer into our 1024-dimensional multimodal embedding space. In our experiments,the weights of this projection layer are trained, but the layers taken from the VGG network belowit are kept fixed. The second branch of our network analyzes speech spectrograms as if they wereblack and white images. Our spectrograms are computed using 40 log Mel filterbanks with a 25msHamming window and a 10ms shift. Therefore, the input to this branch always has 1 color channel2Under review as a conference paper at ICLR 2017and is always 40 pixels high (corresponding to the 40 Mel filterbanks), but the width of the spec-trogram varies depending upon the duration of the spoken caption, with each pixel corresponding toapproximately 10 milliseconds worth of audio. The specific network architecture we use is shownbelow, where C denotes the number of convolutional channels, W is filter width, H is filter height,and S is pooling stride.1. Convolution with C=128, W=1, H=40, ReLU2. Convolution with C=256, W=11, H=1, ReLU, maxpool with W=3, H=1, S=23. Convolution with C=512, W=17, H=1, ReLU, maxpool with W=3, H=1, S=24. Convolution with C=512, W=17, H=1, ReLU, maxpool with W=3, H=1, S=25. Convolution with C=1024, W=17, H=1, ReLU6. Meanpool over entire caption width followed by L2 normalizationIn practice during training, we restrict the caption spectrograms to all be 1024 frames wide (i.e.,10sec of speech) by applying truncation or zero padding; this introduces computational savings andwas shown in Harwath et al. (2016) to only slightly degrade the performance. Additionally, both theimages and spectrograms are mean normalized before training. The overall multimodal network isformed by tying together the image and audio branches with a layer which takes both of their outputvectors and computes an inner product between them, representing the similarity score between agiven image/caption pair. We train the network to assign high scores to matching image/captionpairs, and lower scores to mismatched pairs. The objective function and training procedure we useis identical to that described in Harwath et al. (2016), but we briefly describe it here.Within a minibatch of Bimage/caption pairs, let Spj,j= 1;:::;B denote the similarity score ofthejthimage/caption pair as output by the neural network. Next, for each pair we randomly sampleone impostor caption and one impostor image from the same minibatch. Let Sijdenote the similarityscore between the jthcaption and its impostor image, and Scjbe the similarity score between thejthimage and its impostor caption. 
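To make the layer list above concrete, the following PyTorch sketch implements a spectrogram branch with those filter sizes, together with the inner-product similarity score between corresponding image and caption embeddings. It is our own reading rather than the authors' implementation; the use of PyTorch and the padding values (unspecified in the text) are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioBranch(nn.Module):
    """Spectrogram branch following the layer sizes listed above; padding values are
    our own assumption since the text does not specify them."""
    def __init__(self, embed_dim=1024):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 128, kernel_size=(40, 1))            # collapse the 40 Mel bands
        self.conv2 = nn.Conv2d(128, 256, kernel_size=(1, 11), padding=(0, 5))
        self.conv3 = nn.Conv2d(256, 512, kernel_size=(1, 17), padding=(0, 8))
        self.conv4 = nn.Conv2d(512, 512, kernel_size=(1, 17), padding=(0, 8))
        self.conv5 = nn.Conv2d(512, embed_dim, kernel_size=(1, 17), padding=(0, 8))
        self.pool = nn.MaxPool2d(kernel_size=(1, 3), stride=(1, 2), padding=(0, 1))

    def forward(self, spec):                       # spec: (batch, 1, 40, n_frames)
        x = F.relu(self.conv1(spec))
        x = self.pool(F.relu(self.conv2(x)))
        x = self.pool(F.relu(self.conv3(x)))
        x = self.pool(F.relu(self.conv4(x)))
        x = F.relu(self.conv5(x))
        x = x.mean(dim=3).squeeze(2)               # mean-pool over the caption width
        return F.normalize(x, p=2, dim=1)          # L2 normalization

def similarity_scores(image_emb, audio_emb):
    """Inner product between corresponding rows of the two embedding matrices."""
    return (image_emb * audio_emb).sum(dim=1)
```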
The total loss for the entire minibatch is then computed as

L(\theta) = \sum_{j=1}^{B} \left[ \max(0, S_j^c - S_j^p + 1) + \max(0, S_j^i - S_j^p + 1) \right]   (1)

We train the neural network with 50 epochs of stochastic gradient descent using a batch size B = 128, a momentum of 0.9, and a learning rate of 1e-5 which is set to geometrically decay by a factor between 2 and 5 every 5 to 10 epochs.

4 FINDING AND CLUSTERING AUDIO-VISUAL CAPTION GROUNDINGS

Although we have trained our multimodal network to compute embeddings at the granularity of entire images and entire caption spectrograms, we can easily apply it in a more localized fashion. In the case of images, we can simply take any arbitrary crop of an original image and resize it to 224x224 pixels. The audio network is even more trivial to apply locally, because it is entirely convolutional and the final mean pooling layer ensures that the output will be a 1024-dim vector no matter the extent of the input. The bigger question is where to locally apply the networks in order to discover meaningful acoustic and visual patterns.

Given an image and its corresponding spoken audio caption, we use the term grounding to refer to extracting meaningful segments from the caption and associating them with an appropriate sub-region of the image. For example, if an image depicted a person eating ice cream and its caption contained the spoken words "A person is enjoying some ice cream," an ideal set of groundings would entail the acoustic segment containing the word "person" linked to a bounding box around the person, and the segment containing the word "ice cream" linked to a box around the ice cream. We use a constrained brute force ranking scheme to evaluate all possible groundings (with a restricted granularity) between an image and its caption. Specifically, we divide the image into a grid, and extract all of the image crops whose boundaries sit on the grid lines. Because we are mainly interested in extracting regions of interest and not high precision object detection boxes, to keep the number of proposal regions under control we impose several restrictions. First, we use a 10x10 grid on each image regardless of its original size. Second, we define minimum and maximum aspect ratios as 2:3 and 3:2 so as not to introduce too much distortion and also to reduce the number of proposal boxes. Third, we define a minimum bounding width as 30% of the original image width, and similarly a minimum height as 30% of the original image height. In practice, this results in a few thousand proposal regions per image.

To extract proposal segments from the audio caption spectrogram, we similarly define a 1-dim grid along the time axis, and consider all possible start/end points at 10 frame (pixel) intervals. We impose minimum and maximum segment length constraints at 50 and 100 frames (pixels), implying that our discovered acoustic patterns are restricted to fall between 0.5 and 1 second in duration. The number of proposal segments will vary depending on the caption length, and typically number in the several thousands. Note that when learning groundings we consider the entire audio sequence, and do not incorporate the 10 sec duration constraint imposed during the first stage of learning.

Figure 1: An example of our grounding method. The left image displays a grid defining the allowed start and end coordinates for the bounding box proposals. The bottom spectrogram displays several audio region proposals drawn as the families of stacked red line segments.
The image on the rightand spectrogram on the top display the final output of the grounding algorithm. The top spectrogramalso displays the time-aligned text transcript of the caption, so as to demonstrate which words werecaptured by the groundings. In this example, the top 3 groundings have been kept, with the colorsindicating the audio segment which is grounded to each bounding box.Once we have extracted a set of proposed visual bounding boxes and acoustic segments for a givenimage/caption pair, we use our multimodal network to compute a similarity score between eachunique image crop/acoustic segment pair. Each triplet of an image crop, acoustic segment, andsimilarity score constitutes a proposed grounding. A naive approach would be to simply keep thetopNgroundings from this list, but in practice we ran into two problems with this strategy. First,many proposed acoustic segments capture mostly silence due to pauses present in natural speech.We solve this issue by using a simple voice activity detector (V AD) which was trained on the TIMITcorpus(Garofolo et al., 1993). If the V AD estimates that 40% or more of any proposed acousticsegment is silence, we discard that entire grounding. The second problem we ran into is the factthat the top of the sorted grounding list is dominated by highly overlapping acoustic segments. Thismakes sense, because highly informative content words will show up in many different groundingswith slightly perturbed start or end times. To alleviate this issue, when evaluating a grounding fromthe top of the proposal list we compare the interval intersection over union (IOU) of its acousticsegment against all acoustic segments already accepted for further consideration. If the IOU exceedsa threshold of 0.1, we discard the new grounding and continue moving down the list. We stopaccumulating groundings once the scores fall to below 50% of the top score in the “keep” list, orwhen 10 groundings have been added to the “keep” list, whichever comes first. Figure 1 displays apictorial example of our grounding procedure.4Under review as a conference paper at ICLR 2017Once we have completed the grounding procedure, we are left with a small set of regions of interestin each image and caption spectrogram. We use the respective branches of our multimodal networkto compute embedding vectors for each grounding’s image crop and acoustic segment. We thenemployk-means clustering separately on the collection of image embedding vectors as well as thecollection of acoustic embedding vectors. The last step is to establish an affinity score between eachimage clusterIand each acoustic cluster A; we do so using the equationAffinity (I;A) =Xi2IXa2Ai>aPair(i;a) (2)where iis an image crop embedding vector, ais an acoustic segment embedding vector, andPair(i;a)is equal to 1 when iandabelong to the same grounding pair, and 0 otherwise. Afterclustering, we are left with a set of acoustic pattern clusters, a set of visual pattern clusters, and a setof linkages describing which acoustic clusters are associated with which image clusters. In the nextsection, we investigate the properties of these clusters in more detail.5 E XPERIMENTS AND ANALYSISWe trained our multimodal network on a set of 214,585 image/caption pairs, and vetted it with animage search (given caption, find image) and annotation (given image, find caption) task similar tothe one used in Harwath et al. (2016); Karpathy et al. (2014); Karpathy & Li (2015). 
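The clustering-and-linking step, including the affinity score of Equation 2, can be read as the following NumPy/scikit-learn sketch. It is our own illustration under assumed variable names and data layout, not the authors' code; rows of the two embedding matrices are assumed to be aligned by grounding pair.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_and_link(image_embs, audio_embs, k=500, seed=0):
    """image_embs, audio_embs: (N, d) arrays whose rows come from the same grounding pairs."""
    image_ids = KMeans(n_clusters=k, random_state=seed).fit_predict(image_embs)
    audio_ids = KMeans(n_clusters=k, random_state=seed).fit_predict(audio_embs)
    # Eq. (2): Affinity(I, A) accumulates i^T a only over (image crop, acoustic segment)
    # embeddings that belong to the same grounding pair.
    pair_scores = np.sum(image_embs * audio_embs, axis=1)
    affinity = np.zeros((k, k))
    for ic, ac, s in zip(image_ids, audio_ids, pair_scores):
        affinity[ic, ac] += s
    # For each image cluster, the most associated acoustic cluster (as used for the
    # image-cluster labels shown in Figure 3).
    best_audio_for_image = affinity.argmax(axis=1)
    return image_ids, audio_ids, affinity, best_audio_for_image
```

Because only embeddings that came from the same grounding pair contribute, the affinity matrix directly encodes which visual and acoustic clusters co-occur.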
The imageannotation and search recall scores on a 1,000 image/caption pair held-out test set are shown inTable 1, and are compared against the model architecture used in Harwath et al. (2016). We thenperformed the grounding and pattern clustering steps on the entire training dataset. This resulted ina total of 1,161,305 unique grounding pairs.In order to evaluate the acoustic pattern discovery and clustering, we wish to assign a label to eachcluster and cluster member, but this is not completely straightforward since each acoustic segmentmay capture part of a word, a whole word, multiple words, etc. Our strategy is to force-align theGoogle recognition hypothesis text to the audio, and then assign a label string to each acousticsegment based upon which words it overlaps in time. The alignments are created with the help of aKaldi (Povey et al., 2011) speech recognizer based on the standard WSJ recipe and trained using theGoogle ASR hypothesis as a proxy for the transcriptions. Any word whose duration is overlapped30% or more by the acoustic segment is included in the label string for the segment. We thenemploy a majority vote scheme to derive the overall cluster labels. When computing the purity of acluster, we count a cluster member as matching the cluster label as long as the overall cluster labelappears in the member’s label string. In other words, an acoustic segment overlapping the words “thelighthouse” would receive credit for matching the overall cluster label “lighthouse”. Several exampleclusters and a breakdown of the labels of their members are shown in Table 2. We investigated somesimple schemes for predicting highly pure clusters, and found that the empirical variance of thecluster members (average squared distance to the cluster centroid) was a good indicator. Figure 2displays a scatter plot of cluster purity weighted by the natural log of the cluster size against theempirical variance. Large, pure clusters are easily predicted by their low empirical variance, whilea high empirical variance is indicative of a garbage cluster.Ranking a set of k= 500 acoustic clusters by their variance, Table 3 displays some statistics for the50 lowest-variance clusters. We see that most of the clusters are very large and highly pure, and theirlabels reflect interesting object categories being identified by the neural network. We additionallycompute the coverage of each cluster by counting the total number of instances of the cluster labelanywhere in the training data, and then compute what fraction of those instances were capturedby the cluster. We notice many examples of high coverage clusters, e.g. the “skyscraper” clustercaptures 84% of all occurrences of the word “skyscraper” anywhere in the training data, while the“baseball” cluster captures 86% of all occurrences of the word “baseball”. This is quite impressivegiven the fact that no conventional speech recognition was employed, and neither the multimodalneural network nor the grounding algorithm had access to the text transcripts of the captions.To get an idea of the impact of the kparameter as well as a variance-based cluster pruning thresholdbased on Figure 2, we swept kfrom 250 to 2000 and computed a set of statistics shown in Table4. We compute the standard overall cluster purity evaluation metric in addition to the average cov-erage across clusters. The table shows the natural tradeoff between cluster purity and redundancy5Under review as a conference paper at ICLR 2017(indicated by the average cluster coverage) as kis increased. 
In all cases, the variance-based clus-ter pruning greatly increases both the overall purity and average cluster coverage metrics. We alsonotice that more unique cluster labels are discovered with a larger k.Next, we examine the image clusters. Figure 3 displays the 9 most central image crops for a setof 10 different image clusters, along with the majority-vote label of each image cluster’s associatedaudio cluster. In all cases, we see that the image crops are highly relevant to their audio cluster label.We include many more example image clusters in Appendix A.Finally, we wish to examine the semantic embedding space in more depth. We took the top 150clusters from the same k= 500 clustering run described in Table 3 and performed t-SNE (van derMaaten & Hinton, 2008) analysis on the cluster centroid vectors. We projected each centroid downto 2 dimensions and plotted their majority-vote labels in Figure 4. Immediately we see that differentclusters which capture the same label closely neighbor one another, indicating that distances in theembedding space do indeed carry information discriminative across word types (and suggesting thata more sophisticated clustering algorithm than k-means would perform better). More interestingly,we see that semantic information is also reflected in these distances. The cluster centroids for “lake,”“river,” “body,” “water,” “waterfall,” “pond,” and “pool” all form a tight meta-cluster, as do “restau-rant,” “store,” “shop,” and “shelves,” as well as “children,” “girl,” “woman,” and “man.” Many othersemantic meta-clusters can be seen in Figure 4, suggesting that the embedding space is capturinginformation that is highly discriminative both acoustically andsemantically.Table 1: Results for image search and annotation on the Places audio caption data (214k trainingpairs, 1k testing pairs). Recall is shown for the top 1, 5, and 10 hits. The model we use in thispaper is compared against the meanpool variant of the model architecture presented in Harwathet al. (2016). For both training and testing, the captions were truncated/zero-padded to 10 seconds.Search AnnotationModel R@1 R@5 R@10 R@1 R@5 R@10(Harwath et al., 2016) 0.090 0.261 0.372 0.098 0.266 0.352This work 0.112 0.312 0.431 0.120 0.307 0.438Figure 2: Scatter plot of audio cluster purityweighted by log cluster size against clustervariance for k= 500 (least-squares line su-perimposed).Word Count Word Countocean 2150 castle 766(silence) 127 (silence) 70the ocean 72 capital 39blue ocean 29 large castle 24body ocean 22 castles 23oceans 16 (noise) 21ocean water 16 council 13(noise) 15 stone castle 12of ocean 14 capitol 10oceanside 14 old castle 10Table 2: Examples of the breakdown ofword/phrase identities of several acoustic clusters6 C ONCLUSIONS AND FUTURE WORKIn this paper, we have demonstrated that a neural network trained to associate images with the wave-forms representing their spoken audio captions can successfully be applied to discover and clusteracoustic patterns representing words or short phrases in untranscribed audio data. An analogousprocedure can be applied to visual images to discover visual patterns, and then the two modali-6Under review as a conference paper at ICLR 2017sky grass sunset ocean rivercastle couch wooden lighthouse trainFigure 3: The 9 most central image crops from several image clusters, along with the majority-votelabel of their most associated acoustic pattern clusterTable 3: Top 50 clusters with k= 500 sorted by increasing variance. 
Legend: jCcjis acousticcluster size,jCijis associated image cluster size, Pur. is acoustic cluster purity, 2is acousticcluster variance, and Cov. is acoustic cluster coverage. A dash (-) indicates a cluster whose majoritylabel is silence.Trans jCcj jCij Pur.2Cov. Trans jCcj jCij Pur.2Cov.- 1059 3480 0.70 0.26 - snow 4331 3480 0.85 0.26 0.45desert 1936 2896 0.82 0.27 0.67 kitchen 3200 2990 0.88 0.28 0.76restaurant 1921 2536 0.89 0.29 0.71 mountain 4571 2768 0.86 0.30 0.38black 4369 2387 0.64 0.30 0.17 skyscraper 843 3205 0.84 0.30 0.84bridge 1654 2025 0.84 0.30 0.25 tree 5303 3758 0.90 0.30 0.16castle 1298 2887 0.72 0.31 0.74 bridge 2779 2025 0.81 0.32 0.41- 2349 2165 0.31 0.33 - ocean 2913 3505 0.87 0.33 0.71table 3765 2165 0.94 0.33 0.23 windmill 1458 3752 0.71 0.33 0.76window 1890 2795 0.85 0.34 0.21 river 2643 3204 0.76 0.35 0.62water 5868 3204 0.90 0.35 0.27 beach 1897 2964 0.79 0.35 0.64flower 3906 2587 0.92 0.35 0.67 wall 3158 3636 0.84 0.35 0.23sky 4306 6055 0.76 0.36 0.34 street 2602 2385 0.86 0.36 0.49golf course 1678 3864 0.44 0.36 0.63 field 3896 3261 0.74 0.36 0.37tree 4098 3758 0.89 0.36 0.13 lighthouse 1254 1518 0.61 0.36 0.83forest 1752 3431 0.80 0.37 0.56 church 2503 3140 0.86 0.37 0.72people 3624 2275 0.91 0.37 0.14 baseball 2777 1929 0.66 0.37 0.86field 2603 3922 0.74 0.37 0.25 car 3442 2118 0.79 0.38 0.27people 4074 2286 0.92 0.38 0.17 shower 1271 2206 0.74 0.38 0.82people walking 918 2224 0.63 0.38 0.25 wooden 3095 2723 0.63 0.38 0.28mountain 3464 3239 0.88 0.38 0.29 tree 3676 2393 0.89 0.39 0.11- 1976 3158 0.28 0.39 - snow 2521 3480 0.79 0.39 0.24water 3102 2948 0.90 0.39 0.14 rock 2897 2967 0.76 0.39 0.26- 2918 3459 0.08 0.39 - night 3027 3185 0.44 0.39 0.59station 2063 2083 0.85 0.39 0.62 chair 2589 2288 0.89 0.39 0.22building 6791 3450 0.89 0.40 0.21 city 2951 3190 0.67 0.40 0.50ties can be linked, allowing the network to learn e.g. that spoken instances of the word “train” areassociated with image regions containing trains. This is done without the use of a conventional au-tomatic speech recognition system and zero text transcriptions, and therefore is completely agnosticto the language in which the captions are spoken. Further, this is done in O(n)time with respectto the number of image/caption pairs, whereas previous state-of-the-art acoustic pattern discoveryalgorithms which leveraged acoustic data alone run in O(n2)time. We demonstrate the success ofour methodology on a large-scale dataset of over 214,000 image/caption pairs, comprising over 522hours of spoken audio data. We have shown that the shared multimodal embedding space learnedby our model is discriminative not only across visual object categories, but also acoustically andse-mantically across spoken words. To the best of our knowledge, this paper contains by far the largestscale speech pattern discovery experiment ever performed, as well as the first ever successful effort7Under review as a conference paper at ICLR 2017Table 4: Clustering statistics of the acoustic clusters for various values of kand different settingsof the variance-based cluster pruning threshold. 
Legend: jCj= number of clusters remaining afterpruning,jXj= number of datapoints after pruning, Pur = purity, jLj= number of unique clusterlabels, AC = average cluster coverage2<0:9 2<0:65kjCj jXj PurjLj ACjCj jXj PurjLj AC250 249 1081514 .364 149 .423 128 548866 .575 108 .463500 499 1097225 .396 242 .332 278 623159 .591 196 .375750 749 1101151 .409 308 .406 434 668771 .585 255 .4501000 999 1103391 .411 373 .336 622 710081 .568 318 .3821500 1496 1104631 .429 464 .316 971 750162 .566 413 .3662000 1992 1106418 .431 540 .237 1354 790492 .546 484 .271Figure 4: t-SNE analysis of the 150 lowest-variance audio pattern cluster centroids for k= 500 .Displayed is the majority-vote transcription of the each audio cluster. All clusters shown containeda minimum of 583 members and an average of 2482, with an average purity of .668.to learn the semantics of the discovered acoustic patterns by grounding them to patterns which arejointly discovered in another modality (images).The future directions in which this research could be taken are incredibly fertile. Because our methodcreates a segmentation as well as an alignment between images and their spoken captions, a genera-tive model could be trained using these alignments. The model could provide a spoken caption for anarbitrary image, or even synthesize an image given a spoken description. Modeling improvementsare also possible, aimed at the goal of incorporating both visual and acoustic localization into theneural network itself. Additionally, by collecting a second dataset of captions for our images in a dif-ferent language, such as Spanish, our model could be extended to learn the acoustic correspondencesfor a given object category in both languages. This paves the way for creating a speech-to-speechtranslation model not only with absolutely zero need for any sort of text transcriptions, but also withzero need for directly parallel linguistic data or manual human translations.REFERENCESAlessandro Bergamo, Loris Bazzani, Dragomir Anguelov, and Lorenzo Torresani. Self-taught object localiza-tion with deep networks. CoRR , abs/1409.3964, 2014. URL http://arxiv.org/abs/1409.3964 .Minsu Cho, Suha Kwak, Cordelia Schmid, and Jean Ponce. Unsupervised object discovery and localization inthe wild: Part-based matching with bottom-up region proposals. In Proceedings of CVPR , 2015.8Under review as a conference paper at ICLR 2017Ramazan Cinbis, Jakob Verbeek, and Cordelia Schmid. Weakly supervised object localization with multi-foldmultiple instance learning. In IEEE Transactions on Pattern Analysis and Machine Intelligence , 2016.Mark Dredze, Aren Jansen, Glen Coppersmith, and Kenneth Church. NLP on spoken documents without ASR.InProceedings of EMNLP , 2010.Hao Fang, Saurabh Gupta, Forrest Iandola, Srivastava Rupesh, Li Deng, Piotr Dollar, Jianfeng Gao, XiaodongHe, Margaret Mitchell, Platt John C., C. Lawrence Zitnick, and Geoffrey Zweig. From captions to visualconcepts and back. In Proceedings of CVPR , 2015.Andrea Frome, Greg S. Corrado, Jonathon Shlens, Samy Bengio, Jeffrey Dean, Marc’Aurelio Ranzato, andTomas Mikolov. Devise: A deep visual-semantic embedding model. In Proceedings of the Neural Informa-tion Processing Society , 2013.John Garofolo, Lori Lamel, William Fisher, Jonathan Fiscus, David Pallet, Nancy Dahlgren, and Victor Zue.The TIMIT acoustic-phonetic continuous speech corpus, 1993.Lieke Gelderloos and Grzegorz Chrupaa. From phonemes to images: levels of representation in a recurrentneural model of visually-grounded language learning. 
In arXiv:1610.03342 , 2016.Sharon Goldwater, Thomas Griffiths, and Mark Johnson. A Bayesian framework for word segmentation: ex-ploring the effects of context. In Cognition, vol. 112 pp.21-54 , 2009.David Harwath and James Glass. Deep multimodal semantic embeddings for speech and images. In Proceed-ings of the IEEE Workshop on Automatic Speech Recognition and Understanding , 2015.David Harwath, Timothy J. Hazen, and James Glass. Zero resource spoken audio corpus analysis. In Proceed-ings of ICASSP , 2012.David Harwath, Antonio Torralba, and James R. Glass. Unsupervised learning of spoken language with visualcontext. In Proceedings of NIPS , 2016.Aren Jansen and Benjamin Van Durme. Efficient spoken term discovery using randomized algorithms. InProceedings of IEEE Workshop on Automatic Speech Recognition and Understanding , 2011.Aren Jansen, Kenneth Church, and Hynek Hermansky. Toward spoken term discovery at scale with zeroresources. In Proceedings of Interspeech , 2010.Justin Johnson, Andrej Karpathy, and Li Fei-Fei. Densecap: Fully convolutional localization networks fordense captioning. In Proceedings of CVPR , 2016.Mark Johnson. Unsupervised word segmentation for sesotho using adaptor grammars. In Proceedings of ACLSIG on Computational Morphology and Phonology , 2008.Andrej Karpathy and Fei-Fei Li. Deep visual-semantic alignments for generating image descriptions. InProceedings of CVPR , 2015.Andrej Karpathy, Armand Joulin, and Fei-Fei Li. Deep fragment embeddings for bidirectional image sentencemapping. In Proceedings of the Neural Information Processing Society , 2014.Chia-Ying Lee and James Glass. A nonparametric Bayesian approach to acoustic model discovery. In Proceed-ings of the 2012 meeting of the Association for Computational Linguistics , 2012.Chia-Ying Lee, Timothy J. O’Donnell, and James Glass. Unsupervised lexicon discovery from acoustic input.InTransactions of the Association for Computational Linguistics , 2015.M. Paul Lewis, Gary F. Simon, and Charles D. Fennig. Ethnologue: Languages of the World, Nineteenthedition . SIL International. Online version: http://www.ethnologue.com, 2016.Lucas Ondel, Lukas Burget, and Jan Cernocky. Variational inference for acoustic unit discovery. In 5th Work-shop on Spoken Language Technology for Under-resourced Language , 2016.Alex Park and James Glass. Unsupervised pattern discovery in speech. In IEEE Transactions on Audio, Speech,and Language Processing vol. 16, no.1, pp. 186-197 , 2008.Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Han-nemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, Jan Silovsky, Georg Stemmer, and Karel Vesely. TheKaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Under-standing , 2011.Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition.CoRR , abs/1409.1556, 2014.Richard Socher and Fei-Fei Li. Connecting modalities: Semi-supervised segmentation and annotation of im-ages using unaligned text corpora. In Proceedings of CVPR , 2010.Richard Socher, Andrej Karpathy, Quoc V . Le, Christopher D. Manning, and Andrew Y . Ng. Grounded com-positional semantics for finding and describing images with sentences. In Transactions of the Associationfor Computational Linguistics , 2014.Laurens van der Maaten and Geoffrey Hinton. Visualizing high-dimensional data using t-sne. In Journal ofMachine Learning Research , 2008.Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dimitru Erhan. 
Show and tell: A neural image captiongenerator. In Proceedings of CVPR , 2015.Yaodong Zhang and James Glass. Unsupervised spoken keyword spotting via segmental DTW on Gaussianposteriorgrams. In Proceedings ASRU , 2009.Bolei Zhou, Agata Lapedriza, Jianxiong Xiao, Antonio Torralba, and Aude Oliva. Learning deep features forscene recognition using places database. In Proceedings of the Neural Information Processing Society , 2014.Boloi Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Object detectors emerge indeep scene CNNs. In Proceedings of ICLR , 2015.9Under review as a conference paper at ICLR 2017A A PPENDIX : ADDITIONAL VISUALIZATIONS OF IMAGE PATTERNCLUSTERSbeach cliff pool desert fieldchair table staircase statue stonechurch forest mountain skyscraper treeswaterfall windmills window city bridgeflowers man wall archway baseballboat shelves cockpit girl childrenbuilding rock kitchen plant hallway10
BypzQJLNg
Hy8X3aKee
ICLR.cc/2017/conference/-/paper150/official/review
{"title": "Review", "rating": "4: Ok but not good enough - rejection", "review": "The paper compare three representation learning algorithms over symbolized sequences. Experiments are executed on several prediction tasks. The approach is potentially very important but the proposed algorithm is rather trivial. Besides detailed analysis on hyper parameters are not described. \n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Deep Symbolic Representation Learning for Heterogeneous Time-series Classification
["Shengdong Zhang", "Soheil Bahrampour", "Naveen Ramakrishnan", "Mohak Shah"]
In this paper, we consider the problem of event classification with multi-variate time series data consisting of heterogeneous (continuous and categorical) variables. The complex temporal dependencies between the variables combined with sparsity of the data makes the event classification problem particularly challenging. Most state-of-art approaches address this either by designing hand-engineered features or breaking up the problem over homogeneous variates. In this work, we propose and compare three representation learning algorithms over symbolized sequences which enables classification of heterogeneous time-series data using a deep architecture. The proposed representations are trained jointly along with the rest of the network architecture in an end-to-end fashion that makes the learned features discriminative for the given task. Experiments on three real-world datasets demonstrate the effectiveness of the proposed approaches.
["heterogeneous", "classification", "problem", "variables", "data", "approaches", "deep symbolic representation", "event classification", "time series data"]
https://openreview.net/forum?id=Hy8X3aKee
https://openreview.net/pdf?id=Hy8X3aKee
https://openreview.net/forum?id=Hy8X3aKee&noteId=BypzQJLNg
Under review as a conference paper at ICLR 2017DEEPSYMBOLIC REPRESENTATION LEARNING FORHETEROGENEOUS TIME-SERIES CLASSIFICATIONShengdong Zhang1;2, Soheil Bahrampour1, Naveen Ramakrishnan1, Mohak Shah1;31Bosch Research and Technology Center, Palo Alto, CA2Simon Fraser University, Burnaby, BC3University of Illinois at Chicago, Chicago, ILsza75@sfu.ca, Soheil.Bahrampour@us.bosch.com ,Naveen.Ramakrishnan@us.bosch.com, Mohak.Shah@us.bosch.comABSTRACTIn this paper, we consider the problem of event classification with multi-variatetime series data consisting of heterogeneous (continuous and categorical) variables.The complex temporal dependencies between the variables combined with sparsityof the data makes the event classification problem particularly challenging. Moststate-of-art approaches address this either by designing hand-engineered features orbreaking up the problem over homogeneous variates. In this work, we propose andcompare three representation learning algorithms over symbolized sequences whichenables classification of heterogeneous time-series data using a deep architecture.The proposed representations are trained jointly along with the rest of the networkarchitecture in an end-to-end fashion that makes the learned features discriminativefor the given task. Experiments on three real-world datasets demonstrate theeffectiveness of the proposed approaches.1 I NTRODUCTIONRapid increase in connectivity of physical sensors and systems to the Internet is enabling largescale collection of time series data and system logs. Such temporal datasets enable applications likepredictive maintenance, service optimizations and efficiency improvements for physical assets. Atthe same time, these datasets also pose interesting research challenges such as complex dependenciesand heterogeneous nature of variables, non-uniform sampling of variables, sparsity, etc which furthercomplicates the process of feature extraction for data mining tasks. Moreover, the high dependenceof the feature extraction process on domain expertise makes the development of new data miningapplications cumbersome. This paper proposes a novel approach for feature discovery specifically fortemporal event classification problems such as failure prediction for heating systems.Feature extraction from time-series data for classification has been long studied (Mierswa & Morik,2005). For example, well-known Crest factor (Jayant & Noll, 1984) and Kurtosis method (Altmann,2004) extract statistical measures of the amplitude of time-series sensory data. Other popularalgorithms include feature extraction using frequency domain methods, such as power spectraldensity (Li et al., 2002), or time-frequency domain such as wavelet coefficients (Lu et al., 2014). Morerecent methods include wavelet synchrony (Mirowski et al., 2009), symbolic dynamic filtering (Gupta& Ray, 2007; Bahrampour et al., 2013) and sparse coding (Huang & Aviyente, 2006; Bahrampouret al., 2013). On the other hand, summary statistics such as count, occurrence rate, and duration havebeen used as features for event data (Murray et al., 2005).These feature extraction algorithms are usually performed as a pre-processing step before traininga classifier on the extracted features and thus are not guaranteed to be optimally discriminative fora given learning task. Several recent works have shown that better performance can be achievedwhen a feature extraction algorithm is jointly trained along with a classifier in an end-to-end fashion.For example, in Mairal et al. 
(2012); Bahrampour et al. (2016), dictionaries are trained jointlywith classifiers to extract discriminative sparse codes as feature. Recent successes of deep learningAll authors were with the Bosch Research and Technology Center at the time this work is done.1Under review as a conference paper at ICLR 2017methods (Goodfellow et al., 2016) on extracting discriminative features from raw data and achievingstate-of-the-art performance have boosted the effort for automatic feature discovery in several domainsincluding speech (Krizhevsky et al., 2012b), image (Krizhevsky et al., 2012a), and text (Sutskeveret al., 2014) data. In particular, it has been shown that recurrent neural networks (Elman, 1990) andits variants such as LSTMs (Hochreiter & Schmidhuber, 1997; Graves et al., 2013) are capable ofcapturing long-term time-dependency between input features and thus are well suited for featurediscovery from time-series data.While neural networks have also been used for event classification, these efforts have been mostlyfocused on either univariate signal (Hannun et al., 2014) or uniformly sampled multi-variate time-series data (Mirowski et al., 2009). In this paper, we focus on event classification task (and eventprediction task that can be reformulated as event classification), where the application data consistsof multi-variate, heterogeneous (categorical and continuous) and non-uniformly sampled time-seriesdata. This includes a wide variety of application domains such as sensory data for internet of things,health care, system logs from data center, etc. Following are the main contributions of the paper:We propose three representation learning algorithms for time-series classification. The pro-posed algorithms are formulated as embedding layers, which receive symbolized sequencesas their input. The embedding layer is then trained jointly with a deep learning architec-ture (such as convolutional or recurrent network) to automatically extract discriminatingrepresentations for the given classification task. The proposed algorithms differ in the waythey embed the symbolized data and are named as Word Embedding, Shared Character-wiseEmbedding, and Independent Character-wise Embedding.The deep learning architectures combined with the proposed algorithms provide a unifiedframework to handle heterogeneous time-series data which regularly occur in most sensordata mining applications. They uniformly map data of any type into a continuous space,which enables representation learning within the space. We will provide detailed discus-sions on the suitability of the proposed representations and their respective strengths andlimitations.We show that the proposed algorithms achieve state-of-the-art performance comparedto both a standard deep architecture without symbolization and also compared to otherclassification approaches with hand-engineered features from domain experts. This isshown with experimental results on three real-world applications including hard disk failureprediction, seizure prediction, and heating system fault prediction.2 S YMBOLIZATIONSymbolization has been widely used as first step for feature extraction on time-series data, providinga more compact representation and acting as a filter to remove noise. Moreover, symbolization can beused to deal with heterogeneity of the data where multi-variate time-series contain both categorical,ordinal, and continuous variables. 
For example, symbolized sequences are used in Bahrampour et al. (2013) to construct a probabilistic finite state automaton, and a measure on the corresponding state transition matrix is then used as the final feature which is fed into a classifier. However, this kind of features, which are extracted without explicitly optimizing for the given discriminative task, are typically suboptimal and are not guaranteed to be discriminative. Moreover, incorporating symbol-based representations and jointly training a deep network is non-trivial. In this work, we propose a unified architecture to embed the symbolized sequence as an input representation and jointly train a deep network to learn discriminative features. Symbolization for a discrete variable is trivial as the number of symbols is equal to the number of available categories. For continuous variables, this requires partitioning (also known as quantization) of the data given an alphabet size (or symbol set). The signal space of each continuous variable, approximated by the training set, can be partitioned into a finite number of cells that are labeled as symbols using a clustering algorithm such as uniform partitioning, maximum entropy partitioning (Rajagopalan & Ray, 2006), or the Jenks natural breaks algorithm (Jenks, 1967). The alphabet size for a continuous variable is a hyper-parameter which can be tuned by observing empirical marginal distributions.
Figures 1 and 2 illustrate the symbolization procedure in a simple example converting a synthetic 2-dimensional time series {Z_1, Z_2} into a sequence of representations. The histogram of continuous variable Z_1 contains two Gaussian-like distributions and thus is partitioned into 2 splits, i.e., for any realization of this variable in the time series, the value is replaced with symbol a_{Z_1} if it is less than 7, and with b_{Z_1} otherwise. For the discrete variable Z_2, assuming it has 5 categories Z_2 ∈ {C_1, C_2, C_3, C_4, C_5}, we assign symbol a_{Z_2} to C_1, symbol b_{Z_2} to C_2, and so on.
Figure 1: Partitioning of continuous variable Z_1 based on its histogram; the split separates the two clusters.
Figure 2: Heterogeneous time-series symbolization along with word embedding (WdE), shared character-wise embedding (SCE), and independent character-wise embedding (ICE).
3 REPRESENTATION LEARNING
In this section, we propose three methods to learn representations from symbolized data.
3.1 WORD EMBEDDING (WDE)
Symbolized sequences at each time step can be used to form a word by orderly collecting the symbol realizations of the variables. Thus, each time series is represented by a sequence of words where each word represents the state of the multi-variate input at a given time. In Figure 2, the word embedding vector (WdE) for word w is shown as v_w. Each word of the symbolized sequence is considered as a descriptor of a "pattern" at a given time step. Even though the process of generating words ignores dependency among variables, it is reasonable to hypothesize that as long as a dataset is large enough and representative patterns occur frequently, an embedding layer along with a deep architecture should be able to capture the dependencies among the "patterns". The set of words on the training data constructs a vocabulary. Rare words are excluded from the vocabulary and are all represented using an out-of-vocabulary (OOV) word.
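To make the symbolization and word-forming steps described above concrete, the sketch below quantizes a continuous variable with percentile-based splits (in the spirit of maximum entropy partitioning) and forms one word per time step from the per-variable symbols. This is a minimal illustration under assumed data and function names, not the authors' implementation.

```python
import numpy as np

def fit_splits(x, alphabet_size):
    """Equal-probability splits of the training values, e.g. 25/50/75th
    percentiles for alphabet size 4 (maximum-entropy style partitioning)."""
    qs = np.linspace(0, 100, alphabet_size + 1)[1:-1]
    return np.percentile(x, qs)

def symbolize(x, splits):
    """Map each continuous value to an integer symbol 0..alphabet_size-1."""
    return np.digitize(x, splits)

# Toy 2-variable series: Z1 continuous and bimodal, Z2 categorical with 5 categories.
rng = np.random.default_rng(0)
z1 = np.concatenate([rng.normal(2, 1, 500), rng.normal(9, 1, 500)])
z2 = rng.integers(0, 5, size=1000)            # categories C1..C5 already map to symbols 0..4

splits = fit_splits(z1, alphabet_size=2)       # one split roughly separating the two modes
s1 = symbolize(z1, splits)

# Word embedding (WdE) input: one word per time step by concatenating the symbols.
words = [f"{a}_{b}" for a, b in zip(s1, z2)]
vocab = {w: i for i, w in enumerate(sorted(set(words)))}   # rare words would map to OOV in practice
word_ids = np.array([vocab[w] for w in words])
```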
The OOV word is also used to represent words in the test set which are not present in the training data.
One natural choice for learning a representation of the symbolized sequence is to learn embeddings of the words within the vocabulary. This is done by learning an embedding matrix in R^{d x v}, where d is the embedding size and v is the vocabulary size (including the OOV word), similar to learning word embeddings in a natural language processing task (Dai & Le, 2015). One difference is that all words here have the same length as the number of input variables. Each multi-variate sample is thus represented using a d-dimensional vector. It should be noted that the embedding matrix is learned jointly along with the rest of the network to learn discriminative representations. It should also be noted that although the problem of having rare words in the training data is somewhat addressed by using OOV embedding vectors, this can limit the representation power if symbolization results in too many low-frequency words. Therefore, the quality of learning with word embeddings highly depends on the cardinality of the symbol set and the splits used for symbolization.
Figure 3: Empirical probability density of a continuous variable Z is shown along with the corresponding independent character-wise embeddings. The variable has 4 symbols which are initialized to maintain the ordered information. During training, and after each gradient update, the representations are sorted to enforce the order constraint.
3.2 SHARED CHARACTER-WISE EMBEDDING (SCE)
The proposed word-embedding representation can capture the relation among multiple input variables given a sufficient amount of training samples. However, as discussed in the previous section, word-embedding representation learning needs careful selection of the alphabet size to avoid having too many low-frequency words and is thus impractical in applications where the number of input time series is too large. In this section, we propose an alternative character-level representation, which we call Shared Character-wise Embedding (SCE), to address this limitation while still being able to capture the dependencies among the inputs.
Instead of forming words at each time step, we use a character embedding to represent each symbol, and each observation at a time step is represented by the sum of all embedding vectors at that time step. To formulate this, consider m-dimensional time-series data where the symbol size for the i-th input is s_i. Let e_i^l ∈ R^{s_i} be the one-hot representation for symbol l of the i-th input and v_i^l be the corresponding embedding vector. Also let the embedding matrix be [V_1 ... V_m] ∈ R^{d x Σ_i s_i}, where V_i ∈ R^{d x s_i} is the collection of the embedding vectors for the i-th input. Then, a given input sample x_1, x_2, ..., x_m is represented as Σ_i V_i e_i^{x_i} ∈ R^d, where x_i is the symbol realization of the i-th input. See Figure 2 for an example of the embedding (SCE) generated using this proposed representation. Since the representation of each word is constructed by summing the embeddings of individual characters, this method does not suffer from the unseen-word issue.
3.3 INDEPENDENT CHARACTER-WISE EMBEDDING (ICE)
Although SCE does not suffer from the low-frequency issue of WdE, neither of these two representations captures the ordinal information in the input time series.
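Before turning to ICE, a minimal sketch of the SCE representation defined above: each variable keeps its own embedding table V_i, and the per-time-step representation is the sum of the m looked-up vectors. The PyTorch class below is an illustrative reading of that formula with assumed shapes and names, not the authors' code.

```python
import torch
import torch.nn as nn

class SharedCharEmbedding(nn.Module):
    """SCE: sum of per-variable symbol embeddings, one d-dim vector per time step."""
    def __init__(self, symbol_sizes, d):
        super().__init__()
        # One embedding table V_i (s_i x d) per input variable.
        self.tables = nn.ModuleList([nn.Embedding(s, d) for s in symbol_sizes])

    def forward(self, symbols):
        # symbols: (batch, time, m) integer symbol ids, one column per variable.
        vecs = [tab(symbols[..., i]) for i, tab in enumerate(self.tables)]
        return torch.stack(vecs, dim=0).sum(dim=0)    # (batch, time, d)

sce = SharedCharEmbedding(symbol_sizes=[4, 5], d=2)   # e.g. Z1 with 4 symbols, Z2 with 5
x = torch.tensor([[[0, 4], [1, 1], [0, 0]]])          # batch of 1, 3 time steps, m = 2
print(sce(x).shape)                                   # torch.Size([1, 3, 2])
```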
In this section, we propose an Independent Character-wise Embedding (ICE) representation that maintains ordinal information for symbolized continuous variables and for categorical variables that have ordered information. To enforce the order constraint, we embed each symbol with a scalar value. Each input i ∈ {1, ..., m} is embedded independently, and the resulting representation for a given sample x_1, ..., x_m is [V_1 e_1^{x_1} ... V_m e_m^{x_m}]^T ∈ R^m, where x_i is the symbol realization of the i-th input and V_i is a row vector consisting of embedding scalars. The possible correlations among inputs are left to be captured by the layers that follow the embedding layer in the network. See Figure 2 for an example of generating the embedding vector (ICE) using the proposed algorithm.
The embedding scalars for each symbol are initialized to satisfy the ordered information, and during training we make sure that the learned representations satisfy the corresponding ordinal information, i.e., the embedding scalars of an ordinal variable are sorted after each gradient update. Figure 3 illustrates this process.
It should be noted that the embedding layer here has Σ_i s_i parameters to learn and thus is slimmer compared to the d Σ_i s_i parameters of the shared character-wise embedding proposed in the previous section. Both of the proposed character-wise representations have a more compact embedding layer than the word-embedding representation, which has d·v parameters, as the vocabulary size v is usually large.
4 PREDICTION ARCHITECTURE
4.1 FORMULATION OF PREDICTION PROBLEM
In this section, we formulate the event prediction problem as a classification problem. Let X = {X_1, X_2, ..., X_T} be a time-ordered collection of observation sequences (or clips) collected over T time steps, where X_t = {x_t^1, ..., x_t^{N_t}} represents the t-th sequence consisting of N_t consecutive measurements. As the notation indicates, it is not assumed that the number of observations within each time step is constant. Let {l_1, l_2, ..., l_T} be the corresponding sequence labels for X, where l_t ∈ {0, 1} encodes the presence of an event within the t-th time step, i.e., l_t = 1 indicates that a fault event is observed within the period of time in which input sequence X_t is collected. We define target labels y = {y_1, y_2, ..., y_T} where y_t = 1 if an event is observed in the next K time steps, i.e., Σ_{j=t+1}^{t+K} l_j > 0, and y_t = 0 otherwise. In this formulation, K indicates the prediction horizon, and y_t = 0 indicates that no event is observed in the next K time steps, referred to as the monitor window in this paper. The prediction task is then defined as predicting y_t given input X_t and its corresponding past measurements {X_{t-1}, X_{t-2}, ..., X_1}. Using the prediction labels y, the event prediction problem on time-series data is converted into a classic binary classification problem. Note that although the proposed formulation can in theory utilize all the past measurements for classification, we usually fix a window size of M past measurements to limit computational complexity. For instance, suppose that the X_t's are sensory data measurements of a physical system collected on the t-th day of its operation and let K = 7 and M = 3. Then the classification problem for X_t is to predict y_t, i.e., whether an event is going to be observed in the coming week of the physical system's operation, given the current and past three days of measurements.
4.2 TEMPORAL WEIGHTING FUNCTION
In rare event prediction tasks, the number of positive data samples, i.e., data corresponding to occurrences of a target event, is much smaller than that of negatives.
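Before describing the weighting scheme, the labeling rule of Section 4.1 can be made concrete: y_t = 1 exactly when at least one event falls in the monitor window t+1..t+K. The numpy sketch below is a direct illustration of that definition with hypothetical variable names, not the authors' code.

```python
import numpy as np

def make_targets(l, K):
    """y_t = 1 iff at least one event occurs in the monitor window t+1 .. t+K."""
    l = np.asarray(l)
    T = len(l)
    y = np.zeros(T, dtype=int)
    for t in range(T):
        y[t] = int(l[t + 1:t + 1 + K].sum() > 0)
    return y

l = [0, 0, 0, 1, 0, 1, 0, 0]          # events observed at steps 3 and 5
print(make_targets(l, K=3))           # -> [1 1 1 1 1 0 0 0]
```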
If not taken care of, this class imbalance problem causes the decision boundary of a classifier to be dragged toward the data space where negative samples are distributed, artificially increasing the overall accuracy while resulting in a low detection rate. This is a classic problem in binary classification, and it is common practice to associate a larger misclassification cost with positive samples to address this issue (Bishop, 2001). However, simply assigning identical larger weights to positive samples in our prediction formulation cannot emphasize the importance of temporal data close to a target event occurrence. We hypothesize that data collected closer to an event occurrence should be more indicative of the upcoming error than data collected much earlier. Therefore, we design the following weighting function to deal with the temporal importance:
w_t = Σ_{j=1}^{K} (K - j + 1) l_{t+j}   if y_t = 1,
w_t = 1                                 if y_t = 0.    (1)
This weighting function gives relatively smaller weights to data far from event occurrences compared to those which are closer. In addition to emphasizing temporal importance, it also deals with overlapping events. For example, suppose that two errors are observed at time samples t+1 and t+3 and the prediction horizon K is set to 5. Then input sample X_t is within the monitor windows of both events, and thus its weight is set to the higher value of w_t = (5 - 1 + 1) + (5 - 3 + 1) = 8, as a misclassification on this day may result in missing the prediction of two events. By weighting data samples in this way, a classifier is trained to adjust its decision boundary based on the importance information.
The above weight definition deals with temporal importance information for event prediction. We also need to re-adjust the weights to address the discussed class imbalance issue. After determining the weight using Eq. 1 for each training sample, we re-normalize all weights such that the total sum of weights of positive samples becomes equal to the total sum of weights of negative samples.
The weighted cross-entropy loss function is used as the optimization criterion to find the parameters of our model. For a given input X_t with weight w_t, target label y_t, and predicted label ŷ_t, the loss function is defined as:
l(y_t, ŷ_t) = -w_t (y_t log ŷ_t + (1 - y_t) log(1 - ŷ_t)).    (2)
4.3 NETWORK ARCHITECTURE
Each of the proposed embedding layers can be used as the first layer of a deep architecture. The embedding layer along with the rest of the architecture is learned jointly to optimize a discriminative task, similar to natural language processing tasks (Gal, 2015). Thus, the embedding layer is trained to generate discriminative representations. The specific architectures are further discussed in the results section for each experiment.
5 RESULTS
5.1 HARD-DISK FAILURE PREDICTION
Backblaze data center has released its hard drive datasets containing daily snapshots of S.M.A.R.T (Self-Monitoring, Analysis and Reporting Technology) statistics for each operational hard drive from 2013 to June 2016. The data of each hard drive are recorded until it fails. In this paper, the 2015 subset of the data on drive model "ST3000DM001" is used. As far as we know, no other prediction algorithm has been published on the data set of this model, and thus we have generated our own training and test split. The data consists of several models of hard drives. There are 59,340 hard drives, out of which 586 (less than 1%) had failed. The data of the following 7 columns of S.M.A.R.T raw statistics are used: 5, 183, 184, 187, 188, 193, 197.
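The temporal weighting of Eq. 1 and the weighted cross-entropy of Eq. 2 above can be sketched as follows, including the renormalization step that equalizes the total weight of positive and negative samples. This is an illustrative numpy version under assumed names, not the authors' implementation.

```python
import numpy as np

def temporal_weights(l, y, K):
    """Eq. 1: w_t = sum_{j=1..K} (K - j + 1) * l_{t+j} if y_t = 1, else 1."""
    l, y = np.asarray(l, float), np.asarray(y)
    w = np.ones(len(y))
    for t in np.flatnonzero(y == 1):
        js = np.arange(1, K + 1)
        valid = t + js < len(l)                      # ignore steps past the end of the series
        w[t] = np.sum((K - js[valid] + 1) * l[t + js[valid]])
    # Renormalize so positives and negatives carry equal total weight.
    pos, neg = w[y == 1].sum(), w[y == 0].sum()
    if pos > 0:
        w[y == 1] *= neg / pos
    return w

def weighted_bce(y, y_hat, w):
    """Eq. 2: l(y_t, y_hat_t) = -w_t (y_t log y_hat_t + (1 - y_t) log(1 - y_hat_t))."""
    y, y_hat, w = map(np.asarray, (y, y_hat, w))
    return -(w * (y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))).mean()
```

For the worked example in the text (events at t+1 and t+3 with K = 5), the un-normalized weight computed this way is (5-1+1) + (5-3+1) = 8, matching Eq. 1.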
These columns correspondsto accumulative count values of different monitored features. We also added absolute differencebetween count values of consecutive days for each input column resulting in overall 14columns.The data has missing values which are imputed using linear interpolation. The task is formulated topredict whether there is a failure in the next K= 3days given current and past 4 days data.The dataset is randomly split into a training set (containing 486positives) and a test set (containing100positives) using hard disk serial number and without loosing the temporal information. Thustraining and test set do not share any hard disk. For the experiment using word embedding (WdE),the data are symbolized with splits determined by observation of empirical histogram of everyvariable. The vocabulary is constructed using all words that have frequency of more than one. Theremaining rare words are all mapped to the OOV word resulting into a vocabulary size of 509. Forthe experiments using shared and independent character-wise embedding, dubbed as SCE and ICErespectively, partitioning is done using maximum entropy partitioning (Bahrampour et al., 2013)with the alphabet size of 4, i.e. the first split is at the first 25-th percentile, the second split is at the50-th percentile, and so on. The size of the embedding for WdE and SCE are selected as 16 and2, respectively, using cross validation. Each of the proposed embedding layers is then followed byan LSTM (Hochreiter & Schmidhuber, 1997) layer with 8 hidden units and a fully connected layerfor binary classification. Temporal weighting is not used here as it was not seem to be effective onthis dataset, but cost-sensitive formulation is used to deal with this imbalanced dataset. As baselinemethods, we also provided the results using logistic regression classification (LR), random forest(RF), and LSTM trained on normalized raw data (without symbolization). For LR and RF, the fivedays input data are concatenated to form the feature vector. The RF algorithm consists of 1000decision trees. The LSTM networks are trained using ADAM algorithm (Kingma & Ba, 2014) withdefault learning rate of 0:001. Tabel 1 summarizes the performance on test data set. We reported thebalanced accuracy, arithmetic mean of the true positive and true negative rates, the area under curve(AUC) of ROC as performance metrics. The balanced accuracy numbers are generated by picking athreshold on ROC that maximizes true prediction while maintaining a false positive rate of maximum0:05. As it is seen, the proposed character-level embedding algorithms result in best performances. Itshould be noted that the input data is summary statistics, and not raw time-series data, and thus asseen the LR and LSTM algorithms, without symbolization, perform reasonably well.5.2 S EIZURE PREDICTIONWe have also compared the performance of the proposed algorithms for seizure prediction. The dataset is from the Kaggle American Epilepsy Society Seizure Prediction Challenge 2014 and consistsof intracranial EEG (iEEG) clips from 16 channels collected from dogs and human. We used thedata collected from dogs in our experiments, not including data from “Dog _5" as the data fromone channel is missing. We generated the test sets from training data by randomly selecting 20%of one-hour clips. The length of each clip is 240;000. We have down-sampled them to 1;200forefficient processing using recurrent networks. 
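As a concrete illustration of the models used in Section 5.1 above (an embedding layer followed by an LSTM with 8 hidden units and a binary classifier, with WdE embedding size 16 over a 509-word vocabulary), the following is a minimal PyTorch sketch. Layer names and the training details are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class WdELSTM(nn.Module):
    """Word-embedding input -> LSTM -> sigmoid classifier for event prediction."""
    def __init__(self, vocab_size, d=16, hidden=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d)      # one row per word, incl. the OOV word
        self.lstm = nn.LSTM(d, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, word_ids):                      # (batch, time) integer word ids
        h, _ = self.lstm(self.embed(word_ids))        # (batch, time, hidden)
        return torch.sigmoid(self.out(h[:, -1]))      # probability of an event ahead

model = WdELSTM(vocab_size=509)                       # 509-word vocabulary as in Sec. 5.1
probs = model(torch.randint(0, 509, (4, 5)))          # 4 samples, 5 days of words each
loss = nn.functional.binary_cross_entropy(probs.squeeze(1), torch.ones(4))
```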
Five-fold cross validation is used to report the results.
Table 1: Performance comparison of fault prediction methods on the Backblaze reliability dataset. Baseline methods include random forest (RF), logistic regression classifier (LR), and LSTM, trained without symbolization. The three proposed embedding representations on symbolized data are WdE-LSTM, SCE-LSTM, and ICE-LSTM.
Models     | Balanced Accuracy | AUC of ROC
RF         | 0.803             | 0.804
LR         | 0.846             | 0.851
LSTM       | 0.832             | 0.865
WdE-LSTM   | 0.834             | 0.812
SCE-LSTM   | 0.855             | 0.841
ICE-LSTM   | 0.835             | 0.893
Table 2: Performance comparison of the iRNN on raw data as well as the iRNN using the proposed character-level embedding methods on symbolized data for seizure prediction.
Models     | Balanced Accuracy | AUC of ROC
iRNN       | 0.669             | 0.69
SCE-iRNN   | 0.761             | 0.811
ICE-iRNN   | 0.77              | 0.818
Table 3: Performance comparison of LR and LSTM on hand-designed features as well as the results generated using LSTM with the proposed embedding methods on symbolized data for fault prediction on the thermo-technology dataset.
Models     | Balanced Accuracy | AUC of ROC
LR         | 0.685             | 0.716
LSTM       | 0.735             | 0.766
WdE-LSTM   | 0.729             | 0.759
SCE-LSTM   | 0.7               | 0.697
ICE-LSTM   | 0.733             | 0.769
For the WdE algorithm, we observed that 90% of the total words generated, using an alphabet size of 4, have a frequency of one; these are all mapped to OOV, which resulted in poor performance. We also observed that using a smaller alphabet size for WdE is not helpful, as it results in too much loss of information, and thus the performance of the WdE algorithm is not reported here. For the character-level embedding algorithms, maximum entropy partitioning is used with an alphabet size of 50. The network used here consists of a one-dimensional convolutional layer with 16 filters, filter length of 3, and stride of 1, followed by a pooling layer of size 2 and stride 2 and an iRNN layer (Le et al., 2015) with 32 hidden units. We observed better results using the iRNN than the LSTM and thus we report the performance using the iRNN. We have also reported the performance of the same network on raw EEG data. The performance of these methods is summarized in Table 2. As can be seen, the proposed ICE embedding resulted in the best performance.
5.3 HEATING SYSTEM FAILURE PREDICTION
We also applied our method to a large internal dataset containing sensor information from thermo-technology heating systems. This dataset contains 132,755 time series of 20 variables, where each time series is data collected within one day. Nine of the variables are continuous and the remaining 11 variables are categorical. The task is to predict whether a heating system will have a failure in the coming week. The dataset is highly imbalanced, where more than 99% of the data have no fault in the next seven days. After symbolizing the training data, for the experiment using word embedding, the words that have a relative frequency of less than 1% are considered as OOV words. The averaged length of each sequence is 6,000. The embedding dimensions for the WdE and SCE algorithms are both chosen as 20 using a validation set. The network architecture includes an LSTM layer with 15 hidden units and a fully connected layer of dimension 50, which is followed by the final fully connected layer for binary classification. The same model architecture was used for the experiments with shared and independent character-wise embedding.
A simple trick is used to increase the use of GPU parallel computing power during the training phase, due to the large size of the training samples.
For a given training time-series with Twords,instead of sequentially feeding entire samples to the network, time-series is first divided into Msub-sequences of maximum lengthTMwhere each of these sub-sequences are processed independentlyand in-parallel. A max-pooling layer is then used on these feature vectors to get the final featurevector which represents the entire time-series. The generated feature is fed into a feed-forward neuralnetwork with sigmoid activation function to generate the predictions for the binary classificationtask. We call the sequence division technique as sequence chopping. Even though training an LSTMwith this technique sacrifices temporal dependency longer thanTMtime steps, we have observedthat by selecting a suitable chopping size, we can achieve competitive results and at the same timesignificant speed-up of the training procedure. The performance of the model having a similar LSTMarchitecture which is trained on 119 hand-engineered features is reported in Table 3. It shouldbe noted that the hand engineered features have evolved over years of domain expertise and assetmonitoring in the field. The results indicates that our methods results in competitive performancewithout the need for the costly process of hand-designing features.6 CONCLUSIONSWe proposed three embedding algorithms on symbolized input sequence, namely WdE, SCE, andICE, for event classification on heterogeneous time-series data. The proposed methods enable feedingsymbolized time-series directly into a deep network and learn a discriminative representation in anend-to-end fashion which is optimized for the given task. The experimental results on three real-worlddatasets demonstrate the effectiveness of the proposed algorithms, removing the need to performcostly and sub-optimal process of hand-engineering features for time-series classification.REFERENCESJürgen Altmann. Acoustic and seismic signals of heavy military vehicles for co-operative verification.Journal of Sound and Vibration , 273(4):713–740, 2004.Soheil Bahrampour, Asok Ray, Soumalya Sarkar, Thyagaraju Damarla, and Nasser M. Nasrabadi.Performance comparison of feature extraction algorithms for target detection and classification.Pattern Recognition Letters , 34(16):2126 – 2134, 2013.Soheil Bahrampour, Nasser M Nasrabadi, Asok Ray, and William Kenneth Jenkins. Multimodaltask-driven dictionary learning for image classification. IEEE Trans. on Image Processing , 25(1):24–38, 2016.CM Bishop. Bishop pattern recognition and machine learning, 2001.Andrew M Dai and Quoc V Le. Semi-supervised sequence learning. In Advances in NeuralInformation Processing Systems , pp. 3079–3087, 2015.Jeffrey L Elman. Finding structure in time. Cognitive science , 14(2):179–211, 1990.Y . Gal. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks. ArXiv ,2015.Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. Book in preparation for MITPress, 2016. URL http://www.deeplearningbook.org .Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrentneural networks. In 2013 IEEE international conference on acoustics, speech and signal processing ,pp. 6645–6649. IEEE, 2013.Shalabh Gupta and Asok Ray. Symbolic dynamic filtering for data-driven pattern recognition. Patternrecognition: theory and application , pp. 
17–71, 2007.8Under review as a conference paper at ICLR 2017Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger,Sanjeev Satheesh, Shubho Sengupta, Adam Coates, et al. Deep speech: Scaling up end-to-endspeech recognition. arXiv preprint arXiv:1412.5567 , 2014.Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation , 9(8):1735–1780, 1997.Ke Huang and Selin Aviyente. Sparse representation for signal classification. In Advances in neuralinformation processing systems , pp. 609–616, 2006.N. S. Jayant and Peter Noll. Digital Coding of Waveforms, Principles and Applications to Speechand Video , pp. 688. Prentice-Hall, 1984.George F. Jenks. The data model concept in statistical mapping. International yearbook of cartogra-phy, 7(1), 1967.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980 , 2014.Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolu-tional neural networks. In Advances in neural information processing systems , pp. 1097–1105,2012a.Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolu-tional neural networks. In Advances in neural information processing systems , pp. 1097–1105,2012b.Quoc V Le, Navdeep Jaitly, and Geoffrey E Hinton. A simple way to initialize recurrent networks ofrectified linear units. arXiv preprint arXiv:1504.00941 , 2015.Dan Li, Kerry D Wong, Yu Hen Hu, and Akbar M Sayeed. Detection, classification, and tracking oftargets. IEEE signal processing magazine , 19(2):17–29, 2002.Zhiyuan Lu, Xiang Chen, Qiang Li, Xu Zhang, and Ping Zhou. A hand gesture recognition frameworkand wearable gesture-based interaction prototype for mobile devices. IEEE Transactions onHuman-Machine Systems , 44(2):293–299, 2014.Julien Mairal, Francis Bach, and Jean Ponce. Task-driven dictionary learning. IEEE Transactions onPattern Analysis and Machine Intelligence , 34(4):791–804, 2012.Ingo Mierswa and Katharina Morik. Automatic feature extraction for classifying audio data. Machinelearning , 58(2-3):127–149, 2005.Piotr Mirowski, Deepak Madhavan, Yann LeCun, and Ruben Kuzniecky. Classification of patterns ofeeg synchronization for seizure prediction. Clinical neurophysiology , 120(11):1927–1940, 2009.Joseph F Murray, Gordon F Hughes, and Kenneth Kreutz-Delgado. Machine learning methods forpredicting failures in hard drives: A multiple-instance application. Journal of Machine LearningResearch , 6(May):783–816, 2005.Venkatesh Rajagopalan and Asok Ray. Symbolic time series analysis via wavelet-based partitioning.Signal Processing , 86(11):3309 – 3320, 2006.Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks.InAdvances in neural information processing systems , pp. 3104–3112, 2014.9Under review as a conference paper at ICLR 20177 A PPENDIXTable 4: Number of trainable parameters for the three proposed embedding layers as well as totalparameters of the network used in the three studied application. For the WdE on heating systemdata, an approximate number is provided as the vocabulary size is dependent on the training set usedamong the three cross-validations splits.Hard-Disk data Seizure data Heating System dataMethod#. of embed.parameters#. of totalparameters#. of embed.parameters#. of totalparameters#. of embed.parameters#. of totalparametersWdE 8144 8953 N/A N/A 713000715000SCE 112 501 2400 3569 1335 4046ICE 56 815 80 2465 69 310010
HJ1KdtWVe
Hy8X3aKee
ICLR.cc/2017/conference/-/paper150/official/review
{"title": "Interesting approach for sequence quantization and embedding", "rating": "5: Marginally below acceptance threshold", "review": "In absence of authors' responses, the rating is maintained.\n\n---\n\nThis paper introduces an approach for learning predictive time series models that can handle heterogenous multivariate sequence. The first step is in three possible ways to perform embedding of the d-dimensional sequences into d-character words, or a sum of d character embeddings, or a concatenation of d character embeddings. The embedding layer is the first layer of a deep architecture such as LSTM. The models are then trained to perform event prediction at a fixed horizon, with temporal weighting, and applied to hard disk or heating system failures or seizures.\n\nThe approach is interesting and the results seem to outperform an LSTM baseline, but need additional clarification.\n\nThe experimental section on seizure prediction is very short and would need to be considerably extended, in an appendix. What are the results obtained using LSTM vs. RNN? What is the state-of-the-art on that dataset? Given that EEG data contain mostly frequential information, how is this properly handled in per-sample embeddings?\n\nPlease also extend your reference and previous work section to include PixelRNN as well as: \n* van den Oord, et al. (2016)\nWaveNet: A Generative Model for Raw Audio\narXiv 1609.03499\n* Huang et al. (2013)\n\"Learning deep structured semantic models for web search using clickthrough data\"\nCIKM\nIn the latter paper, the authors embedded 3-gram hashes of the input sequences (e.g., text), which is somewhat similar to a time-delay embedding of the input sequence.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Deep Symbolic Representation Learning for Heterogeneous Time-series Classification
["Shengdong Zhang", "Soheil Bahrampour", "Naveen Ramakrishnan", "Mohak Shah"]
In this paper, we consider the problem of event classification with multi-variate time series data consisting of heterogeneous (continuous and categorical) variables. The complex temporal dependencies between the variables combined with sparsity of the data makes the event classification problem particularly challenging. Most state-of-art approaches address this either by designing hand-engineered features or breaking up the problem over homogeneous variates. In this work, we propose and compare three representation learning algorithms over symbolized sequences which enables classification of heterogeneous time-series data using a deep architecture. The proposed representations are trained jointly along with the rest of the network architecture in an end-to-end fashion that makes the learned features discriminative for the given task. Experiments on three real-world datasets demonstrate the effectiveness of the proposed approaches.
["heterogeneous", "classification", "problem", "variables", "data", "approaches", "deep symbolic representation", "event classification", "time series data"]
https://openreview.net/forum?id=Hy8X3aKee
https://openreview.net/pdf?id=Hy8X3aKee
https://openreview.net/forum?id=Hy8X3aKee&noteId=HJ1KdtWVe
HkVGxmGVx
Hy8X3aKee
ICLR.cc/2017/conference/-/paper150/official/review
{"title": "Unique angle for modeling heterogeneous sequences", "rating": "3: Clear rejection", "review": "Because the authors provided no further responses to reviewer feedback, I maintained my original review score.\n\n-----\n\nThis paper takes a unique approach to the modeling of heterogeneous sequence data. They first symbolize continuous inputs using a previously described approach (histograms or maximum entropy), the result being a multichannel discrete sequence (of symbolized time series or originally categorical data) of \"characters.\" They then investigate three different approaches to learning an embedding of the characters at each time step (which can be thought of as a \"word\"):\n1) Concatenate characters into a \"word\" and then apply standard lookup-based embeddings from language modeling (WDE)\n2) Embed each character independently and then sum over the embeddings (SCE)\n3) Embed each character as a scalar and concatenate the scalar embeddings (ICE)\nThe resulting embeddings can be used as inputs to any architecture, e.g., LSTM. The paper applies these methods primarily to event detection tasks, such as hard drive failures and seizures in EEG data. Empirical results largely suggest the a recurrent model combined with symbolization/embedding outperforms a comparable recurrent model applied to raw data. Results are inconclusive as to which embedding layer works best.\n\nStrengths:\n- The different embedding approaches, while simple, are designed to tackle a very interesting problem where the input consists of multivariate discrete sequences, which makes it different from standard language modeling and related domains. The proposed approaches offer several different interesting perspectives on how to approach this problem.\n- The empirical results suggest that symbolizing the continuous input space can improve results for some problems. This is an interesting possibility as it enables the direct application of a variety of language modeling tools (e.g., embeddings).\n\nWeaknesses:\n- The LSTMs (one layer each of 8, 16, and 15 cells, respectively) used in the three experiments sound *very* under capacity given the complexity of the tasks and the sizes of the data sets (tens to hundreds of thousands of sequences). That might explain both the relatively small gap between the LSTMs and logistic regression *and* the improvement of the embedding-based LSTMs. Hypothetically, if quantizing the inputs is really useful, the raw data LSTMs should be able to learn this transformation, but if they are under capacity, they might not be able to dos. What is more, using the same architecture (# layers, # units, etc.) for very different kinds of inputs (raw, WdE, SCE, ICE, hand-engineered features) is poor methodology. Obviously, hyperparameters should be tuned independently for each type of input.\n- The experiments omit obvious baselines, such as trying to directly learn an embedding of the continuous inputs.\n- The experimental results offer an incomplete, mixed conclusion. First, no one embedding approach performs best across all tasks and metrics, and the authors offer no insights into why this might be. 
Second, the current set of experiments are not sufficiently thorough to conclude that quantization and embedding is superior to working with the raw data.\n- The temporal weighting section appears out of place: it is unrelated to the core of the paper (quantizing and embedding continuous inputs), and there are no experiments to demonstrate its impact on performance.\n- The paper omits a large number of related works: anything by Eamonn Keogh's lab (e.g., Symbolic Aggregate approXimation or SAX), work on modifying loss functions for RNN classifiers (Dai and Le. Semi-supervised sequence learning. NIPS 2015; Lipton and Kale, et al. Learning to Diagnose with LSTM Recurrent Neural Networks. ICLR 2016), work on embedding non-traditional discrete sequential inputs (Choi, et al. Multi-layer Representation Learning for Medical Concepts. KDD 2016).\n\nThis is an interesting direction for research on time series modeling with neural nets, and the current work is a good first step. The authors need to perform more thorough experiments to test their hypotheses (i.e., that embedding helps performance). My intuition is that a continuous embedding layer + proper hyperparameter tuning will work just as well. If quantization proves to be beneficial, then I encourage them to pursue some direction that eliminates the need for ad hoc quantization, perhaps some kind of differentiable clustering layer?", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Deep Symbolic Representation Learning for Heterogeneous Time-series Classification
["Shengdong Zhang", "Soheil Bahrampour", "Naveen Ramakrishnan", "Mohak Shah"]
In this paper, we consider the problem of event classification with multi-variate time series data consisting of heterogeneous (continuous and categorical) variables. The complex temporal dependencies between the variables combined with sparsity of the data makes the event classification problem particularly challenging. Most state-of-art approaches address this either by designing hand-engineered features or breaking up the problem over homogeneous variates. In this work, we propose and compare three representation learning algorithms over symbolized sequences which enables classification of heterogeneous time-series data using a deep architecture. The proposed representations are trained jointly along with the rest of the network architecture in an end-to-end fashion that makes the learned features discriminative for the given task. Experiments on three real-world datasets demonstrate the effectiveness of the proposed approaches.
["heterogeneous", "classification", "problem", "variables", "data", "approaches", "deep symbolic representation", "event classification", "time series data"]
https://openreview.net/forum?id=Hy8X3aKee
https://openreview.net/pdf?id=Hy8X3aKee
https://openreview.net/forum?id=Hy8X3aKee&noteId=HkVGxmGVx
Under review as a conference paper at ICLR 2017DEEPSYMBOLIC REPRESENTATION LEARNING FORHETEROGENEOUS TIME-SERIES CLASSIFICATIONShengdong Zhang1;2, Soheil Bahrampour1, Naveen Ramakrishnan1, Mohak Shah1;31Bosch Research and Technology Center, Palo Alto, CA2Simon Fraser University, Burnaby, BC3University of Illinois at Chicago, Chicago, ILsza75@sfu.ca, Soheil.Bahrampour@us.bosch.com ,Naveen.Ramakrishnan@us.bosch.com, Mohak.Shah@us.bosch.comABSTRACTIn this paper, we consider the problem of event classification with multi-variatetime series data consisting of heterogeneous (continuous and categorical) variables.The complex temporal dependencies between the variables combined with sparsityof the data makes the event classification problem particularly challenging. Moststate-of-art approaches address this either by designing hand-engineered features orbreaking up the problem over homogeneous variates. In this work, we propose andcompare three representation learning algorithms over symbolized sequences whichenables classification of heterogeneous time-series data using a deep architecture.The proposed representations are trained jointly along with the rest of the networkarchitecture in an end-to-end fashion that makes the learned features discriminativefor the given task. Experiments on three real-world datasets demonstrate theeffectiveness of the proposed approaches.1 I NTRODUCTIONRapid increase in connectivity of physical sensors and systems to the Internet is enabling largescale collection of time series data and system logs. Such temporal datasets enable applications likepredictive maintenance, service optimizations and efficiency improvements for physical assets. Atthe same time, these datasets also pose interesting research challenges such as complex dependenciesand heterogeneous nature of variables, non-uniform sampling of variables, sparsity, etc which furthercomplicates the process of feature extraction for data mining tasks. Moreover, the high dependenceof the feature extraction process on domain expertise makes the development of new data miningapplications cumbersome. This paper proposes a novel approach for feature discovery specifically fortemporal event classification problems such as failure prediction for heating systems.Feature extraction from time-series data for classification has been long studied (Mierswa & Morik,2005). For example, well-known Crest factor (Jayant & Noll, 1984) and Kurtosis method (Altmann,2004) extract statistical measures of the amplitude of time-series sensory data. Other popularalgorithms include feature extraction using frequency domain methods, such as power spectraldensity (Li et al., 2002), or time-frequency domain such as wavelet coefficients (Lu et al., 2014). Morerecent methods include wavelet synchrony (Mirowski et al., 2009), symbolic dynamic filtering (Gupta& Ray, 2007; Bahrampour et al., 2013) and sparse coding (Huang & Aviyente, 2006; Bahrampouret al., 2013). On the other hand, summary statistics such as count, occurrence rate, and duration havebeen used as features for event data (Murray et al., 2005).These feature extraction algorithms are usually performed as a pre-processing step before traininga classifier on the extracted features and thus are not guaranteed to be optimally discriminative fora given learning task. Several recent works have shown that better performance can be achievedwhen a feature extraction algorithm is jointly trained along with a classifier in an end-to-end fashion.For example, in Mairal et al. 
(2012); Bahrampour et al. (2016), dictionaries are trained jointlywith classifiers to extract discriminative sparse codes as feature. Recent successes of deep learningAll authors were with the Bosch Research and Technology Center at the time this work is done.1Under review as a conference paper at ICLR 2017methods (Goodfellow et al., 2016) on extracting discriminative features from raw data and achievingstate-of-the-art performance have boosted the effort for automatic feature discovery in several domainsincluding speech (Krizhevsky et al., 2012b), image (Krizhevsky et al., 2012a), and text (Sutskeveret al., 2014) data. In particular, it has been shown that recurrent neural networks (Elman, 1990) andits variants such as LSTMs (Hochreiter & Schmidhuber, 1997; Graves et al., 2013) are capable ofcapturing long-term time-dependency between input features and thus are well suited for featurediscovery from time-series data.While neural networks have also been used for event classification, these efforts have been mostlyfocused on either univariate signal (Hannun et al., 2014) or uniformly sampled multi-variate time-series data (Mirowski et al., 2009). In this paper, we focus on event classification task (and eventprediction task that can be reformulated as event classification), where the application data consistsof multi-variate, heterogeneous (categorical and continuous) and non-uniformly sampled time-seriesdata. This includes a wide variety of application domains such as sensory data for internet of things,health care, system logs from data center, etc. Following are the main contributions of the paper:We propose three representation learning algorithms for time-series classification. The pro-posed algorithms are formulated as embedding layers, which receive symbolized sequencesas their input. The embedding layer is then trained jointly with a deep learning architec-ture (such as convolutional or recurrent network) to automatically extract discriminatingrepresentations for the given classification task. The proposed algorithms differ in the waythey embed the symbolized data and are named as Word Embedding, Shared Character-wiseEmbedding, and Independent Character-wise Embedding.The deep learning architectures combined with the proposed algorithms provide a unifiedframework to handle heterogeneous time-series data which regularly occur in most sensordata mining applications. They uniformly map data of any type into a continuous space,which enables representation learning within the space. We will provide detailed discus-sions on the suitability of the proposed representations and their respective strengths andlimitations.We show that the proposed algorithms achieve state-of-the-art performance comparedto both a standard deep architecture without symbolization and also compared to otherclassification approaches with hand-engineered features from domain experts. This isshown with experimental results on three real-world applications including hard disk failureprediction, seizure prediction, and heating system fault prediction.2 S YMBOLIZATIONSymbolization has been widely used as first step for feature extraction on time-series data, providinga more compact representation and acting as a filter to remove noise. Moreover, symbolization can beused to deal with heterogeneity of the data where multi-variate time-series contain both categorical,ordinal, and continuous variables. 
For example, symbolized sequences are used in Bahrampour et al. (2013) to construct a probabilistic finite state automaton, and a measure on the corresponding state transition matrix is then used as the final feature which is fed into a classifier. However, this kind of feature, which is extracted without explicitly optimizing for the given discriminative task, is typically suboptimal and is not guaranteed to be discriminative. Moreover, incorporating symbol-based representations and jointly training a deep network is non-trivial. In this work, we propose a unified architecture to embed the symbolized sequence as an input representation and jointly train a deep network to learn discriminative features. Symbolization for a discrete variable is trivial as the number of symbols is equal to the number of available categories. For continuous variables, this requires partitioning (also known as quantization) of the data given an alphabet size (or symbol set). The signal space for each continuous variable, approximated by the training set, can be partitioned into a finite number of cells that are labeled as symbols using a clustering algorithm such as uniform partitioning, maximum entropy partitioning (Rajagopalan & Ray, 2006), or the Jenks natural breaks algorithm (Jenks, 1967). The alphabet size for a continuous variable is a hyper-parameter which can be tuned by observing empirical marginal distributions.

Figures 1 and 2 illustrate the symbolization procedure in a simple example converting a synthetic 2-dimensional time series $\{Z_1, Z_2\}$ into a sequence of representations. The histogram of continuous variable $Z_1$ contains two Gaussian-like distributions and thus is partitioned into 2 splits, i.e. for any realization of this variable in the time series, the value is replaced with symbol $a_{Z_1}$ if it is less than 7, and with $b_{Z_1}$ otherwise. For discrete variable $Z_2$, assuming it has 5 categories $Z_2 \in \{C_1, C_2, C_3, C_4, C_5\}$, we assign symbol $a_{Z_2}$ to $C_1$, symbol $b_{Z_2}$ to $C_2$, and so on.

Figure 1: Partitioning of continuous variable $Z_1$ based on its histogram (the split separates the two clusters).

Figure 2: Heterogeneous time-series symbolization along with word embedding (WdE), shared character-wise embedding (SCE), and independent character-wise embedding (ICE). The illustrated example is:

time       0                        1                        2
$Z_1$      2.1                      11                       5
$Z_2$      $C_5$                    $C_2$                    $C_1$
symbols    $(a_{Z_1}, e_{Z_2})$     $(b_{Z_1}, b_{Z_2})$     $(a_{Z_1}, a_{Z_2})$
WdE        $v_{ae}$                 $v_{bb}$                 $v_{aa}$
SCE        $v^1_a + v^2_e$          $v^1_b + v^2_b$          $v^1_a + v^2_a$
ICE        $(v^1_a\ v^2_e)^T$       $(v^1_b\ v^2_b)^T$       $(v^1_a\ v^2_a)^T$
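For concreteness, a minimal NumPy sketch of the symbolization step described above is given below. It assumes quantile-based (maximum entropy) splits and integer symbol indices; the function names, toy data, and alphabet sizes are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def max_entropy_splits(values, alphabet_size):
    """Quantile-based (maximum entropy) partitioning: every cell receives
    roughly the same number of training observations."""
    qs = np.linspace(0, 100, alphabet_size + 1)[1:-1]   # e.g. 25th, 50th, 75th percentiles
    return np.percentile(values, qs)

def symbolize_continuous(values, splits):
    """Map continuous values to integer symbols 0..len(splits)."""
    return np.digitize(values, splits)

def symbolize_categorical(values, categories):
    """Map categorical values to integer symbols via their category index."""
    lookup = {c: i for i, c in enumerate(categories)}
    return np.array([lookup[v] for v in values])

# toy 2-dimensional series {Z1, Z2} in the spirit of Figure 2
z1 = np.array([2.1, 11.0, 5.0])                        # continuous variable
z2 = np.array(["C5", "C2", "C1"])                      # categorical variable
splits = max_entropy_splits(z1, alphabet_size=2)       # one split point for two symbols
s1 = symbolize_continuous(z1, splits)                  # integer symbols for Z1
s2 = symbolize_categorical(z2, ["C1", "C2", "C3", "C4", "C5"])  # [4, 1, 0] -> e, b, a
```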
3 REPRESENTATION LEARNING

In this section, we propose three methods to learn representations from symbolized data.

3.1 WORD EMBEDDING (WDE)

Symbolized sequences at each time-step can be used to form a word by orderly collecting the symbol realizations of the variables. Thus, each time-series is represented by a sequence of words where each word represents the state of the multi-variate input at a given time. In Figure 2, the word embedding vector (WdE) for word $w$ is shown as $v_w$. Each word of the symbolized sequence is considered as a descriptor of a “pattern” at a given time step. Even though the process of generating words ignores dependency among variables, it is reasonable to hypothesize that as long as a dataset is large enough and representative patterns occur frequently, an embedding layer along with a deep architecture should be able to capture the dependencies among the “patterns”. The set of words on the training data constructs a vocabulary. Rare words are excluded from the vocabulary and are all represented using an out-of-vocabulary (OOV) word. The OOV word is also used to represent words in the test set which are not present in the training data.

One natural choice for learning a representation of the symbolized sequence is to learn embeddings of the words within the vocabulary. This is done by learning an embedding matrix in $\mathbb{R}^{d \times v}$, where $d$ is the embedding size and $v$ is the vocabulary size (including the OOV word), similar to learning word embeddings in a natural language processing task (Dai & Le, 2015). One difference is that all words here have the same length as the number of input variables. Each multi-variate sample is thus represented using a $d$-dimensional vector. It should be noted that the embedding matrix is learned jointly along with the rest of the network to learn discriminative representations. It should also be noted that although the problem of having rare words in the training data is somewhat addressed by using OOV embedding vectors, this can limit the representational power if symbolization results in too many low-frequency words. Therefore, the quality of learning with word embeddings highly depends on the cardinality of the symbol set and the splits used for symbolization.

Figure 3: Empirical probability density of a continuous variable $Z$ is shown along with the corresponding independent character-wise embeddings. The variable has 4 symbols which are initialized to maintain the order information. During training, and after each gradient update, the representations are sorted to enforce the order constraint.

3.2 SHARED CHARACTER-WISE EMBEDDING (SCE)

The proposed word-embedding representation can capture the relation among multiple input variables given a sufficient amount of training samples. However, as discussed in the previous section, the proposed word-embedding representation learning needs careful selection of the alphabet size to avoid having too many low-frequency words and thus is inherently implausible to use in applications where the number of input time-series is too large. In this section, we propose an alternative character-level representation, which we call Shared Character-wise Embedding (SCE), to address this limitation while still being able to capture the dependencies among the inputs.

Instead of forming words at each time step, we use character embeddings to represent each symbol, and each observation at a time step is represented by the sum of all embedding vectors at that time step. To formulate this, consider m-dimensional time-series data where the symbol size for the $i$-th input is $s_i$. Let $e_{il} \in \mathbb{R}^{s_i}$ be the one-hot representation for symbol $l$ of the $i$-th input and $v_{il}$ be the corresponding embedding vector. Also let the embedding matrix be $[V_1 \dots V_m] \in \mathbb{R}^{d \times \sum_i s_i}$, where $V_i \in \mathbb{R}^{d \times s_i}$ is the collection of the embedding vectors for the $i$-th input. Then, a given input sample $x_1, x_2, \dots, x_m$ is represented as $\sum_i V_i e_{i x_i} \in \mathbb{R}^d$, where $x_i$ is the symbol realization of the $i$-th input. See Figure 2 for an example of the embedding (SCE) generated using this proposed representation. Since the representation of each word is constructed by summing the embeddings of individual characters, this method does not suffer from the unseen-word issue.
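A compact PyTorch sketch of the WdE lookup and the SCE summation described above is shown below; the module and dimension names are assumptions, and the snippet is only meant to make the two layer types concrete, not to reproduce the authors' code.

```python
import torch
import torch.nn as nn

class WdEEmbedding(nn.Module):
    """Word embedding (WdE): one d-dimensional vector per joint symbol pattern ("word")."""
    def __init__(self, vocab_size, d):
        super().__init__()
        self.table = nn.Embedding(vocab_size, d)     # one row reserved for the OOV word (assumption: index 0)

    def forward(self, word_ids):                     # word_ids: (batch, time) integer word indices
        return self.table(word_ids)                  # (batch, time, d)

class SCEEmbedding(nn.Module):
    """Shared character-wise embedding (SCE): embed every variable's symbol with its own
    d-dimensional table and sum the m vectors at each time step."""
    def __init__(self, symbol_sizes, d):
        super().__init__()
        self.tables = nn.ModuleList([nn.Embedding(s, d) for s in symbol_sizes])

    def forward(self, symbols):                      # symbols: (batch, time, m) integer symbol indices
        vecs = [table(symbols[..., i]) for i, table in enumerate(self.tables)]
        return torch.stack(vecs, dim=0).sum(dim=0)   # (batch, time, d)
```

Either representation can then be fed to a recurrent layer followed by a fully connected classification layer, as in the experiments of Section 5.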
3.3 INDEPENDENT CHARACTER-WISE EMBEDDING (ICE)

Although SCE does not suffer from the low-frequency issue of WdE, both of these representations do not capture the ordinal information in the input time series. In this section, we propose an Independent Character-wise Embedding (ICE) representation that maintains ordinal information for symbolized continuous variables and categorical variables that have order information. To enforce the order constraint, we embed each symbol with a scalar value. Each input $i \in \{1, \dots, m\}$ is embedded independently, and the resulting representation for a given sample $x_1, \dots, x_m$ is $[V_1 e_{1 x_1} \dots V_m e_{m x_m}]^T \in \mathbb{R}^m$, where $x_i$ is the symbol realization of the $i$-th input and $V_i$ is a row vector consisting of embedding scalars. The possible correlations among inputs are left to be captured by the layers following the embedding layer in the network. See Figure 2 for an example of generating the embedding vector (ICE) using the proposed algorithm.

The embedding scalars for each symbol are initialized to satisfy the order information, and during training we make sure that the learned representations satisfy the corresponding ordinal information, i.e. the embedding scalars of an ordinal variable are sorted after each gradient update. Figure 3 illustrates this process.

It should be noted that the embedding layer here has $\sum_i s_i$ parameters to learn and thus is slimmer compared to the $d \sum_i s_i$ parameters of the shared character-wise embedding proposed in the previous section. Both of the proposed character-wise representations have a more compact embedding layer than the word-embedding representation, which has $d \cdot v$ parameters, as the vocabulary size $v$ is usually large.
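The ICE layer with its order constraint can be sketched as follows; the initialization range and helper names are assumptions, and re-sorting after each update is one straightforward way to realize the constraint described above.

```python
import torch
import torch.nn as nn

class ICEEmbedding(nn.Module):
    """Independent character-wise embedding (ICE): one trainable scalar per symbol and per
    variable; the m scalars at a time step are concatenated into an m-dimensional vector."""
    def __init__(self, symbol_sizes):
        super().__init__()
        # initialize each variable's scalars in increasing order to respect ordinality
        self.scalars = nn.ParameterList(
            [nn.Parameter(torch.linspace(-1.0, 1.0, s)) for s in symbol_sizes]
        )

    def forward(self, symbols):                        # symbols: (batch, time, m) integer symbol indices
        cols = [self.scalars[i][symbols[..., i]] for i in range(symbols.shape[-1])]
        return torch.stack(cols, dim=-1)               # (batch, time, m)

    @torch.no_grad()
    def enforce_order(self):
        """Call after every gradient update: re-sort the scalars of ordinal variables."""
        for p in self.scalars:
            p.copy_(torch.sort(p)[0])
```

During training one would call enforce_order() right after each optimizer step.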
4 PREDICTION ARCHITECTURE

4.1 FORMULATION OF THE PREDICTION PROBLEM

In this section, we formulate the event prediction problem as a classification problem. Let $X = \{X_1, X_2, \dots, X_T\}$ be a time-ordered collection of observation sequences (or clips) collected over $T$ time steps, where $X_t = \{x_{t1}, \dots, x_{tN_t}\}$ represents the $t$-th sequence consisting of $N_t$ consecutive measurements. As the notation indicates, it is not assumed that the number of observations within each time step is constant. Let $\{l_1, l_2, \dots, l_T\}$ be the corresponding sequence labels for $X$, where $l_t \in \{0, 1\}$ encodes the presence of an event within the $t$-th time step, i.e. $l_t = 1$ indicates that a fault event is observed within the period of time in which input sequence $X_t$ is collected. We define target labels $y = \{y_1, y_2, \dots, y_T\}$ where $y_t = 1$ if an event is observed in the next $K$ time-steps, i.e. $\sum_{j=t+1}^{t+K} l_j > 0$, and $y_t = 0$ otherwise. In this formulation, $K$ indicates the prediction horizon and $y_t = 0$ indicates that no event is observed in the next $K$ time-steps, referred to as the monitor window in this paper. The prediction task is then defined as predicting $y_t$ given input $X_t$ and its corresponding past measurements $\{X_{t-1}, X_{t-2}, \dots, X_1\}$. Using the prediction labels $y$, the event prediction problem on time series data is converted into a classic binary classification problem. Note that although the proposed formulation can in theory utilize all the past measurements for classification, we usually fix a window size of $M$ past measurements to limit computational complexity. For instance, suppose that the $X_t$'s are sensory data measurements of a physical system collected on the $t$-th day of its operation, and let $K = 7$ and $M = 3$. Then the classification problem for $X_t$ is to predict $y_t$, i.e., whether an event is going to be observed in the coming week of the physical system's operation, given the current and past three days of measurements.

4.2 TEMPORAL WEIGHTING FUNCTION

In rare event prediction tasks, the number of positive data samples, i.e. data corresponding to occurrences of a target event, is much smaller than the number of negatives. If not taken care of, this class imbalance problem causes the decision boundary of a classifier to be dragged toward the data space where negative samples are distributed, artificially increasing the overall accuracy while resulting in a low detection rate. This is a classic problem in binary classification, and it is common practice to associate a larger misclassification cost with positive samples to address this issue (Bishop, 2001). However, simply assigning identical larger weights to positive samples in our prediction formulation cannot emphasize the importance of temporal data close to a target event occurrence. We hypothesize that data collected closer to an event occurrence should be more indicative of the upcoming error than data collected much earlier. Therefore, we design the following weighting function to deal with the temporal importance:

$$w_t = \begin{cases} \sum_{j=1}^{K} (K - j + 1)\, l_{t+j} & \text{if } y_t = 1 \\ 1 & \text{if } y_t = 0 \end{cases} \quad (1)$$

This weighting function gives relatively smaller weights to data far from event occurrences compared to those which are closer. In addition to emphasizing temporal importance, it also deals with overlapping events. For example, suppose that two errors are observed at time samples $t+1$ and $t+3$ and the prediction horizon $K$ is set to 5. Then input sample $X_t$ is within the monitor windows of both events and thus its weight is set to the higher value of $w_t = (5 - 1 + 1) + (5 - 3 + 1) = 8$, as a misclassification on this day may result in missing the prediction of two events. By weighting data samples in this way, a classifier is trained to adjust its decision boundary based on the importance information.

The above weight definition deals with the temporal importance information for event prediction. We also need to re-adjust the weights to address the discussed class imbalance issue. After determining the weight using Eq. 1 for each training sample, we re-normalize all weights such that the total sum of weights of positive samples becomes equal to the total sum of weights of negative samples.

The weighted cross-entropy loss function is used as the optimization criterion to find the parameters of our model. For a given input $X_t$ with weight $w_t$, target label $y_t$, and predicted label $\hat{y}_t$, the loss function is defined as:

$$l(y_t, \hat{y}_t) = -w_t \left( y_t \log \hat{y}_t + (1 - y_t) \log(1 - \hat{y}_t) \right) \quad (2)$$
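An illustrative NumPy reading of the target construction of Section 4.1 and the temporal weights of Eq. (1) is given below; the helper name and the exact placement of the re-normalization step are assumptions based on the description above.

```python
import numpy as np

def make_targets_and_weights(event_labels, K):
    """event_labels: l_1..l_T (0/1 per time step). Returns targets y_t and weights w_t
    following Eq. (1), then re-normalizes positives vs. negatives."""
    l = np.asarray(event_labels)
    T = len(l)
    y = np.zeros(T, dtype=int)
    w = np.ones(T)
    for t in range(T):
        future = l[t + 1 : t + 1 + K]                  # monitor window (length <= K near the end)
        if future.sum() > 0:
            y[t] = 1
            j = np.arange(1, len(future) + 1)
            w[t] = np.sum((K - j + 1) * future)        # closer events receive larger weight
    # re-normalize so positives and negatives carry equal total weight
    pos, neg = w[y == 1].sum(), w[y == 0].sum()
    if pos > 0:
        w[y == 1] *= neg / pos
    return y, w

# example from the text: events at t+1 and t+3 with K=5 give w_t = 5 + 3 = 8 before re-normalization
l = np.zeros(10, dtype=int); l[3] = 1; l[5] = 1
y, w = make_targets_and_weights(l, K=5)
```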
4.3 NETWORK ARCHITECTURE

Each of the proposed embedding layers can be used as the first layer of a deep architecture. The embedding layer along with the rest of the architecture is learned jointly to optimize a discriminative task, similar to natural language processing tasks (Gal, 2015). Thus, the embedding layer is trained to generate discriminative representations. The specific architectures are further discussed in the results section for each experiment.

5 RESULTS

5.1 HARD-DISK FAILURE PREDICTION

The Backblaze data center has released its hard drive datasets containing daily snapshots of S.M.A.R.T (Self-Monitoring, Analysis and Reporting Technology) statistics for each operational hard drive from 2013 to June 2016. The data of each hard drive are recorded until it fails. In this paper, the 2015 subset of the data on drive model “ST3000DM001” is used. As far as we know, no other prediction algorithm has been published on the data set of this model, and thus we have generated our own training and test split. The data consist of several models of hard drives. There are 59,340 hard drives, out of which 586 (less than 1%) had failed. The data of the following 7 columns of S.M.A.R.T raw statistics are used: 5, 183, 184, 187, 188, 193, 197. These columns correspond to accumulative count values of different monitored features. We also added the absolute difference between count values of consecutive days for each input column, resulting in 14 columns overall. The data has missing values, which are imputed using linear interpolation. The task is formulated to predict whether there is a failure in the next K = 3 days given the current and past 4 days of data.

The dataset is randomly split into a training set (containing 486 positives) and a test set (containing 100 positives) using the hard disk serial number and without losing the temporal information. Thus the training and test sets do not share any hard disk. For the experiment using word embedding (WdE), the data are symbolized with splits determined by observation of the empirical histogram of every variable. The vocabulary is constructed using all words that have a frequency of more than one. The remaining rare words are all mapped to the OOV word, resulting in a vocabulary size of 509. For the experiments using shared and independent character-wise embedding, dubbed SCE and ICE respectively, partitioning is done using maximum entropy partitioning (Bahrampour et al., 2013) with an alphabet size of 4, i.e. the first split is at the 25-th percentile, the second split is at the 50-th percentile, and so on. The embedding sizes for WdE and SCE are selected as 16 and 2, respectively, using cross validation. Each of the proposed embedding layers is then followed by an LSTM (Hochreiter & Schmidhuber, 1997) layer with 8 hidden units and a fully connected layer for binary classification. Temporal weighting is not used here as it did not seem to be effective on this dataset, but the cost-sensitive formulation is used to deal with this imbalanced dataset. As baseline methods, we also provide results using a logistic regression classifier (LR), random forest (RF), and an LSTM trained on normalized raw data (without symbolization). For LR and RF, the five days of input data are concatenated to form the feature vector. The RF algorithm consists of 1000 decision trees. The LSTM networks are trained using the ADAM algorithm (Kingma & Ba, 2014) with the default learning rate of 0.001. Table 1 summarizes the performance on the test data set. We report the balanced accuracy (the arithmetic mean of the true positive and true negative rates) and the area under the curve (AUC) of the ROC as performance metrics. The balanced accuracy numbers are generated by picking a threshold on the ROC that maximizes true prediction while maintaining a false positive rate of at most 0.05. As can be seen, the proposed character-level embedding algorithms result in the best performances. It should be noted that the input data are summary statistics, and not raw time-series data, and thus, as seen, the LR and LSTM algorithms without symbolization perform reasonably well.
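One plausible way to compute the reported operating point, i.e. the balanced accuracy at a false positive rate of at most 0.05, is sketched below with scikit-learn; the function name and the tie-breaking rule are assumptions rather than the authors' exact evaluation code.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def balanced_accuracy_at_fpr(y_true, scores, max_fpr=0.05):
    """Pick the ROC operating point with the highest TPR subject to FPR <= max_fpr,
    then report the balanced accuracy (mean of TPR and TNR) at that threshold."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    ok = fpr <= max_fpr
    best = np.argmax(tpr[ok])
    return 0.5 * (tpr[ok][best] + (1.0 - fpr[ok][best]))

# scores = predicted failure probabilities on the test split
# print(balanced_accuracy_at_fpr(y_test, scores), roc_auc_score(y_test, scores))
```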
5.2 SEIZURE PREDICTION

We have also compared the performance of the proposed algorithms for seizure prediction. The data set is from the Kaggle American Epilepsy Society Seizure Prediction Challenge 2014 and consists of intracranial EEG (iEEG) clips from 16 channels collected from dogs and humans. We used the data collected from dogs in our experiments, not including the data from “Dog_5” as the data from one channel is missing. We generated the test sets from the training data by randomly selecting 20% of the one-hour clips. The length of each clip is 240,000. We have down-sampled them to 1,200 for efficient processing using recurrent networks. Five-fold cross validation is used to report the results.

Table 1: Performance comparison of fault prediction methods on the Backblaze Reliability dataset. Baseline methods include random forest (RF), logistic regression classifier (LR), and LSTM, trained without symbolization. The three proposed embedding representations on symbolized data are WdE-LSTM, SCE-LSTM, and ICE-LSTM.

Models      Balanced Accuracy   AUC of ROC
RF          0.803               0.804
LR          0.846               0.851
LSTM        0.832               0.865
WdE-LSTM    0.834               0.812
SCE-LSTM    0.855               0.841
ICE-LSTM    0.835               0.893

Table 2: Performance comparison of the iRNN on raw data as well as the iRNN using the proposed character-level embedding methods on symbolized data for seizure prediction.

Models      Balanced Accuracy   AUC of ROC
iRNN        0.669               0.69
SCE-iRNN    0.761               0.811
ICE-iRNN    0.77                0.818

Table 3: Performance comparison of LR and LSTM on hand-designed features as well as the results generated using LSTM with the proposed embedding methods on symbolized data for fault prediction on the Thermo-technology dataset.

Models      Balanced Accuracy   AUC of ROC
LR          0.685               0.716
LSTM        0.735               0.766
WdE-LSTM    0.729               0.759
SCE-LSTM    0.7                 0.697
ICE-LSTM    0.733               0.769

For the WdE algorithm, we observed that 90% of the total words generated, using an alphabet size of 4, have a frequency of one; these are all mapped to OOV, which resulted in poor performance. We also observed that using a smaller alphabet size for WdE is not helpful, resulting in too much loss of information, and thus the performance of the WdE algorithm is not reported here. For the character-level embedding algorithms, maximum entropy partitioning is used with an alphabet size of 50. The network used here consists of a one-dimensional convolutional layer with 16 filters, a filter length of 3 and stride of 1, followed by a pooling layer of size 2 and stride 2 and an iRNN layer (Le et al., 2015) with 32 hidden units. We observed better results using the iRNN than the LSTM, and thus we report the performance using the iRNN. We have also reported the performance using the same network on the raw EEG data. The performance of these methods is summarized in Table 2. As can be seen, the proposed ICE embedding resulted in the best performance.

5.3 HEATING SYSTEM FAILURE PREDICTION

We also applied our method to an internal large dataset containing sensor information from thermo-technology heating systems. This dataset contains 132,755 time-series of 20 variables, where each time-series is the data collected within one day. Nine of the variables are continuous and the remaining 11 variables are categorical. The task is to predict whether a heating system will have a failure in the coming week. The dataset is highly imbalanced, where more than 99% of the data have no fault in the next seven days. After symbolizing the training data, for the experiment using word embedding, the words that have a relative frequency of less than 1% are considered as OOV words. The average length of each sequence is 6,000. The embedding dimensions for the WdE and SCE algorithms are both chosen as 20 using a validation set. The network architecture includes an LSTM layer with 15 hidden units and a fully connected layer of dimension 50, which is followed by the final fully connected layer for binary classification. The same model architecture was used for the experiments with shared and independent character-wise embedding.

A simple trick is used to increase the use of GPU parallel computing power during the training phase, due to the large size of the training samples. For a given training time-series with $T$ words, instead of sequentially feeding entire samples to the network, the time-series is first divided into $M$ sub-sequences of maximum length $\frac{T}{M}$, where each of these sub-sequences is processed independently and in parallel. A max-pooling layer is then used on these feature vectors to get the final feature vector which represents the entire time-series. The generated feature is fed into a feed-forward neural network with a sigmoid activation function to generate the predictions for the binary classification task. We call this sequence division technique sequence chopping. Even though training an LSTM with this technique sacrifices temporal dependencies longer than $\frac{T}{M}$ time steps, we have observed that by selecting a suitable chopping size, we can achieve competitive results and at the same time a significant speed-up of the training procedure.
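A minimal PyTorch sketch of the sequence chopping trick is shown below; it assumes $T$ is divisible by the number of chunks and uses illustrative layer sizes, whereas the network described above additionally includes a 50-dimensional fully connected layer before the output.

```python
import torch
import torch.nn as nn

class ChoppedLSTMClassifier(nn.Module):
    """Sequence chopping: split an embedded length-T sequence into M sub-sequences,
    run the LSTM over them in parallel as a larger batch, then max-pool the M features."""
    def __init__(self, d_in, d_hidden, n_chunks):
        super().__init__()
        self.n_chunks = n_chunks
        self.lstm = nn.LSTM(d_in, d_hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(d_hidden, 1), nn.Sigmoid())

    def forward(self, x):                       # x: (batch, T, d_in) with T divisible by n_chunks
        b, T, d = x.shape
        chunks = x.reshape(b * self.n_chunks, T // self.n_chunks, d)
        _, (h, _) = self.lstm(chunks)           # h[-1]: (b * n_chunks, d_hidden) last hidden states
        h = h[-1].reshape(b, self.n_chunks, -1)
        pooled = h.max(dim=1).values            # max-pooling over the chunk features
        return self.head(pooled).squeeze(-1)    # probability of a failure in the monitor window
```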
The performance of a model having a similar LSTM architecture, trained on 119 hand-engineered features, is reported in Table 3. It should be noted that the hand-engineered features have evolved over years of domain expertise and asset monitoring in the field. The results indicate that our methods achieve competitive performance without the need for the costly process of hand-designing features.

6 CONCLUSIONS

We proposed three embedding algorithms on symbolized input sequences, namely WdE, SCE, and ICE, for event classification on heterogeneous time-series data. The proposed methods enable feeding symbolized time-series directly into a deep network and learning a discriminative representation in an end-to-end fashion which is optimized for the given task. The experimental results on three real-world datasets demonstrate the effectiveness of the proposed algorithms, removing the need to perform the costly and sub-optimal process of hand-engineering features for time-series classification.

REFERENCES

Jürgen Altmann. Acoustic and seismic signals of heavy military vehicles for co-operative verification. Journal of Sound and Vibration, 273(4):713–740, 2004.

Soheil Bahrampour, Asok Ray, Soumalya Sarkar, Thyagaraju Damarla, and Nasser M. Nasrabadi. Performance comparison of feature extraction algorithms for target detection and classification. Pattern Recognition Letters, 34(16):2126–2134, 2013.

Soheil Bahrampour, Nasser M Nasrabadi, Asok Ray, and William Kenneth Jenkins. Multimodal task-driven dictionary learning for image classification. IEEE Trans. on Image Processing, 25(1):24–38, 2016.

CM Bishop. Bishop pattern recognition and machine learning, 2001.

Andrew M Dai and Quoc V Le. Semi-supervised sequence learning. In Advances in Neural Information Processing Systems, pp. 3079–3087, 2015.

Jeffrey L Elman. Finding structure in time. Cognitive Science, 14(2):179–211, 1990.

Y. Gal. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks. ArXiv, 2015.

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. Book in preparation for MIT Press, 2016. URL http://www.deeplearningbook.org.

Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 6645–6649. IEEE, 2013.

Shalabh Gupta and Asok Ray. Symbolic dynamic filtering for data-driven pattern recognition. Pattern Recognition: Theory and Application, pp.
17–71, 2007.8Under review as a conference paper at ICLR 2017Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger,Sanjeev Satheesh, Shubho Sengupta, Adam Coates, et al. Deep speech: Scaling up end-to-endspeech recognition. arXiv preprint arXiv:1412.5567 , 2014.Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation , 9(8):1735–1780, 1997.Ke Huang and Selin Aviyente. Sparse representation for signal classification. In Advances in neuralinformation processing systems , pp. 609–616, 2006.N. S. Jayant and Peter Noll. Digital Coding of Waveforms, Principles and Applications to Speechand Video , pp. 688. Prentice-Hall, 1984.George F. Jenks. The data model concept in statistical mapping. International yearbook of cartogra-phy, 7(1), 1967.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980 , 2014.Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolu-tional neural networks. In Advances in neural information processing systems , pp. 1097–1105,2012a.Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolu-tional neural networks. In Advances in neural information processing systems , pp. 1097–1105,2012b.Quoc V Le, Navdeep Jaitly, and Geoffrey E Hinton. A simple way to initialize recurrent networks ofrectified linear units. arXiv preprint arXiv:1504.00941 , 2015.Dan Li, Kerry D Wong, Yu Hen Hu, and Akbar M Sayeed. Detection, classification, and tracking oftargets. IEEE signal processing magazine , 19(2):17–29, 2002.Zhiyuan Lu, Xiang Chen, Qiang Li, Xu Zhang, and Ping Zhou. A hand gesture recognition frameworkand wearable gesture-based interaction prototype for mobile devices. IEEE Transactions onHuman-Machine Systems , 44(2):293–299, 2014.Julien Mairal, Francis Bach, and Jean Ponce. Task-driven dictionary learning. IEEE Transactions onPattern Analysis and Machine Intelligence , 34(4):791–804, 2012.Ingo Mierswa and Katharina Morik. Automatic feature extraction for classifying audio data. Machinelearning , 58(2-3):127–149, 2005.Piotr Mirowski, Deepak Madhavan, Yann LeCun, and Ruben Kuzniecky. Classification of patterns ofeeg synchronization for seizure prediction. Clinical neurophysiology , 120(11):1927–1940, 2009.Joseph F Murray, Gordon F Hughes, and Kenneth Kreutz-Delgado. Machine learning methods forpredicting failures in hard drives: A multiple-instance application. Journal of Machine LearningResearch , 6(May):783–816, 2005.Venkatesh Rajagopalan and Asok Ray. Symbolic time series analysis via wavelet-based partitioning.Signal Processing , 86(11):3309 – 3320, 2006.Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks.InAdvances in neural information processing systems , pp. 3104–3112, 2014.9Under review as a conference paper at ICLR 20177 A PPENDIXTable 4: Number of trainable parameters for the three proposed embedding layers as well as totalparameters of the network used in the three studied application. For the WdE on heating systemdata, an approximate number is provided as the vocabulary size is dependent on the training set usedamong the three cross-validations splits.Hard-Disk data Seizure data Heating System dataMethod#. of embed.parameters#. of totalparameters#. of embed.parameters#. of totalparameters#. of embed.parameters#. of totalparametersWdE 8144 8953 N/A N/A 713000715000SCE 112 501 2400 3569 1335 4046ICE 56 815 80 2465 69 310010
H1eu-hlVl
H12GRgcxg
ICLR.cc/2017/conference/-/paper166/official/review
{"title": "Training with Noisy Labels", "rating": "5: Marginally below acceptance threshold", "review": "This work address the problem of supervised learning from strongly labeled data with label noise. This is a very practical and relevant problem in applied machine learning. The authors note that using sampling approaches such as EM isn't effective, too slow and cannot be integrated into end-to-end training. Thus, they propose to simulate the effects of EM by a noisy adaptation layer, effectively a softmax, that is added to the architecture during training, and is omitted at inference time. The proposed algorithm is evaluated on MNIST and shows improvements over existing approaches that deal with noisy labeled data.\n\nA few comments.\n1. There is no discussion in the work about the increased complexity of training for the model with two softmaxes. \n\n2. What is the rationale for having consecutive (serialized) softmaxes, instead of having a compound objective with two losses, or a network with parallel losses and two sets of gradients?\n\n3. The proposed architecture with only two hidden layers isn't not representative of larger and deeper models that are practically used, and it is not clear that shown results will scale to bigger networks. \n\n4. Why is the approach only evaluated on MNIST, a dataset that is unrealistically simple.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Training deep neural-networks using a noise adaptation layer
["Jacob Goldberger", "Ehud Ben-Reuven"]
The availability of large datsets has enabled neural networks to achieve impressive recognition results. However, the presence of inaccurate class labels is known to deteriorate the performance of even the best classifiers in a broad range of classification problems. Noisy labels also tend to be more harmful than noisy attributes. When the observed label is noisy, we can view the correct label as a latent random variable and model the noise processes by a communication channel with unknown parameters. Thus we can apply the EM algorithm to find the parameters of both the network and the noise and to estimate the correct label. In this study we present a neural-network approach that optimizes the same likelihood function as optimized by the EM algorithm. The noise is explicitly modeled by an additional softmax layer that connects the correct labels to the noisy ones. This scheme is then extended to the case where the noisy labels are dependent on the features in addition to the correct labels. Experimental results demonstrate that this approach outperforms previous methods.
["Deep learning", "Optimization"]
https://openreview.net/forum?id=H12GRgcxg
https://openreview.net/pdf?id=H12GRgcxg
https://openreview.net/forum?id=H12GRgcxg&noteId=H1eu-hlVl
Published as a conference paper at ICLR 2017TRAINING DEEP NEURAL -NETWORKS USING A NOISEADAPTATION LAYERJacob Goldberger & Ehud Ben-ReuvenEngineering Faculty, Bar-Ilan University,Ramat-Gan 52900, Israeljacob.goldberger@biu.ac.il,udi.benreuven@gmail.comABSTRACTThe availability of large datsets has enabled neural networks to achieve impressiverecognition results. However, the presence of inaccurate class labels is known todeteriorate the performance of even the best classifiers in a broad range of classi-fication problems. Noisy labels also tend to be more harmful than noisy attributes.When the observed label is noisy, we can view the correct label as a latent ran-dom variable and model the noise processes by a communication channel withunknown parameters. Thus we can apply the EM algorithm to find the parametersof both the network and the noise and estimate the correct label. In this study wepresent a neural-network approach that optimizes the same likelihood function asoptimized by the EM algorithm. The noise is explicitly modeled by an additionalsoftmax layer that connects the correct labels to the noisy ones. This scheme isthen extended to the case where the noisy labels are dependent on the features inaddition to the correct labels. Experimental results demonstrate that this approachoutperforms previous methods.1 I NTRODUCTIONThe presence of class label noise inherent to training samples has been reported to deteriorate theperformance of even the best classifiers in a broad range of classification problems (Nettleton et al.(2010), Pechenizkiy et al. (2006), Zhu & Wu (2004)). Noisy labels also tend to be more harmfulthan noisy attributes (Zhu & Wu (2004)). Noisy data are usually related to the data collectionprocess. Typically, the labels used to train a classifier are assumed to be unambiguous and accurate.However, this assumption often does not hold since labels that are provided by human judgmentsare subjective. Many of the largest image datasets have been extracted from social networks. Theseimages are labeled by non-expert users and building a consistent model based on a precisely labeledtraining set is very tedious. Mislabeling examples have been reported even in critical applicationssuch as biomedical datasets where the available data are restricted (Alon et al. (1999)). A verycommon approach to noisy datasets is to remove the suspect samples in a preprocessing stage or havethem relabeled by a data expert (Brodley & Friedl (1999)). However, these methods are not scalableand may run the risk of removing crucial examples that can impact small datasets considerably.Variants that are noise robust have been proposed for the most common classifiers such as logistic-regression and SVM (Fr ́enay & Verleysen (2014), Jakramate & Kab ́an (2012), Beigman & Klebanov(2009)). However, classifiers based on label-noise robust algorithms are still affected by label noise.From a theoretical point of view, Bartlett et al. (2006) showed that most loss functions are not com-pletely robust to label noise. Natarajan et al. (2013) proposed a generic unbiased estimator for binaryclassification with noisy labels. They developed a surrogate cost function that can be expressed bya weighted sum of the original cost functions, and provided asymptotic bounds for performance.Grandvalet & Bengio (2005) addressed the problem of missing labels that can be viewed as an ex-treme case of noisy label data. 
They suggested a semi-supervised algorithm that encourages theclassifier to predict the non-labeled data with high confidence by adding a regularization term to thecost function. The problem of classification with label noise is an active research area. Comprehen-sive up-to-date reviews of both the theoretical and applied aspects of classification with label noisecan be found in Fr ́enay & Kaban (2014) and Fr ́enay & Verleysen (2014).1Published as a conference paper at ICLR 2017In spite of the huge success of deep learning there are not many studies that have explicitly attemptedto address the problem of Neural Net (NN) training using data with unreliable labels. Larsen et al.(1998) introduced a single noise parameter that can be calculated by adding a new regularizationterm and cross validation. Minh & Hinton (2012) proposed a more realistic noise model that de-pends on the true label. However, they only considered the binary classification case. Sukhbaatar& Fergus (2014) recently proposed adding a constrained linear layer at the top of the softmax layer,and showed that only under some strong assumptions can the linear layer be interpreted as the tran-sition matrix between the true and noisy (observed) labels and the softmax output layer as the trueprobabilities of the labels. Reed et al. (2014) suggested handling the unreliability of the training datalabels by maximizing the likelihood function with an additional classification entropy regularizationterm.The correct unknown label can be viewed as a hidden random variable. Hence, it is natural to applythe EM algorithm where in the E-step we estimate the true label and in the M-step we retrain thenetwork. Several variations of this paradigm have been proposed (e.g. Minh & Hinton (2012),Bekker & Goldberger (2016)). However, iterating between EM-steps and neural network trainingdoes not scale well. In this study we use latent variable probabilistic modeling but we optimize thelikelihood score function within the framework of neural networks. Current noisy label approachesassume either implicitly or explicitly that, given the correct label, the noisy label is independentof the feature vector. This assumption is probably needed to simplify the modeling and deriveapplicable learning algorithms. However, in many cases this assumption is not realistic since awrong annotation is more likely to occur in cases where the features are misleading. By contrast,our framework makes it easy to extend the proposed learning algorithm to the case where the noiseis dependent on both the correct label and the input features. In the next section we describe a modelformulation and review the EM based approach. In Section 3 we described our method which isbased on adding another softmax layer to the network and in Section 4 we present our results.2 A PROBABILISTIC FRAMEWORK FOR NOISY LABELSAssume we want to train a multi-class neural-network soft-classifier p(y=ijx;w)wherexis thefeature vector, wis the network parameter-set and iis a member of the class-set f1;:::;kg. Wefurther assume that in the training process we cannot directly observe the correct label y. Instead,we only have access to a noisy version of it denoted by z. Here we follow the probabilistic modelingand the EM learning approach described in Bekker & Goldberger (2016). In this approach noisegeneration is assumed to be independent of the features and is modeled by a parameter (i;j) =p(z=jjy=i). The noise distribution is unknown and we want to learn it as part of the trainingphase. 
The probability of observing a noisy label $z$ given the feature vector $x$ is:

$$p(z=j\,|\,x;w,\theta) = \sum_{i=1}^{k} p(z=j\,|\,y=i;\theta)\, p(y=i\,|\,x;w) \qquad (1)$$

where $k$ is the number of classes. The model is illustrated in the following diagram:

[Diagram: $x \rightarrow$ Neural-Network ($w$) $\rightarrow y \rightarrow$ noisy channel ($\theta$) $\rightarrow z$]

In the training phase we are given $n$ feature vectors $x_1,\ldots,x_n$ with the corresponding noisy labels $z_1,\ldots,z_n$, which are viewed as noisy versions of the correct hidden labels $y_1,\ldots,y_n$. The log-likelihood of the model parameters is:

$$L(w,\theta) = \sum_{t=1}^{n} \log\Big(\sum_{i=1}^{k} p(z_t\,|\,y_t=i;\theta)\, p(y_t=i\,|\,x_t;w)\Big) \qquad (2)$$

Based on the training data, the goal is to find both the noise distribution $\theta$ and the Neural Network parameters $w$ that maximize the likelihood function. Since the random variables $y_1,\ldots,y_n$ are hidden, we can apply the EM algorithm to find the maximum-likelihood parameter set. In the E-step of each EM iteration we estimate the hidden true data labels based on the noisy labels and the current parameters:

$$c_{ti} = p(y_t=i\,|\,x_t,z_t;w_0,\theta_0), \qquad i=1,\ldots,k, \;\; t=1,\ldots,n \qquad (3)$$

where $w_0$ and $\theta_0$ are the current parameter estimations. In the M-step we update both the NN and the noisy-channel parameters. The updated noise distribution has a closed-form solution:

$$\theta(i,j) = \frac{\sum_t c_{ti}\,\mathbf{1}_{\{z_t=j\}}}{\sum_t c_{ti}}, \qquad i,j \in \{1,\ldots,k\} \qquad (4)$$

The $k\times k$ matrix $\theta$ can be viewed as a confusion matrix between the soft estimates of the true label $\{c_{ti}\,|\,i=1,\ldots,k\}$ and the observed noisy labels $z_t$. As part of the EM M-step, to find the updated NN parameter $w$ we need to maximize the following function:

$$S(w) = \sum_{t=1}^{n}\sum_{i=1}^{k} c_{ti} \log p(y_t=i\,|\,x_t;w) \qquad (5)$$

which is a soft version of the likelihood function of the fully observed case, based on the current estimate of the true labels. The back-propagation derivatives of the function (5) that we maximize in the M-step are:

$$\frac{\partial S}{\partial u_i} = \sum_{t=1}^{n}\big(p(y_t=i\,|\,x_t,z_t;w_0,\theta_0) - p(y_t=i\,|\,x_t;w)\big)\, h(x_t) \qquad (6)$$

such that $h$ is the final hidden layer and $u_1,\ldots,u_k$ are the parameters of the soft-max output layer.

The method reviewed here is closely related to the work of Minh & Hinton (2012). They addressed the problem of mislabeled data points in a particular type of dataset (aerial images). The main difference is that in their approach the noise parameters are not learned. Instead they assume that the noise model can be separately tuned using a validation set or set by hand. Note that even if the true noise parameters are given, we still need to apply the EM iterative procedure. However, this assumption makes the interaction between the E-step and the NN learning much easier, since each time a data-point $x_t$ is visited we can compute $p(y_t=i\,|\,x_t,z_t)$ based on the current network parameters and the pre-defined noise parameters. Motivated by the need for model compression, Hinton et al. (2014) introduced an approach to learn a "distilled" model by training a more compact neural network to reproduce the output of a larger network. Using the notation defined above, in the second training stage they actually optimized the cost function $S(w) = \sum_{t=1}^{n}\sum_{i=1}^{k} p(y_t=i\,|\,x_t;w_0,\theta_0) \log p(y_t=i\,|\,x_t;w)$ such that $w_0$ is the parameter of the larger network that was trained using the labels $z_1,\ldots,z_n$, $w$ is the parameter of the smaller network, and $\theta_0(i,j)$ in this case is a non-informative distribution (i.e. $\theta_0(i,j) = 1/k$).

There are several drawbacks to the EM-based approach described above. The EM algorithm is a greedy optimization procedure that is notoriously known to get stuck in local optima. Another potential issue with combining neural networks and EM is scalability. The framework requires training a neural network in each iteration of the EM algorithm. For real-world, large-scale networks, even a single training iteration is a non-trivial challenge. Moreover, in many domains (e.g. object recognition in images) the number of labels is very large, so many EM iterations are likely to be needed for convergence. Another drawback of the probabilistic models is that they are based on the simplistic assumption that the noise error is only based on the true labels but not on the input features. In this study we propose a method for training neural networks with noisy labels that successfully addresses all these problems.
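To make the E-step/M-step updates of Eqs. (3)-(5) concrete, here is a minimal NumPy sketch (it is not the paper's code): `predict_proba` is a hypothetical stand-in for the current network's class-posterior computation, and all other names are illustrative.

```python
# Minimal NumPy sketch of one EM noise-estimation pass (Eqs. 3-4), assuming a
# hypothetical predict_proba(X) that returns the current network's posteriors
# p(y=i|x_t; w) as an (n, k) array, and z holding the observed noisy labels.
import numpy as np

def em_noise_update(predict_proba, X, z, theta, n_classes, eps=1e-12):
    p_y = predict_proba(X)                        # (n, k): p(y_t = i | x_t; w)
    # E-step (Eq. 3): posterior of the true label given the observed noisy label.
    c = p_y * theta[:, z].T                       # [t, i] = p(y=i|x_t) * theta(i, z_t)
    c /= c.sum(axis=1, keepdims=True) + eps       # normalize over the true label i
    # M-step for the noise channel (Eq. 4): a soft confusion matrix.
    new_theta = np.zeros((n_classes, n_classes))
    for j in range(n_classes):
        new_theta[:, j] = c[z == j].sum(axis=0)   # numerator: sum_t c_ti * 1{z_t = j}
    new_theta /= new_theta.sum(axis=1, keepdims=True) + eps
    return c, new_theta
```

The returned soft targets `c` are exactly the weights $c_{ti}$ that appear in the M-step objective of Eq. (5) when the network itself is retrained.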
3 TRAINING DEEP NEURAL NETWORKS USING A NOISE ADAPTATION LAYER

In the previous section we utilized the EM algorithm to optimize the noisy-label likelihood function (2). In this section we describe an algorithm that optimizes the same function within the framework of neural networks. Assume the neural network classifier we are using is based on non-linear intermediate layers followed by a soft-max output layer used for soft classification. Denote the non-linear function applied on an input $x$ by $h=h(x)$ and denote the soft-max layer that predicts the true label $y$ by:

$$p(y=i\,|\,x;w) = \frac{\exp(u_i^\top h + b_i)}{\sum_{l=1}^{k}\exp(u_l^\top h + b_l)}, \qquad i=1,\ldots,k \qquad (7)$$

where $w$ is the network parameter-set (including the softmax layer). We next add another softmax output layer to predict the noisy label $z$ based on both the true label and the input features:

$$p(z=j\,|\,y=i,x) = \frac{\exp(u_{ij}^\top h + b_{ij})}{\sum_{l}\exp(u_{il}^\top h + b_{il})} \qquad (8)$$

$$p(z=j\,|\,x) = \sum_i p(z=j\,|\,y=i,x)\, p(y=i\,|\,x) \qquad (9)$$

We can also define a simplified version where the noisy label only depends on the true label; i.e. we assume that label flips are independent of $x$:

$$p(z=j\,|\,y=i) = \frac{\exp(b_{ij})}{\sum_l \exp(b_{il})} \qquad (10)$$

$$p(z=j\,|\,x) = \sum_i p(z=j\,|\,y=i)\, p(y=i\,|\,x) \qquad (11)$$

We denote the two noise modeling variants as the complex model (c-model) (8) and the simple model (s-model) (10). Hereafter we use the notation $w_{\mathrm{noise}}$ for all the parameters of the second softmax layer, which can be viewed as a noise adaptation layer.

In the training phase we are given $n$ feature vectors $x_1,\ldots,x_n$ with corresponding noisy labels $z_1,\ldots,z_n$, which are viewed as noisy versions of the correct hidden labels $y_1,\ldots,y_n$. The log-likelihood of the model parameters is:

$$S(w, w_{\mathrm{noise}}) = \sum_t \log p(z_t\,|\,x_t) = \sum_t \log\Big(\sum_i p(z_t\,|\,y_t=i, x_t; w_{\mathrm{noise}})\, p(y_t=i\,|\,x_t;w)\Big) \qquad (12)$$

Since the noise is modeled by adding another layer to the network, the score $S(w, w_{\mathrm{noise}})$ can be optimized using standard techniques for neural network training. By setting

$$p(z=j\,|\,y=i) = \theta(i,j) = \frac{\exp(b_{ij})}{\sum_l \exp(b_{il})} \qquad (13)$$

it can be easily verified that, by using either the EM algorithm (2) or the s-model neural network scheme (12), we are actually optimizing exactly the same function. Thus the neural network with the s-model noise adaptation layer provides an alternative optimization strategy to the EM algorithm. Instead of alternating between optimizing the noise model and the network classifier, we consider them as components of the same network and optimize them simultaneously.

[Figure 1: An illustration of the noisy-label neural network architecture for the training phase (above: $x \rightarrow$ non-linear function ($w$) $\rightarrow h \rightarrow$ soft-max ($w$) $\rightarrow y$, followed by $(h, y) \rightarrow$ soft-max ($w_{\mathrm{noise}}$) $\rightarrow z$) and the test phase (below: $x \rightarrow$ non-linear function ($w$) $\rightarrow h \rightarrow$ soft-max ($w$) $\rightarrow y$).]

Note that in the c-model, where the noise is also dependent on the input features, we can still apply the EM algorithm to learn the parameters of the additional noise layer. However, there is no closed-form solution in the M-step for the optimal parameters, and we need to apply neural-network training in the M-step to find the noise-layer parameters.
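As a concrete illustration of Eqs. (7)-(12), the following is a hedged PyTorch sketch of a noise adaptation layer covering both variants. The authors' reference implementation (linked in Section 4) is in Keras, so this re-implementation and its class and argument names are assumptions made here for illustration, not the original code.

```python
# Illustrative PyTorch sketch of the s-model / c-model noise adaptation layer;
# not the authors' Keras reference code. h is the hidden representation of a
# base network, shape (batch, hidden_dim).
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseAdaptationHead(nn.Module):
    def __init__(self, hidden_dim, n_classes, complex_model=False):
        super().__init__()
        self.true_softmax = nn.Linear(hidden_dim, n_classes)       # Eq. (7)
        # Bias terms b_ij of the second softmax; row i indexes the true label.
        self.noise_bias = nn.Parameter(torch.zeros(n_classes, n_classes))
        # Feature-dependent terms u_ij of the c-model (Eq. 8); absent in the s-model.
        self.noise_linear = (
            nn.Linear(hidden_dim, n_classes * n_classes, bias=False)
            if complex_model else None
        )
        self.n_classes = n_classes

    def forward(self, h):
        log_p_y = F.log_softmax(self.true_softmax(h), dim=-1)      # log p(y=i | x)
        logits = self.noise_bias.unsqueeze(0)                      # (1, k, k)
        if self.noise_linear is not None:
            logits = logits + self.noise_linear(h).view(-1, self.n_classes, self.n_classes)
        log_p_z_given_y = F.log_softmax(logits, dim=-1)            # log p(z=j | y=i, x)
        # Eqs. (9)/(11): p(z=j|x) = sum_i p(z=j|y=i, x) p(y=i|x), computed in log space.
        log_p_z = torch.logsumexp(log_p_y.unsqueeze(-1) + log_p_z_given_y, dim=1)
        return log_p_y, log_p_z
```

Training would minimize `F.nll_loss(log_p_z, noisy_labels)`, i.e. the negative of Eq. (12); at test time only `log_p_y` is read out, which corresponds to removing the adaptation layer as in Figure 1.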
At test time we want to predict the true labels. Hence, we remove the last softmax layer that aims to get rid of the noise in the training set. We compute the true-label softmax estimation $p(y=i\,|\,x;w)$ (7). The proposed architecture for training the neural network based on training data with noisy labels is illustrated in Figure 1.

There are degrees of freedom in the two-softmax-layer model. Hence, a careful initialization of the parameters of the noise adaptation layer is crucial for successful convergence of the network into a good classifier of the correct labels at test time. We used the parameters of the original network to initialize the parameters of the s-model network that contains the noise adaptation layer. We can initialize the softmax parameters of the s-model by assuming a small uniform noise $\epsilon$:

$$b_{ij} = \log\big((1-\epsilon)\,\mathbf{1}_{\{i=j\}} + \tfrac{\epsilon}{k-1}\,\mathbf{1}_{\{i\neq j\}}\big)$$

such that $k$ is the number of different classes. A better procedure is to first train the original NN without the noise-adaptation layer, ignoring the fact that the labels are noisy. We can then treat the labels produced by the NN as the true labels, compute the confusion matrix on the train set, and use it as an initial value for the bias parameters:

$$b_{ij} = \log\Big(\frac{\sum_t \mathbf{1}_{\{z_t=j\}}\, p(y_t=i\,|\,x_t)}{\sum_t p(y_t=i\,|\,x_t)}\Big)$$

such that $x_1,\ldots,x_n$ are the feature vectors of the training dataset and $z_1,\ldots,z_n$ are the corresponding noisy labels. So far we have concentrated on parameter initialization for the s-model. The strategy that works best to initialize the c-model parameters is to use the parameters that were optimized for the s-model. In other words, we set the linear terms $u_{ij}$ to zero and initialize the bias terms $b_{ij}$ with the values that were optimized by the s-model.

The computational complexity of the proposed method is quadratic in the size of the class-set. Suppose there are $k$ classes to predict; in this case the proposed methods require $k+1$ sets of softmax operations with a size of $k$ each. Hence there are scalability problems when the class set is large. As we explained in the previous paragraph, we initialized the second soft-max layer using the confusion matrix of the baseline system. The confusion matrix is a good estimation of the label noise. Assume the rows of the matrix correspond to the true labels and the matrix columns correspond to the noisy labels. The $l$ largest elements in the $i$-th row are the most frequent noisy class values when the true class value is $i$. We can thus connect the $i$-th element in the first softmax layer only to its $l$ most probable noisy class candidates. Note that if we connect the $i$-th label in the first softmax only to the $i$-th label in the second softmax layer, the second softmax layer collapses to the identity and we obtain the standard baseline model. Taking the $l$ most likely connections to the second softmax layer, we allow an additional $l-1$ possible noisy labels for each correct label. We thus obtain a data-driven sparsification of the second softmax layer which solves the scalability problem, since the complexity becomes linear in the number of classes instead of quadratic. In the experiment section we show that by using this approach there is not much difference in performance.

Our architecture, which is based on a concatenation of softmax layers, resembles the hierarchical softmax approach of Morin & Bengio (2005) that replaces the flat softmax layer with a hierarchical layer that has the classes as leaves.
This allowed them to decompose calculating the probabilityof the class into a sequence of probability calculations, which saves us from having to calculatethe expensive normalization over all classes. The main difference between our approach and theirs(apart from the motivation) is that in our approach the true-label softmax layer is fully connectedto the noisy-label layer. Sukhbaatar & Fergus (2014) suggested adding a linear layer to handlenoisy labels. Their approach is similar to our s-model. In their approach, however, they proposed adifferent learning procedure.4 E XPERIMENTSIn this section, we evaluate the robustness of deep learning to training data with noisy labels withand without explicit noise modeling. We first show results on the MNIST data-set with injected label5Published as a conference paper at ICLR 2017(a) 20% dataset (b) 50% dataset(c) 100% datasetFigure 2: Test classification accuracy results on the MNIST dataset as a function of the noise level.The results are shown for several training data sizes (20%,50%,100%) of the training subset.noise in our experiments. The MNIST is a database of handwritten digits, which consists of 2828images. The dataset has 60k images for training and 10k images for testing. We used a two hiddenlayer NN comprised of 500 and 300 neurons. The non-linear activation we used was ReLU andwe used dropout with parameter 0.5. We trained the network using the Adam optimizer (Kingma& Ba (2014)) with default parameters, which we found to converge more quickly and effectivelythan SGD. We used a mini-batch size of 256. These settings were kept fixed for all the experimentsdescribed below. In addition to a network that is based on fully connected layers, we also applied anetwork based on a CNN architecture. The results we obtained in the two architectures were similar.The network we implemented is publicly available1.We generated noisy data from clean data by stochastically changing some of the labels. We con-verted each label with probability pto a different label according to a predefined permutation. Weused the same permutation as in Reed et al. (2014). The labels of the test data remained, of course,unperturbed to validate and compare our method to the regular approach.We compared the proposed noise robust models to other model training strategies. The first networkwas the baseline approach that ignores the fact that the labels of the training data are unreliable.Denote the observed noisy label by zand the softmax decision by q1;:::;q k. The baseline log-likelihood score (for a single input) is:S=Xi1fz=iglog(qi)1code available at https://github.com/udibr/noisy_labels6Published as a conference paper at ICLR 2017Figure 3: Test classification accuracy results on the CIFAR-100 dataset as a function of the noiselevel. The results are shown for several training data sizes (20%,50%,100%) of the training subsetfor a CNN network architecture).We also implemented two variants of the noise robust approach proposed by Reed et al. (2014).They suggested a soft versionS(1)H(q) =Xi1fz=iglog(qi) + (1)Xiqilog(qi)and a hard version:S+ (1) maxilog(qi)In their experiments they took = 0:8for the hard version and = 0:95for the soft version, andobserved that the hard version provided better results. Finally we implemented the two variants ofour approach; namely, the noise modeling based only on the labels (s-model) and the noise modelingthat was also based on the features (c-model).Figure 2 depicts the comparative test errors results as a function of the fractions of noise. 
The resultsare shown for three different sizes of training data i.e. (20%,50%,100%) of the MNIST trainingsubset. Bootstrapping was used to compute confidence intervals around the mean. For 1000 times,N= 10 samples were randomly drawn with repeats from the Navailable samples and mean wascomputed. The confidence interval was taken to be the 2.5% and 97.5% percentiles of this process.The results show that all the methods that are explicitly aware of the noise in the labels are betterthan the baseline which is the standard training approach. We revalidated the results reported in Reedet al. (2014) and showed that the hard version of their method performs better than the soft version.In all cases our models performed better than the alternatives. In most cases the c-model was betterthan the s-model. In the case where the entire dataset was used for training, we can see from theresults that there was a phase transition phenomenon. We obtained almost perfect classificationresults until the noise level was high and there was a sudden strong performance drop. Analyzingwhy this effect occurred is left for future research.We next show the results on the CIFAR-100 image dataset Krizhevsky & Hinton (2009) which con-sists of 3232color images arranged in 100 classes containing 600 images each. There are 500training images and 100 testing images per class. We used raw images directly without any pre-processing or augmentation. We generated noisy data from clean data by stochastically changingsome of the labels. We converted each one of the 100 labels with probability pto a different labelaccording to a predefined permutation. The labels of the test data remained, of course, unperturbedto validate and compare our method to the regular approach. We used a CNN network with twoconvolutional layers combined with ReLU activation and max-pooling, followed by two fully con-nected layers. Figure 3 depicts the comparative test errors results as a function of the fractionsof noise for three different sizes of training data i.e. (20%,50%,100%) of the CIFAR-100 training7Published as a conference paper at ICLR 2017Figure 4: Test classification accuracy results on the CIFAR-100 dataset as a function of the noiselevel. The results of regular and sparse second softmax layers are shown for several training datasizes (20%,50%,100%) of the training subset .subset. Bootstrapping was used to compute confidence intervals around the mean in the same wayas for the MNIST experiment. The results showed that the proposed method works better than thealternatives. The simple model consistently provided the best results but when the noise level wasvery high the complex method tended to perform better.We next report experimental results for the sparse variant of our method that remains efficient evenwhen the class set is large. We demonstrate this on the case of the CIFAR-100 dataset which consistsof 100 possible classes. For each class we only took the five most probable classes in the confusionmatrix which is used to initialize the model parameter (see Section 3). As can be seen in Figure 4,sparsifying the second softmax layer did not not result in a drop in performance5 C ONCLUSIONIn this paper we investigated the problem of training neural networks that are robust to label noise.We proposed an algorithm for training neural networks based solely on noisy data where the noisedistribution is unknown. 
We showed that we can reliably learn the noise distribution from the noisydata without using any clean data which, in many cases, are not available. The algorithm can beeasily combined with any existing deep learning implementation by simply adding another softmaxoutput layer. Our results encourage collecting more data at a cheaper price, since mistaken datalabels can be less harmful to performance. One possible future research direction would be togeneralize our learning scheme to cases where both the features and the labels are noisy. We showedresults on datasets with small and medium sized class-sets. Future research direction would be toevaluate the performance and efficiency of the proposed method on tasks with large class-sets.ACKNOWLEDGMENTSThis work is supported by the Intel Collaborative Research Institute for Computational Intelligence(ICRI-CI).REFERENCESU. Alon, N. Barkai, D. Notterman, K. Gish, S.and D. Mack, and A. Levine. Broad patterns ofgene expression revealed by clustering analysis of tumor and normal colon tissues probed byoligonucleotide arrays. Proceedings of the National Academy of Sciences , 96(12):6745–6750,1999.P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journalof the American Statistical Association , pp. 138–156, 2006.E. Beigman and B. B. Klebanov. Learning with annotation noise. In ACL-IJCNLP , 2009.8Published as a conference paper at ICLR 2017A. Bekker and J. Goldberger. Training deep neural-networks based on unreliable labels. In IEEEInt.l Conference on Acoustics, Speech and Signal Processing (ICASSP) , pp. 2682–2686, 2016.C. Brodley and M. Friedl. Identifying mislabeled training data. J. Artif. Intell. Res.(JAIR) , 11:131–167, 1999.B. Fr ́enay and A. Kaban. A comprehensive introduction to label noise. In European Symposium onArtificial Neural Networks, Computational Intelligence and Machine Learning (ESANN) , 2014.B. Fr ́enay and M. Verleysen. Classification in the presence of label noise: a survey. IEEE Trans. onNeural Networks and Learning Systems , 25(5):845–869, 2014.Y . Grandvalet and Y . Bengio. Semi-supervised learning by entropy minimization. In Advances inNeural Information Processing Systems (NIPS) , 2005.G.E. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. In NIPS DeepLearning and Representation Learning Workshop , 2014.B. Jakramate and A. Kab ́an. Label-noise robust logistic regression and its applications. In MachineLearning and Knowledge Discovery in Databases , pp. 143–158. 2012.D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 ,2014.A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technicalreport, Computer Science Department, University of Toronto, 2009.J. Larsen, L. Nonboe, M. Hintz-Madsen, and K. L. Hansen. Design of robust neural network classi-fiers. In Int. Conf. on Acoustics, Speech and Signal Processing , pp. 1205–1208, 1998.V . Minh and G. Hinton. Learning to label aerial images from noisy data. In Int. Conf. on MachineLearning (ICML) , 2012.F. Morin and Y . Bengio. Hierarchical probabilistic neural network language model. In Aistats ,volume 5, pp. 246–252, 2005.N. Natarajan, I. Dhillon, P. Ravikumar, and A. Tewari. Learning with noisy labels. In Advances inNeural Information Processing Systems (NIPS) , 2013.D. Nettleton, A. Orriols-Puig, and A. Fornells. A study of the effect of different types of noise onthe precision of supervised learning techniques. 
Artificial intelligence review , 2010.M. Pechenizkiy, A. Tsymbal, S. Puuronen, and O. Pechenizkiy. Class noise and supervised learn-ing in medical domains: The effect of feature extraction. In Computer-Based Medical Systems(CBMS) , 2006.S. Reed, H. Lee, D. Anguelov, C. Szegedy, D. Erhan, and A. Rabinovich. Training deep neuralnetworks on noisy labels with bootstrapping. In arXiv preprint arXiv:1412.6596 , 2014.S. Sukhbaatar and R. Fergus. Learning from noisy labels with deep neural networks. In arXivpreprint arXiv:1406.2080 , 2014.X. Zhu and X. Wu. Class noise vs. attribute noise: A quantitative study. Artificial IntelligenceReview , 22(3):177–210, 2004.9
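To complement the method description above, here is a small NumPy sketch of the noise-layer bias initializations discussed in Section 3 (the small-uniform-noise and the confusion-matrix variants). `predict_proba` again stands in for a baseline network trained directly on the noisy labels, and the smoothing constant is an assumption added here to avoid log(0); this is not the authors' code.

```python
# Hedged sketch of the two initializations of the noise-layer biases b_ij
# described in Section 3; all helper names are hypothetical.
import numpy as np

def init_bias_uniform(n_classes, eps=0.05):
    # b_ij = log((1 - eps) 1{i=j} + eps/(k-1) 1{i!=j})
    theta = np.full((n_classes, n_classes), eps / (n_classes - 1))
    np.fill_diagonal(theta, 1.0 - eps)
    return np.log(theta)

def init_bias_from_confusion(predict_proba, X, z_noisy, n_classes, smooth=1e-8):
    # b_ij = log( sum_t 1{z_t=j} p(y_t=i|x_t) / sum_t p(y_t=i|x_t) )
    p_y = predict_proba(X)                         # baseline network posteriors, (n, k)
    counts = np.zeros((n_classes, n_classes))
    for j in range(n_classes):
        counts[:, j] = p_y[z_noisy == j].sum(axis=0)
    theta = counts / (counts.sum(axis=1, keepdims=True) + smooth)
    return np.log(theta + smooth)
```

The resulting matrix would be copied into the bias parameters of the noise adaptation layer before joint training; for the sparse variant, only the $l$ largest entries in each row would be kept as trainable connections.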
BkL101z4e
H12GRgcxg
ICLR.cc/2017/conference/-/paper166/official/review
{"title": "Interesting paper but lack of experiments", "rating": "7: Good paper, accept", "review": "The paper addressed the erroneous label problem for supervised training. The problem is well formulated and the presented solution is novel. \n\nThe experimental justification is limited. The effectiveness of the proposed method is hard to gauge, especially how to scale the proposed method to large number of classification targets and whether it is still effective.\n\nFor example, it would be interesting to see whether the proposed method is better than training with only less but high quality data. \n\nFrom Figure 2, it seems with more data, the proposed method tends to behave very well when the noise fraction is below a threshold and dramatically degrades once passing that threshold. Analysis and justification of this behavior whether it is just by chance or an expected one of the method would be very useful. \n\n ", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Training deep neural-networks using a noise adaptation layer
["Jacob Goldberger", "Ehud Ben-Reuven"]
The availability of large datsets has enabled neural networks to achieve impressive recognition results. However, the presence of inaccurate class labels is known to deteriorate the performance of even the best classifiers in a broad range of classification problems. Noisy labels also tend to be more harmful than noisy attributes. When the observed label is noisy, we can view the correct label as a latent random variable and model the noise processes by a communication channel with unknown parameters. Thus we can apply the EM algorithm to find the parameters of both the network and the noise and to estimate the correct label. In this study we present a neural-network approach that optimizes the same likelihood function as optimized by the EM algorithm. The noise is explicitly modeled by an additional softmax layer that connects the correct labels to the noisy ones. This scheme is then extended to the case where the noisy labels are dependent on the features in addition to the correct labels. Experimental results demonstrate that this approach outperforms previous methods.
["Deep learning", "Optimization"]
https://openreview.net/forum?id=H12GRgcxg
https://openreview.net/pdf?id=H12GRgcxg
https://openreview.net/forum?id=H12GRgcxg&noteId=BkL101z4e
BksCTxHEe
H12GRgcxg
ICLR.cc/2017/conference/-/paper166/official/review
{"title": "This paper investigates how to make neural nets be more robust to noise in the labels", "rating": "5: Marginally below acceptance threshold", "review": "This paper looks at how to train if there are significant label noise present.\nThis is a good paper where two main methods are proposed, the first one is a latent variable model and training would require the EM algorithm, alternating between estimating the true label and maximizing the parameters given a true label.\n\nThe second directly integrates out the true label and simply optimizes the p(z|x).\n\nPros: the paper examines a training scenario which is a real concern for big dataset which are not carefully annotated.\nCons: the results on mnist is all synthetic and it's hard to tell if this would translate to a win on real datasets.\n\n- comments:\nEquation 11 should be expensive, what happens if you are training on imagenet with 1000 classes?\nIt would be nice to see how well you can recover the corrupting distribution parameter using either the EM or the integration method. \n\nOverall, this is an OK paper. However, the ideas are not novel as previous cited papers have tried to handle noise in the labels. I think the authors can make the paper better by either demonstrating state-of-the-art results on a dataset known to have label noise, or demonstrate that a method can reliably estimate the true label corrupting probabilities.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Training deep neural-networks using a noise adaptation layer
["Jacob Goldberger", "Ehud Ben-Reuven"]
The availability of large datsets has enabled neural networks to achieve impressive recognition results. However, the presence of inaccurate class labels is known to deteriorate the performance of even the best classifiers in a broad range of classification problems. Noisy labels also tend to be more harmful than noisy attributes. When the observed label is noisy, we can view the correct label as a latent random variable and model the noise processes by a communication channel with unknown parameters. Thus we can apply the EM algorithm to find the parameters of both the network and the noise and to estimate the correct label. In this study we present a neural-network approach that optimizes the same likelihood function as optimized by the EM algorithm. The noise is explicitly modeled by an additional softmax layer that connects the correct labels to the noisy ones. This scheme is then extended to the case where the noisy labels are dependent on the features in addition to the correct labels. Experimental results demonstrate that this approach outperforms previous methods.
["Deep learning", "Optimization"]
https://openreview.net/forum?id=H12GRgcxg
https://openreview.net/pdf?id=H12GRgcxg
https://openreview.net/forum?id=H12GRgcxg&noteId=BksCTxHEe
Published as a conference paper at ICLR 2017TRAINING DEEP NEURAL -NETWORKS USING A NOISEADAPTATION LAYERJacob Goldberger & Ehud Ben-ReuvenEngineering Faculty, Bar-Ilan University,Ramat-Gan 52900, Israeljacob.goldberger@biu.ac.il,udi.benreuven@gmail.comABSTRACTThe availability of large datsets has enabled neural networks to achieve impressiverecognition results. However, the presence of inaccurate class labels is known todeteriorate the performance of even the best classifiers in a broad range of classi-fication problems. Noisy labels also tend to be more harmful than noisy attributes.When the observed label is noisy, we can view the correct label as a latent ran-dom variable and model the noise processes by a communication channel withunknown parameters. Thus we can apply the EM algorithm to find the parametersof both the network and the noise and estimate the correct label. In this study wepresent a neural-network approach that optimizes the same likelihood function asoptimized by the EM algorithm. The noise is explicitly modeled by an additionalsoftmax layer that connects the correct labels to the noisy ones. This scheme isthen extended to the case where the noisy labels are dependent on the features inaddition to the correct labels. Experimental results demonstrate that this approachoutperforms previous methods.1 I NTRODUCTIONThe presence of class label noise inherent to training samples has been reported to deteriorate theperformance of even the best classifiers in a broad range of classification problems (Nettleton et al.(2010), Pechenizkiy et al. (2006), Zhu & Wu (2004)). Noisy labels also tend to be more harmfulthan noisy attributes (Zhu & Wu (2004)). Noisy data are usually related to the data collectionprocess. Typically, the labels used to train a classifier are assumed to be unambiguous and accurate.However, this assumption often does not hold since labels that are provided by human judgmentsare subjective. Many of the largest image datasets have been extracted from social networks. Theseimages are labeled by non-expert users and building a consistent model based on a precisely labeledtraining set is very tedious. Mislabeling examples have been reported even in critical applicationssuch as biomedical datasets where the available data are restricted (Alon et al. (1999)). A verycommon approach to noisy datasets is to remove the suspect samples in a preprocessing stage or havethem relabeled by a data expert (Brodley & Friedl (1999)). However, these methods are not scalableand may run the risk of removing crucial examples that can impact small datasets considerably.Variants that are noise robust have been proposed for the most common classifiers such as logistic-regression and SVM (Fr ́enay & Verleysen (2014), Jakramate & Kab ́an (2012), Beigman & Klebanov(2009)). However, classifiers based on label-noise robust algorithms are still affected by label noise.From a theoretical point of view, Bartlett et al. (2006) showed that most loss functions are not com-pletely robust to label noise. Natarajan et al. (2013) proposed a generic unbiased estimator for binaryclassification with noisy labels. They developed a surrogate cost function that can be expressed bya weighted sum of the original cost functions, and provided asymptotic bounds for performance.Grandvalet & Bengio (2005) addressed the problem of missing labels that can be viewed as an ex-treme case of noisy label data. 
They suggested a semi-supervised algorithm that encourages theclassifier to predict the non-labeled data with high confidence by adding a regularization term to thecost function. The problem of classification with label noise is an active research area. Comprehen-sive up-to-date reviews of both the theoretical and applied aspects of classification with label noisecan be found in Fr ́enay & Kaban (2014) and Fr ́enay & Verleysen (2014).1Published as a conference paper at ICLR 2017In spite of the huge success of deep learning there are not many studies that have explicitly attemptedto address the problem of Neural Net (NN) training using data with unreliable labels. Larsen et al.(1998) introduced a single noise parameter that can be calculated by adding a new regularizationterm and cross validation. Minh & Hinton (2012) proposed a more realistic noise model that de-pends on the true label. However, they only considered the binary classification case. Sukhbaatar& Fergus (2014) recently proposed adding a constrained linear layer at the top of the softmax layer,and showed that only under some strong assumptions can the linear layer be interpreted as the tran-sition matrix between the true and noisy (observed) labels and the softmax output layer as the trueprobabilities of the labels. Reed et al. (2014) suggested handling the unreliability of the training datalabels by maximizing the likelihood function with an additional classification entropy regularizationterm.The correct unknown label can be viewed as a hidden random variable. Hence, it is natural to applythe EM algorithm where in the E-step we estimate the true label and in the M-step we retrain thenetwork. Several variations of this paradigm have been proposed (e.g. Minh & Hinton (2012),Bekker & Goldberger (2016)). However, iterating between EM-steps and neural network trainingdoes not scale well. In this study we use latent variable probabilistic modeling but we optimize thelikelihood score function within the framework of neural networks. Current noisy label approachesassume either implicitly or explicitly that, given the correct label, the noisy label is independentof the feature vector. This assumption is probably needed to simplify the modeling and deriveapplicable learning algorithms. However, in many cases this assumption is not realistic since awrong annotation is more likely to occur in cases where the features are misleading. By contrast,our framework makes it easy to extend the proposed learning algorithm to the case where the noiseis dependent on both the correct label and the input features. In the next section we describe a modelformulation and review the EM based approach. In Section 3 we described our method which isbased on adding another softmax layer to the network and in Section 4 we present our results.2 A PROBABILISTIC FRAMEWORK FOR NOISY LABELSAssume we want to train a multi-class neural-network soft-classifier p(y=ijx;w)wherexis thefeature vector, wis the network parameter-set and iis a member of the class-set f1;:::;kg. Wefurther assume that in the training process we cannot directly observe the correct label y. Instead,we only have access to a noisy version of it denoted by z. Here we follow the probabilistic modelingand the EM learning approach described in Bekker & Goldberger (2016). In this approach noisegeneration is assumed to be independent of the features and is modeled by a parameter (i;j) =p(z=jjy=i). The noise distribution is unknown and we want to learn it as part of the trainingphase. 
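To make the noise-channel view above concrete, the following short Python sketch (my own illustration, not code from the paper) builds a k-by-k transition matrix theta with theta(i, j) = p(z = j | y = i) for a uniform label-flip channel and samples noisy labels z from clean labels y. The uniform-flip choice and the toy labels are assumptions for illustration only.

import numpy as np

def uniform_flip_matrix(k, eps):
    """theta[i, j] = p(z = j | y = i): keep the label with prob. 1 - eps,
    otherwise flip uniformly to one of the other k - 1 classes."""
    theta = np.full((k, k), eps / (k - 1))
    np.fill_diagonal(theta, 1.0 - eps)
    return theta

def corrupt_labels(y, theta, seed=0):
    """Sample a noisy label z_t ~ theta[y_t, :] for every clean label y_t."""
    rng = np.random.default_rng(seed)
    return np.array([rng.choice(len(theta), p=theta[label]) for label in y])

k = 10
theta = uniform_flip_matrix(k, eps=0.3)
y_clean = np.arange(k).repeat(5)          # toy "true" labels
z_noisy = corrupt_labels(y_clean, theta)  # what the classifier actually observes
print((y_clean != z_noisy).mean())        # empirical flip rate, roughly eps

In the setting of the paper only z is observed during training, while theta and y are unknown and must be recovered.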
The probability of observing a noisy label zgiven the feature vector xis:p(z=jjx;w;) =kXi=1p(z=jjy=i;)p(y=ijx;w) (1)wherekis the number of classes. The model is illustrated in the following diagram:Neural-Networkwnoisy channelx y zIn the training phase we are given nfeature vectors x1;:::;x nwith the corresponding noisy la-belsz1;:::;z nwhich are viewed as noisy versions of the correct hidden labels y1;:::;y n. The log-likelihood of the model parameters is:L(w;) =nXt=1log(kXi=1p(ztjyt=i;)p(yt=ijxt;w)) (2)Based on the training data, the goal is to find both the noise distribution and the Neural Networkparameterswthat maximize the likelihood function. Since the random variables y1;:::;y nare hid-den, we can apply the EM algorithm to find the maximum-likelihood parameter set. In the E-step of2Published as a conference paper at ICLR 2017each EM iteration we estimate the hidden true data labels based on the noisy labels and the currentparameters:cti=p(yt=ijxt;zt;w0;0); i = 1;:::;k; t = 1;:::;n (3)wherew0and0are the current parameter estimations. In the M-step we update both the NN andthe noisy channel parameters. The updated noise distribution has a closed-form solution.(i;j) =Ptcti1fzt=jgPtcti; i;j2f1;:::;kg (4)Thekkmatrixcan be viewed as a confusion matrix between the soft estimates of the true labelfctiji= 1;:::;kgand the observed noisy labels zt. As part of the EM M-step, to find the updatedNN parameter wwe need to maximize the following function:S(w) =nXt=1kXi=1ctilogp(yt=ijxt;w) (5)which is a soft-version of the likelihood function of the fully observed case, based on the currentestimate of the true labels. The back-propagation derivatives of the function (5) that we maximizein the M-step are:@S@ui=nXt=1(p(yt=ijxt;zt;w0;0)p(yt=ijxt;w))h(xt) (6)such thathis the final hidden layer and u1;:::;u kare the parameters of the soft-max output layer.The method reviewed here is closely related to the work of Minh & Hinton (2012). They addressedthe problem of mislabeled data points in a particular type of dataset (aerial images). The maindifference is that in their approach they assumed that they do not learn the noise parameter. Insteadthey assume that the noise model can be separately tuned using a validation set or set by hand. Notethat even if the true noise parameters are given, we still need the apply the EM iterative procedure.However, this assumption makes the interaction between the E-step and the NN learning mucheasier since each time a data-point xtis visited we can compute the p(yt=ijxt;zt)based on thecurrent network parameters and the pre-defined noise parameters. Motivated by the need for modelcompression, Hinton et al. (2014) introduced an approach to learn a “distilled” model by traininga more compact neural network to reproduce the output of a larger network. Using the notationdefined above, in the second training stage they actually optimized the cost function: S(w) =Pnt=1Pki=1p(yt=ijxt;w0;0) logp(yt=i;xt;w)such thatw0is the parameter of the largernetwork that was trained using the labels z1;:::;z n,wis the parameter of the smaller network and0(i;j)in this case is a non-informative distribution (i.e. 0(i;j) = 1=k).There are several drawbacks to the EM-based approach described above. The EM algorithm isa greedy optimization procedure that is notoriously known to get stuck in local optima. Anotherpotential issue with combining neural networks and EM direction is scalability. The frameworkrequires training a neural network in each iteration of the EM algorithm. 
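A compact sketch of one EM round for this model follows, under the assumption that model_probs(X) returns the current network's p(y | x) for all training points and train_network(X, C) re-fits the network on the soft targets C; both are hypothetical helpers standing in for an ordinary neural-network training step, not functions from the paper. The E-step and the closed-form theta update mirror Equations (3) and (4), and the soft cross-entropy target corresponds to Equation (5).

import numpy as np

def em_step(X, z, theta, model_probs, train_network):
    """One EM iteration for the noisy-label model.
    X: inputs, z: observed noisy labels (ints), theta[i, j] = p(z = j | y = i)."""
    p_y = model_probs(X)                      # (n, k): current p(y = i | x_t; w)
    n, k = p_y.shape

    # E-step (Eq. 3): c[t, i] is proportional to p(z_t | y = i) * p(y = i | x_t)
    c = p_y * theta[:, z].T                   # theta[:, z].T has shape (n, k)
    c /= c.sum(axis=1, keepdims=True)

    # M-step, noise part (Eq. 4): closed-form update of theta
    one_hot_z = np.eye(k)[z]                  # (n, k) indicator 1{z_t = j}
    theta_new = c.T @ one_hot_z
    theta_new /= theta_new.sum(axis=1, keepdims=True)

    # M-step, network part (Eq. 5): maximize sum_{t,i} c[t, i] log p(y = i | x_t; w)
    train_network(X, c)
    return theta_new

Each iteration therefore alternates a cheap posterior computation with a full network training pass, which is exactly the scalability concern raised next.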
For real-world, large-scalenetworks, even a single training iteration is a non-trivial challenge. Moreover, in many domains(e.g. object recognition in images) the number of labels is very large, so many EM iterations arelikely to be needed for convergence. Another drawback of the probabilistic models is that they arebased on the simplistic assumption that the noise error is only based on the true labels but not on theinput features. In this study we propose a method for training neural networks with noisy labels thatsuccessfully addresses all these problems.3 T RAINING DEEP NEURAL NETWORKS USING A NOISE ADAPTATION LAYERIn the previous section we utilized the EM algorithm to optimize the noisy-label likelihood function(2). In this section we describe an algorithm that optimizes the same function within the frameworkof neural networks. Assume the neural network classifier we are using is based on non-linear inter-mediate layers followed by a soft-max output layer used for soft classification. Denote the non-linear3Published as a conference paper at ICLR 2017function applied on an input xbyh=h(x)and denote the soft-max layer that predicts the true ylabel by:p(y=ijx;w) =exp(u>ih+bi)Pkl=1exp(u>lh+bl); i = 1;:::;k (7)wherewis the network parameter-set (including the softmax layer). We next add another softmaxoutput layer to predict the noisy label zbased on both the true label and the input features:p(z=jjy=i;x) =exp(u>ijh+bij)Plexp(u>ilh+bil)(8)p(z=jjx) =Xip(z=jjy=i;x)p(y=ijx) (9)We can also define a simplified version where the noisy label only depends on the true label; i.e. weassume that labels flips are independent of x:p(z=jjy=i) =exp(bij)Plexp(bil)(10)p(z=jjx) =Xip(z=jjy=i)p(y=ijx) (11)We denote the two noise modeling variants as the complex model (c-model) (8) and the simplemodel (s-model) (10). Hereafter we use the notation wnoisefor all the parameters of the secondsoftmax layer which can be viewed as a noise adaptation layer.In the training phase we are given nfeature vectors x1;:::;x nwith corresponding noisy labelsz1;:::;z nwhich are viewed as noisy versions of the correct hidden labels y1;:::;y n. The log-likelihood of the model parameters is:S(w;w noise) =Xtlogp(ztjxt) =Xtlog(Xip(ztjyt=i;xt;wnoise)p(yt=ijxt;w)) (12)Since the noise is modeled by adding another layer to the network, the score S(w;w noise)can beoptimized using standard techniques for neural network training. By settingp(z=jjy=i) =(i;j) =exp(bij)Plexp(bil); (13)it can easily verified that, by using either the EM algorithm (2) or the s-model neural networkscheme (12), we are actually optimizing exactly the same function. Thus the neural network withthe s-model noise adaptation layer provides an alternative optimization strategy to the EM algorithm.Instead of alternating between optimizing the noisy model and the network classifier, we considerthem as components of the same network and optimize them simultaneously.non-linear functionwsoft-maxwsoft-maxwnoisex h h, y znon-linear functionwsoft-maxwx h yFigure 1: An illustration of the noisy-label neural network architecture for the training phase (above)and test phase (below).4Published as a conference paper at ICLR 2017Note that in the c-model, where the noise is also dependent on the input features, we can still applythe EM algorithm to learn the parameters of the additional noise layer. 
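The s-model of Equations (10)-(12) can be written down directly as an extra softmax layer on top of a base classifier. The PyTorch sketch below is a reimplementation guess from those equations, not the authors' released code; the bias matrix plays the role of b_ij, it is initialized here from a small assumed uniform noise level (one of the initialization schemes discussed below), and the loss is the negative of the likelihood in Equation (12). At test time only the base classifier is used.

import math
import torch
import torch.nn as nn

class NoiseAdaptation(nn.Module):
    """s-model: p(z = j | y = i) = softmax_j(b_ij), independent of x."""
    def __init__(self, base_classifier, num_classes, noise_level=0.1):
        super().__init__()
        self.base = base_classifier                      # produces logits for the true label y
        # b_ij initialized assuming a small uniform label-noise level
        off = math.log(noise_level / (num_classes - 1))
        on = math.log(1.0 - noise_level)
        bias = torch.full((num_classes, num_classes), off)
        bias.fill_diagonal_(on)
        self.noise_bias = nn.Parameter(bias)

    def forward(self, x):
        p_y = torch.softmax(self.base(x), dim=1)         # (batch, k): p(y = i | x)
        noise = torch.softmax(self.noise_bias, dim=1)    # (k, k):     p(z = j | y = i)
        return p_y @ noise                               # (batch, k): p(z = j | x), Eq. (11)

def noisy_label_loss(p_z, z):
    """Negative log-likelihood of the observed noisy labels (Eq. 12)."""
    return -torch.log(p_z.gather(1, z.unsqueeze(1)) + 1e-12).mean()

The c-model would replace noise_bias by an input-dependent softmax over u_ij and b_ij as in Equation (8); at prediction time the noise adaptation layer is simply dropped and self.base(x) is used alone.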
However, there is no closed-form solution in the M-step for the optimal parameters and we need to apply neural-network trainingin the M-step to find the noise-layer parameters.At test time we want to predict the true labels. Hence, we remove the last softmax layer that aims toget rid of the noise in the training set. We compute the true-label softmax estimation p(y=ijx;w)(7). The proposed architecture for training the neural network based on training data with noisylabels is illustrated in Figure 1.There are degrees of freedom in the two softmax layer model. Hence, a careful initialization of theparameters of the noise adaptation layer is crucial for successful convergence of the network intoa good classifier of the correct labels at test time. We used the parameters of the original networkto initialize the parameters of the s-model network that contains the noise adaptation level. We caninitialize the softmax parameters of the s-model by assuming a small uniform noise:bij= log((1)1fi=jg+k11fi6=jg)such thatkis the number of different classes. A better procedure is to first train the original NNwithout the noise-adaptation layer, ignoring the fact that the labels are noisy. We can then treat thelabels produced by the NN as the true labels and compute the confusion matrix on the train set andused it as an initial value for the bias parameters:bij= log(Pt1fzt=jgp(yt=ijxt)Ptp(yt=ijxt))such thatx1;:::;x nare the feature vectors of the training dataset and z1;:::;z nare the correspondingnoisy labels. So far we have concentrated on parameter initialization for the s-model. The strategythat works best to initialize the c-model parameters is to use the parameters that were optimized forthe s-model. In other words we set linear terms uijto zero and initialize the bias terms bijwith thevalues that were optimized by the s-model.The computational complexity of the proposed method is quadratic in the size of the class-set. Sup-pose there are kclasses to predict, in this case the proposed methods require k+1sets of softmaxoperations with a size of keach. Hence there are scalability problems when the class set is large. Aswe explained in the previous paragraph, we initialized the second soft-max layer using the confusionmatrix of the baseline system. The confusion matrix is a good estimation of the label noise. Assumethe rows of the matrix correspond to the true labels and the matrix columns correspond to the noisylabels. Thellargest elements in the i-th row are the most frequent noisy class values when the trueclass value is i. We can thus connect the i-th element in the first softmax layer only to its lmostprobable noisy class candidates. Note that if we connect the i-th label in the first softmax only to thei-th label in the second softmax layer, the second softmax layer collapses to identity and we obtainthe standard baseline model. Taking the lmost likely connections to the second softmax layer, weallow an additional l1possible noisy labels for each correct label. We thus obtain a data drivensparsifying of the second softmax layer which solves the scalability problem since the complexitybecomes linear in the number of classes instead of quadratic. In the experiment section we showthat by using this approach there is not much deference in performance.Our architecture, which is based on a concatenation of softmax layers, resembles the hierarchicalsoftmax approach Morin & Bengio (2005) that replaces the flat softmax layer with a hierarchicallayer that has the classes as leaves. 
This allowed them to decompose calculating the probabilityof the class into a sequence of probability calculations, which saves us from having to calculatethe expensive normalization over all classes. The main difference between our approach and theirs(apart from the motivation) is that in our approach the true-label softmax layer is fully connectedto the noisy-label layer. Sukhbaatar & Fergus (2014) suggested adding a linear layer to handlenoisy labels. Their approach is similar to our s-model. In their approach, however, they proposed adifferent learning procedure.4 E XPERIMENTSIn this section, we evaluate the robustness of deep learning to training data with noisy labels withand without explicit noise modeling. We first show results on the MNIST data-set with injected label5Published as a conference paper at ICLR 2017(a) 20% dataset (b) 50% dataset(c) 100% datasetFigure 2: Test classification accuracy results on the MNIST dataset as a function of the noise level.The results are shown for several training data sizes (20%,50%,100%) of the training subset.noise in our experiments. The MNIST is a database of handwritten digits, which consists of 2828images. The dataset has 60k images for training and 10k images for testing. We used a two hiddenlayer NN comprised of 500 and 300 neurons. The non-linear activation we used was ReLU andwe used dropout with parameter 0.5. We trained the network using the Adam optimizer (Kingma& Ba (2014)) with default parameters, which we found to converge more quickly and effectivelythan SGD. We used a mini-batch size of 256. These settings were kept fixed for all the experimentsdescribed below. In addition to a network that is based on fully connected layers, we also applied anetwork based on a CNN architecture. The results we obtained in the two architectures were similar.The network we implemented is publicly available1.We generated noisy data from clean data by stochastically changing some of the labels. We con-verted each label with probability pto a different label according to a predefined permutation. Weused the same permutation as in Reed et al. (2014). The labels of the test data remained, of course,unperturbed to validate and compare our method to the regular approach.We compared the proposed noise robust models to other model training strategies. The first networkwas the baseline approach that ignores the fact that the labels of the training data are unreliable.Denote the observed noisy label by zand the softmax decision by q1;:::;q k. The baseline log-likelihood score (for a single input) is:S=Xi1fz=iglog(qi)1code available at https://github.com/udibr/noisy_labels6Published as a conference paper at ICLR 2017Figure 3: Test classification accuracy results on the CIFAR-100 dataset as a function of the noiselevel. The results are shown for several training data sizes (20%,50%,100%) of the training subsetfor a CNN network architecture).We also implemented two variants of the noise robust approach proposed by Reed et al. (2014).They suggested a soft versionS(1)H(q) =Xi1fz=iglog(qi) + (1)Xiqilog(qi)and a hard version:S+ (1) maxilog(qi)In their experiments they took = 0:8for the hard version and = 0:95for the soft version, andobserved that the hard version provided better results. Finally we implemented the two variants ofour approach; namely, the noise modeling based only on the labels (s-model) and the noise modelingthat was also based on the features (c-model).Figure 2 depicts the comparative test errors results as a function of the fractions of noise. 
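The label corruption used in these experiments can be reproduced in a few lines. The sketch below is my own, with an arbitrary cyclic permutation as a stand-in since the exact permutation of Reed et al. (2014) is not reproduced here; each training label is flipped with probability p to the class the fixed permutation assigns to it, so every flip yields a different label.

import numpy as np

def permutation_noise(y, p, perm, seed=0):
    """With probability p, replace label y_t by perm[y_t]; otherwise keep it."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y)
    flip = rng.random(len(y)) < p
    z = y.copy()
    z[flip] = perm[y[flip]]
    return z

num_classes = 10
perm = np.roll(np.arange(num_classes), 1)    # example permutation: class i -> i + 1 (mod 10)
y_train = np.random.randint(0, num_classes, size=60000)   # placeholder for the MNIST labels
z_train = permutation_noise(y_train, p=0.4, perm=perm)

The test labels are left untouched, so accuracy is always measured against the clean ground truth.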
The resultsare shown for three different sizes of training data i.e. (20%,50%,100%) of the MNIST trainingsubset. Bootstrapping was used to compute confidence intervals around the mean. For 1000 times,N= 10 samples were randomly drawn with repeats from the Navailable samples and mean wascomputed. The confidence interval was taken to be the 2.5% and 97.5% percentiles of this process.The results show that all the methods that are explicitly aware of the noise in the labels are betterthan the baseline which is the standard training approach. We revalidated the results reported in Reedet al. (2014) and showed that the hard version of their method performs better than the soft version.In all cases our models performed better than the alternatives. In most cases the c-model was betterthan the s-model. In the case where the entire dataset was used for training, we can see from theresults that there was a phase transition phenomenon. We obtained almost perfect classificationresults until the noise level was high and there was a sudden strong performance drop. Analyzingwhy this effect occurred is left for future research.We next show the results on the CIFAR-100 image dataset Krizhevsky & Hinton (2009) which con-sists of 3232color images arranged in 100 classes containing 600 images each. There are 500training images and 100 testing images per class. We used raw images directly without any pre-processing or augmentation. We generated noisy data from clean data by stochastically changingsome of the labels. We converted each one of the 100 labels with probability pto a different labelaccording to a predefined permutation. The labels of the test data remained, of course, unperturbedto validate and compare our method to the regular approach. We used a CNN network with twoconvolutional layers combined with ReLU activation and max-pooling, followed by two fully con-nected layers. Figure 3 depicts the comparative test errors results as a function of the fractionsof noise for three different sizes of training data i.e. (20%,50%,100%) of the CIFAR-100 training7Published as a conference paper at ICLR 2017Figure 4: Test classification accuracy results on the CIFAR-100 dataset as a function of the noiselevel. The results of regular and sparse second softmax layers are shown for several training datasizes (20%,50%,100%) of the training subset .subset. Bootstrapping was used to compute confidence intervals around the mean in the same wayas for the MNIST experiment. The results showed that the proposed method works better than thealternatives. The simple model consistently provided the best results but when the noise level wasvery high the complex method tended to perform better.We next report experimental results for the sparse variant of our method that remains efficient evenwhen the class set is large. We demonstrate this on the case of the CIFAR-100 dataset which consistsof 100 possible classes. For each class we only took the five most probable classes in the confusionmatrix which is used to initialize the model parameter (see Section 3). As can be seen in Figure 4,sparsifying the second softmax layer did not not result in a drop in performance5 C ONCLUSIONIn this paper we investigated the problem of training neural networks that are robust to label noise.We proposed an algorithm for training neural networks based solely on noisy data where the noisedistribution is unknown. 
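A sketch of the sparsification step described above, under my own reading of the text: take the confusion matrix between the baseline model's predictions and the observed noisy labels, keep only the l most probable noisy classes per true class, and use the log of the row-normalized matrix both to initialize and to mask the second softmax layer. The function names and the eps smoothing are assumptions for illustration.

import numpy as np

def sparse_noise_init(confusion, l=5, eps=1e-8):
    """confusion[i, j]: how often the baseline's (soft) prediction of class i
    co-occurs with noisy label j on the training set.
    Returns (bias_init, mask): log-probability initialization for b_ij and a 0/1
    connectivity mask keeping only the l most likely noisy labels per true class."""
    k = confusion.shape[0]
    probs = confusion / (confusion.sum(axis=1, keepdims=True) + eps)
    mask = np.zeros_like(probs)
    top_l = np.argsort(-probs, axis=1)[:, :l]            # indices of the l largest entries per row
    mask[np.arange(k)[:, None], top_l] = 1.0
    bias_init = np.log(probs + eps)
    return bias_init, mask

Applying the mask to the second softmax layer (for example by pinning masked logits to a large negative constant) keeps the added cost linear rather than quadratic in the number of classes, which is the point of the sparse variant evaluated on CIFAR-100.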
We showed that we can reliably learn the noise distribution from the noisydata without using any clean data which, in many cases, are not available. The algorithm can beeasily combined with any existing deep learning implementation by simply adding another softmaxoutput layer. Our results encourage collecting more data at a cheaper price, since mistaken datalabels can be less harmful to performance. One possible future research direction would be togeneralize our learning scheme to cases where both the features and the labels are noisy. We showedresults on datasets with small and medium sized class-sets. Future research direction would be toevaluate the performance and efficiency of the proposed method on tasks with large class-sets.ACKNOWLEDGMENTSThis work is supported by the Intel Collaborative Research Institute for Computational Intelligence(ICRI-CI).REFERENCESU. Alon, N. Barkai, D. Notterman, K. Gish, S.and D. Mack, and A. Levine. Broad patterns ofgene expression revealed by clustering analysis of tumor and normal colon tissues probed byoligonucleotide arrays. Proceedings of the National Academy of Sciences , 96(12):6745–6750,1999.P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journalof the American Statistical Association , pp. 138–156, 2006.E. Beigman and B. B. Klebanov. Learning with annotation noise. In ACL-IJCNLP , 2009.8Published as a conference paper at ICLR 2017A. Bekker and J. Goldberger. Training deep neural-networks based on unreliable labels. In IEEEInt.l Conference on Acoustics, Speech and Signal Processing (ICASSP) , pp. 2682–2686, 2016.C. Brodley and M. Friedl. Identifying mislabeled training data. J. Artif. Intell. Res.(JAIR) , 11:131–167, 1999.B. Fr ́enay and A. Kaban. A comprehensive introduction to label noise. In European Symposium onArtificial Neural Networks, Computational Intelligence and Machine Learning (ESANN) , 2014.B. Fr ́enay and M. Verleysen. Classification in the presence of label noise: a survey. IEEE Trans. onNeural Networks and Learning Systems , 25(5):845–869, 2014.Y . Grandvalet and Y . Bengio. Semi-supervised learning by entropy minimization. In Advances inNeural Information Processing Systems (NIPS) , 2005.G.E. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. In NIPS DeepLearning and Representation Learning Workshop , 2014.B. Jakramate and A. Kab ́an. Label-noise robust logistic regression and its applications. In MachineLearning and Knowledge Discovery in Databases , pp. 143–158. 2012.D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 ,2014.A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technicalreport, Computer Science Department, University of Toronto, 2009.J. Larsen, L. Nonboe, M. Hintz-Madsen, and K. L. Hansen. Design of robust neural network classi-fiers. In Int. Conf. on Acoustics, Speech and Signal Processing , pp. 1205–1208, 1998.V . Minh and G. Hinton. Learning to label aerial images from noisy data. In Int. Conf. on MachineLearning (ICML) , 2012.F. Morin and Y . Bengio. Hierarchical probabilistic neural network language model. In Aistats ,volume 5, pp. 246–252, 2005.N. Natarajan, I. Dhillon, P. Ravikumar, and A. Tewari. Learning with noisy labels. In Advances inNeural Information Processing Systems (NIPS) , 2013.D. Nettleton, A. Orriols-Puig, and A. Fornells. A study of the effect of different types of noise onthe precision of supervised learning techniques. 
Artificial intelligence review , 2010.M. Pechenizkiy, A. Tsymbal, S. Puuronen, and O. Pechenizkiy. Class noise and supervised learn-ing in medical domains: The effect of feature extraction. In Computer-Based Medical Systems(CBMS) , 2006.S. Reed, H. Lee, D. Anguelov, C. Szegedy, D. Erhan, and A. Rabinovich. Training deep neuralnetworks on noisy labels with bootstrapping. In arXiv preprint arXiv:1412.6596 , 2014.S. Sukhbaatar and R. Fergus. Learning from noisy labels with deep neural networks. In arXivpreprint arXiv:1406.2080 , 2014.X. Zhu and X. Wu. Class noise vs. attribute noise: A quantitative study. Artificial IntelligenceReview , 22(3):177–210, 2004.9
BJWfYjWVe
Sywh5KYex
ICLR.cc/2017/conference/-/paper113/official/review
{"title": "Interesting approach for optimizing network architecture ", "rating": "6: Marginally above acceptance threshold", "review": "The paper presents a layer architecture where a single parameter is used to gate the output response of layer to amplify or suppress it. It is shown that such an architecture can ease optimization of a deep network as it is easy to learn identity mappings in layers helping in better gradient propagation to lower layers (better supervision). \n\nUsing an introduced SDI metric it shown that gated residual networks can most easily learn identity mappings compared to other architectures. \n\nAlthough good theoretical reasoning is presented the observed experimental evidence of learned k values does not seem to strongly support the theory given that learned k values are mostly very small and not varying much across layers. Also, experimental validation of the approach is not quite strong in terms of reported performances and number of large scale experiments.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning Identity Mappings with Residual Gates
["Pedro H. P. Savarese", "Leonardo O. Mazza", "Daniel R. Figueiredo"]
We propose a layer augmentation technique that adds shortcut connections with a linear gating mechanism, and can be applied to almost any network model. By using a scalar parameter to control each gate, we provide a way to learn identity mappings by optimizing only one parameter. We build upon the motivation behind Highway Neural Networks and Residual Networks, where a layer is reformulated in order to make learning identity mappings less problematic to the optimizer. The augmentation introduces only one extra parameter per layer, and provides easier optimization by making degeneration into identity mappings simpler. Experimental results show that augmenting layers provides better optimization, increased performance, and more layer independence. We evaluate our method on MNIST using fully-connected networks, showing empirical indications that our augmentation facilitates the optimization of deep models, and that it provides high tolerance to full layer removal: the model retains over 90% of its performance even after half of its layers have been randomly removed. In our experiments, augmented plain networks -- which can be interpreted as simplified Highway Neural Networks -- outperform ResNets, raising new questions on how shortcut connections should be designed. We also evaluate our model on CIFAR-10 and CIFAR-100 using augmented Wide ResNets, achieving 3.65% and 18.27% test error, respectively.
["Computer vision", "Deep learning", "Optimization"]
https://openreview.net/forum?id=Sywh5KYex
https://openreview.net/pdf?id=Sywh5KYex
https://openreview.net/forum?id=Sywh5KYex&noteId=BJWfYjWVe
Under review as a conference paper at ICLR 2017LEARNING IDENTITY MAPPINGS WITH RESIDUALGATESPedro H. P. SavareseCOPPE/PESCFederal University of Rio de JaneiroRio de Janeiro, Brazilsavarese@land.ufrj.brLeonardo O. MazzaPoliFederal University of Rio de JaneiroRio de Janeiro, Brazilleonardomazza@poli.ufrj.brDaniel R. FigueiredoCOPPE/PESCFederal University of Rio de JaneiroRio de Janeiro, Brazildaniel@land.ufrj.brABSTRACTWe propose a layer augmentation technique that adds shortcut connections witha linear gating mechanism, and can be applied to almost any network model. Byusing a scalar parameter to control each gate, we provide a way to learn identitymappings by optimizing only one parameter. We build upon the motivation behindHighway Neural Networks and Residual Networks, where a layer is reformulatedin order to make learning identity mappings less problematic to the optimizer. Theaugmentation introduces only one extra parameter per layer, and provides easieroptimization by making degeneration into identity mappings simpler. Experimen-tal results show that augmenting layers provides better optimization, increasedperformance, and more layer independence. We evaluate our method on MNISTusing fully-connected networks, showing empirical indications that our augmen-tation facilitates the optimization of deep models, and that it provides high toler-ance to full layer removal: the model retains over 90% of its performance evenafter half of its layers have been randomly removed. In our experiments, aug-mented plain networks – which can be interpreted as simplified Highway NeuralNetworks – perform similarly to ResNets, raising new questions on how shortcutconnections should be designed. We also evaluate our model on CIFAR-10 andCIFAR-100 using augmented Wide ResNets, achieving 3:65% and18:27% testerror, respectively.1 I NTRODUCTIONAs the number of layers of neural networks increase, effectively training its parameters becomesa fundamental problem (Larochelle et al. (2009)). Many obstacles challenge the training of neuralnetworks, including vanishing/exploding gradients (Bengio et al. (1994)), saturating activation func-tions (Xu et al. (2016)) and poor weight initialization (Glorot & Bengio (2010)). Techniques such asunsupervised pre-training (Bengio et al. (2007)), non-saturating activation functions (Nair & Hinton(2010)) and normalization (Ioffe & Szegedy (2015)) target these issues and enable the training ofdeeper networks. However, stacking more than a dozen layers still lead to a hard to train model.Recently, models such as Residual Networks (He et al. (2015b)) and Highway Neural Networks(Srivastava et al. (2015)) permitted the design of networks with hundreds of layers. A key idea ofthese models is to allow for information to flow more freely through the layers, by using shortcutconnections between the layer’s input and output. This layer design greatly facilitates training,due to shorter paths between the lower layers and the network’s error function. In particular, thesemodels can more easily learn identity mappings in the layers, thus allowing the network to be deeper1Under review as a conference paper at ICLR 2017and learn more abstract representations (Bengio et al. (2012)). Such networks have been highlysuccessful in many computer vision tasks.On the theoretical side, it is suggested that depth contributes exponentially more to the represen-tational capacity of networks than width (Eldan & Shamir (2015) Telgarsky (2016) Bianchini &Scarselli (2014) Mont ́ufar et al. (2014)). 
This agrees with the increasing depth of winning architec-tures on challenges such as ImageNet (He et al. (2015b) Szegedy et al. (2014)).Increasing the depth of networks significantly increases its representational capacity and conse-quently its performance, an observation supported by theory (Eldan & Shamir (2015) Telgarsky(2016) Bianchini & Scarselli (2014) Mont ́ufar et al. (2014)) and practice (He et al. (2015b) Szegedyet al. (2014)). Moreover, He et al. (2015b) showed that, by construction, one can increase a net-work’s depth while preserving its performance. These two observations suggest that it suffices tostack more layers to a network in order to increase its performance. However, this behavior is notobserved in practice even with recently proposed models, in part due to the challenge of trainingever deeper networks.In this work we aim to improve the training of deep networks by proposing a layer augmentationthat builds on the idea of using shortcut connections, such as in Residual Networks and HighwayNeural Networks. The key idea is to facilitate the learning of identity mappings by introducing ashortcut connection with a linear gating mechanism , as illustrated in Figure 1. Note that the shortcutconnection is controlled by a gate that is parameterized with a scalar, k. This is a key differencefrom Highway Networks, where a tensor is used to regulate the shortcut connection, along with theincoming data. The idea of using a scalar is simple: it is easier to learn k= 0than to learn Wg= 0for a weight tensor Wgcontrolling the gate. Indeed, this single scalar allows for stronger supervisionon lower layers, by making gradients flow more smoothly in the optimization.x),(Wxfu)(kg1x),(WxfuFigure 1: Gating mechanism applied to the shortcut connection of a layer. The key difference withHighway Networks is that only a scalar kis used to regulate the gates instead of a tensor.We apply our proposed layer re-design to plain and residual layers, with the latter illustrated inFigure 2. Note that when augmenting a residual layer it becomes simply u=g(k)fr(x; W) +x,where frdenotes the layer’s residual function. Thus, the shortcut connection allows the input toflow freely without any interference of g(k)through the layer. In the next sections we will callaugmented plain networks (illustrated in Figure 1) Gated Plain Network and augmented residualnetworks (illustrated in Figure 2) Gated Residual Network, or GResNet. Again, note that in bothcases learning identity mappings is much easier in comparison to the original models.Note that layers that degenerated into identity mappings have no impact in the signal propagatingthrough the network, and thus can be removed without affecting performance. The removal of suchlayers can be seen as a transposed application of sparse encoding (Glorot et al. (2011)): transposingthe sparsity from neurons to layers provides a form to prune them entirely from the network. In-2Under review as a conference paper at ICLR 2017x),(Wxfru)(kgx),(Wxfru)(kg1),(WxfFigure 2: Proposed network design applied to Residual Networks. Note that the joint network designresults in a shortcut path where the input remains unchanged. In this case, g(k)can be interpretedas an amplifier or suppressor for the residual fr(x; W).deed, we show that performance decays slowly in GResNets when layers are removed, even whencompared to ResNets.We evaluate the performance of the proposed design in two experiments. 
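Before turning to those experiments, the augmentation of Figure 1 can be summarized in a few lines of code. The PyTorch sketch below is my own reading of the design, not the authors' implementation: a plain layer u = f(x; W) becomes u = g(k) f(x; W) + (1 - g(k)) x, with g = ReLU and a single scalar k per layer, initialized to 1 so that training starts from the original, ungated layer.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedPlainLayer(nn.Module):
    """u = g(k) * f(x) + (1 - g(k)) * x, with g = ReLU and one scalar k per layer."""
    def __init__(self, width):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(width, width),   # Dot-BN-ReLU plain layer
                               nn.BatchNorm1d(width),
                               nn.ReLU())
        self.k = nn.Parameter(torch.ones(1))               # k = 1 recovers the original layer

    def forward(self, x):
        g = F.relu(self.k)                                  # any k <= 0 gives g(k) = 0
        return g * self.f(x) + (1.0 - g) * x

Any k <= 0 turns the layer into an exact identity mapping, which is the one-parameter degeneration the augmentation is designed to make easy.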
First, we evaluate fully-connected Gated PlainNets and Gated ResNets on MNIST and compare them with their non-augmented counterparts, showing superior performance and robustness to layer removal. Second,we apply our layer re-design to Wide ResNets (Zagoruyko & Komodakis (2016)) and test its perfor-mance on CIFAR, obtaining results that are superior to all previously published results (to the bestof our knowledge). These findings indicate that learning identity mappings is a fundamental aspectof learning in deep networks, and designing models where this is easier seems highly effective.2 A UGMENTATION WITH RESIDUAL GATES2.1 T HEORETICAL INTUITIONRecall that a network’s depth can always be increased without affecting its performance – it sufficesto add layers that perform identity mappings. Consider a plain fully-connected ReLU network withlayers defined as u=ReLU (hx; Wi). When adding a new layer, if we initialize Wto the identitymatrix I, we have:u=ReLU (hx; Ii) =ReLU (x) =xThe last step holds since xis an output of a previous ReLU layer, and ReLU (ReLU (x)) =ReLU (x). Thus, adding more layers should only improve performance. However, how can a net-work with more layers learn to yield performance superior than a network with less layers? A keyobservation is that if learning identity mapping is easy, then the network with more layers is morelikely to yield superior performance, as it can more easily recover the performance of a smallernetwork through identity mappings.Figure 3: A network can have layers added to it without losing performance. Initially, a network hasmReLU layers with parameters fW1; : : : ; W mg. A new, (m+1)-th layer is added with Wm+1=I.This new layer will perform an identity mapping, therefore the two models are equivalent.The layer design of Highway Neural Networks and Residual Networks allows for deeper models tobe trained due to their shortcut connections. Note that in ResNets the identity mapping is learned3Under review as a conference paper at ICLR 2017when W= 0instead of W=I. Similarly, a Highway layer can degenerate into an identity mappingwhen the gating term T(x; W T)equals zero for all data points. Since learning identity mappingsin Highway Neural Networks strongly depends on the choice of the trasnform function T(and isnon-trivial when Tis the sigmoid function, since T1(0)is not defined) we will focus our analysison ResNets due to their simplicity. Considering a residual layer u=ReLU (hx; Wi) +x, we have:u=ReLU (hx;0i) +x=ReLU (0) + x=xIntuitively, residual layers can degenerate into identity mappings more effectively since learning anall-zero matrix is easier than learning the identity matrix. To support this argument, consider weightparameters randomly initialized with zero mean. Hence, the point W= 0 is located exactly in thecenter of the probability mass distribution used to initialize the weights.Recent work (Zhang et al. (2016)) suggests that the L2 norm of a critical point is an important factorregarding how easily the optimizer will reach it. More specifically, residual layers can be interpretedas a translation of the parameter set W=ItoW= 0, which is more accessible in the optimizationprocess due to its inferior L2 norm.However, assuming that residual layers can trivially learn the parameter set W= 0 implies ignor-ing the randomness when initializing the weights. We demonstrate this by calculating the expectedcomponent-wise distance between Woand the origin. Here, Wodenotes the weight tensor after ini-tialization and prior to any optimization. 
Note that the distance between Woand the origin capturesthe effort for a network to learn identity mappings:Eh(Wo0)2i=EhW2oi=V arhWoiNote that the distance is given by the distribution’s variance, and there is no reason to assume it to benegligible. Additionally, the fact that Residual Networks still suffer from optimization issues causedby depth (Huang et al. (2016a)) further supports this claim.Some initialization schemes propose a variance in the order of O(1n)(Glorot & Bengio (2010), Heet al. (2015a)), however this represents the distance for each individual parameter in W. For tensorswithO(n2)parameters, the total distance – either absolute or Euclidean – between Woand theorigin will be in the order of O(n).2.2 R ESIDUAL GATESAs previously mentioned, the key contribution in this work is the proposal of a layer augmentationtechnique where learning a single scalar parameter suffices in order for the layer to degenerate into anidentity mapping, thus making optimization easier for increased depths. As in Highway Networks,we propose the addition of gated shortcut connections. Our gates, however, are parameterized bya single scalar value, being easier to analyze and learn. For layers augmented with our technique,the effort required to learn identity mappings does not depend on any parameter, such as the layerwidth, in sharp contrast to prior models.Our design is as follows: a layer u=f(x; W)becomes u=g(k)f(x; W)+(1g(k))x, where kisa scalar parameter. This design is illustrated in Figure 1. Note that such layer can quickly degenerateby setting g(k)to0. Using the ReLU activation function as g, it suffices that k0forg(k) = 0 .By adding an extra parameter, the dimensionality of the cost surface also grows by one. This newdimension, however, can be easily understood due to the specific nature of the layer reformulation.The original surface is maintained on the k= 1slice, since the gated model becomes equivalent tothe original one. On the k= 0 slice we have an identity mapping, and the associated cost for allpoints in such slice is the same cost associated with the point fk= 1; W=Ig: this follows sinceboth parameter configurations correspond to identity mappings, therefore being equivalent. Lastly,due to the linear nature of g(k)and consequently of the gates, all other slices k6= 0; k6= 1will be alinear combination between the slices k= 0andk= 1.In addition to augmenting plain layers, we also apply our technique to residual layers. Althoughit might sound counterintuitive to add residual gates to a residual layer, we can see in Figure 2that our augmentation provides ResNets means to regulate the residuals, therefore a linear gating4Under review as a conference paper at ICLR 2017mechanism might not only allow deeper models, but could also improve performance. Having theoriginal design of a residual layer as:u=f(x; W) =fr(x; W) +xwhere fr(x; W)is the layer’s residual function – in our case, BN-ReLU-Conv-BN-ReLU-Conv .Our approach changes this layer by adding a linear gate, yielding:u=g(k)f(x; W) + (1g(k))x=g(k)(fr(x; W) +x) + (1g(k))x=g(k)fr(x; W) +xThe resulting layer maintains the shortcut connection unaltered, which according to He et al. (2016)is a desired property when designing residual blocks. As (1g(k))vanishes from the formulation,g(k)stops acting as a dual gating mechanism and can be interpreted as a flow regulator. Notethat this model introduces a single scalar parameter per layer block. 
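For the residual case the (1 - g(k)) term cancels, so the gate simply scales the residual branch while the shortcut path carries x unchanged. A sketch of the resulting block, using the BN-ReLU-Conv-BN-ReLU-Conv residual function named in the text (channel counts and kernel sizes are illustrative assumptions, not the exact Wide ResNet configuration):

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedResidualBlock(nn.Module):
    """u = g(k) * f_r(x) + x: a residual block whose residual is scaled by g(k)."""
    def __init__(self, channels):
        super().__init__()
        self.residual = nn.Sequential(
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
        )
        self.k = nn.Parameter(torch.ones(1))

    def forward(self, x):
        return F.relu(self.k) * self.residual(x) + x

Here k <= 0 collapses the block to an identity mapping without touching its weights, and otherwise g(k) acts as the amplifier or suppressor of the residual signal described above.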
This new dimension can beinterpreted as discussed above, except that the slice k= 0 is equivalent tofk= 1; W= 0g, sincean identity mapping is learned when W= 0in ResNets.3 E XPERIMENTSAll models were implemented on Keras (Chollet (2015)) or on Torch (Collobert et al. (2011)), andwere executed on a Geforce GTX 1070. Larger models or more complex datasets, such as theImageNet (Russakovsky et al. (2015)), were not explored due to hardware limitations.3.1 MNISTThe MNIST dataset (Lecun et al. (1998)) is composed of 60;000greyscale images with 2828pixels. Images represent handwritten digits, resulting in a total of 10 classes. We trained four typesof fully-connected models: classical plain networks, ResNets, Gated Plain networks and GatedResNets.The networks consist of a linear layer with 50 neurons, followed by dlayers with 50 neurons each,and lastly a softmax layer for classification. Only the dmiddle layers differ between the four archi-tectures – the first linear layer and the softmax layer are the same in all experiments.For plain networks, each layer performs dot product, followed by Batch Normalization and a ReLUactivation function.Initial tests with pre-activations (He et al. (2016)) resulted in poor performance on the validationset, therefore we opted for the traditional Dot-BN-ReLU layer when designing Residual Networks.Each residual block consists of two layers, as conventional.All networks were trained using Adam (Kingma & Ba (2014)) with Nesterov momentum (Dozat)for a total of 100 epochs using mini-batches of size 128. No learning rate decay was used: we keptthe learning rate and momentum fixed to 0:002and0:9during the whole training.For preprocessing, we divided each pixel value by 255, normalizing their values to [0;1].The training curves for plain networks, Gated PlainNets, ResNets and Gated ResNets with varyingdepth are shown in Figure 4. The distance between the curves increase with the depth, showing thatthe augmentation helps the training of deeper models.Table 1 shows the test error for each depth and architecture. Augmented models perform better inall settings when compared to the original ones, and the performance boost is more noticeable withincreased depths. Interestingly, Gated PlainNets performed better than ResNets, suggesting that thereason for Highway Neural Networks to underperform ResNets might be due to an overly complexgating mechanism.5Under review as a conference paper at ICLR 2017Figure 4: Train loss for plain and residual networks, along with their augmented counterparts, withd=f2;10;20;50;100g. As the models get deeper, the error reduction due to the augmentationincreases.Depth = d+ 2 Plain ResNet Gated PlainNet Gated ResNetd= 2 2.29 2.20 2.04 2.17d= 10 2.22 1.64 1.78 1.60d= 20 2.21 1.61 1.59 1.57d= 50 60.37 1.62 1.36 1.48d= 100 90.20 1.50 1.29 1.26Table 1: Test error (%) on the MNIST dataset for fully-connected networks. Augmented modelsoutperform their original counterparts in all experiments. Non-augmented plain networks performworse and fail to converge for d= 50 andd= 100 .Depth = d+ 2 Gated PlainNet Gated ResNetd= 2 10.57 5.58d= 10 1.19 2.54d= 20 0.64 1.73d= 50 0.46 1.04d= 100 0.41 0.67Table 2: Mean kfor increasingly deep Gated PlainNets and Gated ResNets.As observed in Table 2, the mean values of kdecrease as the model gets deeper, showing thatshortcut connections have less impact on shallow networks. This agrees with empirical results thatResNets perform better than classical plain networks as the depth increases. 
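As a usage example, the fully-connected MNIST models can be assembled by stacking d gated layers between the first linear layer and the softmax classifier. The sketch below reuses the GatedPlainLayer module from the earlier sketch and follows the widths, learning rate, and batch-size-free setup reported in the text; the optimizer line is an approximation, since the paper uses Adam with Nesterov momentum (i.e., Nadam) rather than plain Adam.

import torch
import torch.nn as nn

def gated_mlp(d, width=50, num_classes=10):
    """First linear layer, d gated 50-unit layers, then a softmax classifier."""
    layers = [nn.Flatten(), nn.Linear(28 * 28, width)]
    layers += [GatedPlainLayer(width) for _ in range(d)]   # module sketched earlier
    layers += [nn.Linear(width, num_classes)]              # CrossEntropyLoss applies the softmax
    return nn.Sequential(*layers)

model = gated_mlp(d=100)
optimizer = torch.optim.Adam(model.parameters(), lr=0.002)  # paper: Adam with Nesterov momentum
criterion = nn.CrossEntropyLoss()

Swapping GatedPlainLayer for a gated residual block gives the Gated ResNet variant of the same experiment.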
Note that the significantdifference between mean values for kin Gated PlainNets and Gated ResNets has an intuitive expla-nation: in order to suppress the residual signal against the shortcut connection, Gated PlainNetsrequire that k <0:5(otherwise the residual signal will be enhanced). Conversely, Gated ResNetssuppress the residual signal when k <1:0, and enhance it otherwise.We also analyzed how layer removal affects ResNets and Gated ResNets. We compared how thedeepest networks ( d= 100 ) behave as residual blocks composed of 2 layers are completely removedfrom the models. The final values for each kparameter, according to its corresponding residualblock, is shown in Figure 5. We can observe that layers close to the middle of the network have asmaller kthan these in the beginning or the end. Therefore, the middle layers have less importanceby due to being closer to identity mappings.6Under review as a conference paper at ICLR 2017Figure 5: Left: Values for kaccording to ascending order of residual blocks. The first block,consisted of the first two layers of the network, has index 1, while the last block – right before thesoftmax layer – has index 50. Right: Test accuracy (%) according to the number of removed layers.Gated Residual Networks are more robust to layer removal, and maintain decent results even afterhalf of the layers have been removed.Results are shown in Figure 5. For Gated Residual Networks, we prune pairs of layers followingtwo strategies. One consists of pruning layers in a greedy fashion, where blocks with the smallest kare removed first. In the other we remove blocks randomly. We present results using both strategiesfor Gated ResNets, and only random pruning for ResNets since they lack the kparameter.The greedy strategy is slightly better for Gated Residual Networks, showing that the kparameteris indeed a good indicator of a layer’s importance for the model, but that layers tend to assume thesame level of significance. In a fair comparison, where both models are pruned randomly, GatedResNets retain a satisfactory performance even after half of its layers have been removed, whileResNets suffer performance decrease after just a few layers.Therefore augmented models are not only more robust to layer removal, but can have a fair shareof their layers pruned and still perform well. Faster predictions can be generated by using a prunedversion of an original model.3.2 CIFARThe CIFAR datasets (Krizhevsky (2009)) consists of 60;000color images with 3232pixels each.CIFAR-10 has a total of 10 classes, including pictures of cats, birds and airplanes. The CIFAR-100dataset is composed of the same number of images, however with a total of 100 classes.Residual Networks have surpassed state-of-the-art results on CIFAR. We test Gated ResNets, WideGated ResNets (Zagoruyko & Komodakis (2016)) and compare them with their original, non-augmented models.For pre-activation ResNets, as described in He et al. (2016), we follow the original implementationdetails. We set an initial learning rate of 0.1, and decrease it by a factor of 10 after 50% and75% epochs. SGD with Nesterov momentum of 0.9 are used for optimization, and the only pre-processing consists of mean subtraction. Weight decay of 0.0001 is used for regularization, andBatch Normalization’s momentum is set to 0.9.We follow the implementation from Zagoruyko & Komodakis (2016) for Wide ResNets. The learn-ing rate is initialized as 0.1, and decreases by a factor of 5 after 30%, 60% and 80% epochs. 
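Returning to the layer-removal analysis above: since k directly measures how far a block is from an identity mapping, the greedy pruning strategy can simply drop the blocks with the smallest g(k). A sketch, under the assumption that the model is an nn.Sequential whose gated blocks expose a scalar parameter named k as in the earlier sketches:

import torch
import torch.nn as nn

def prune_smallest_k(model, num_blocks_to_remove):
    """Remove the gated blocks whose g(k) is closest to zero, i.e. closest to identity."""
    blocks = [(i, m) for i, m in enumerate(model) if hasattr(m, "k")]
    blocks.sort(key=lambda pair: float(torch.relu(pair[1].k)))   # smallest g(k) first
    to_drop = {i for i, _ in blocks[:num_blocks_to_remove]}
    kept = [m for i, m in enumerate(model) if i not in to_drop]
    return nn.Sequential(*kept)

In the reported experiments this greedy rule is only slightly better than random removal, but the gated model retains most of its accuracy either way, which is what makes pruning attractive for faster prediction.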
Imagesare mean/std normalized, and a weight decay of 0.0005 is used for regularization. We also apply 0.3dropout (Srivastava et al. (2014)) between convolutions, whenever specified. All other details arethe same as for ResNets.7Under review as a conference paper at ICLR 2017For both architectures we use moderate data augmentation: images are padded with 4 pixels, and wetake random crops of size 3232during training. Additionally, each image is horizontally flippedwith50% probability. We use batch size 128 for all experiments.For all gated networks, we initialize kwith a constant value of 1. One crucial question is whetherweight decay should be applied to the kparameters. We call this ” kdecay”, and also compare GatedResNets and Wide Gated ResNets when it is applied with the same magnitude of the weight decay:0.0001 for Gated ResNet and 0.0005 for Wide Gated ResNet.Model Original Gated Gated ( kdecay)Resnet 5 7.16 6.67 7.04Wide ResNet (4,10) + Dropout 3.89 3.65 3.74Table 3: Test error (%) on the CIFAR-10 dataset, for ResNets, Wide ResNets and their augmentedcounterparts. kdecay is when weight decay is also applied to the kparameters in an augmentednetwork. Results for the original models are as reported in He et al. (2015b) and Zagoruyko &Komodakis (2016).Table 3 shows the test error for two architectures: a ResNet with n= 5, and a Wide ResNet withn= 4,n= 10 . Augmenting each model adds 15 and 12 parameters, respectively. We observe thatkdecay hurts performance in both cases, indicating that they should either remain unregularized orsuffer a more subtle regularization compared to the weight parameters. Due to its direct connectionto layer degeneration, regularizing kresults in enforcing identity mappings, which might harm themodel.Due to the indications that a regularization on the kparameter results in a negative impact on themodel’s performance, we proceed to test other models – having different depths and widening fac-tors – with the goal of evaluating the effectiveness of our proposed augmentation. Tables 4 and 5show that augmented Wide ResNets outperform the original models without changing any hyperpa-rameter, both on CIFAR-10 and CIFAR-100.Model Original GatedWide ResNet (2,4) 5.02 4.66Wide ResNet (4,10) 4.00 3.82Wide ResNet (4,10) + Dropout 3.89 3.65Wide ResNet (8,1) 6.43 6.10Wide ResNet (6,10) + Dropout 3.80 3.63Table 4: Test error (%) on the CIFAR-10 dataset, for Wide ResNets and their augmented counter-parts. Results for non-gated Wide ResNets are from Zagoruyko & Komodakis (2016).Model Original GatedWide ResNet (2,4) 24.03 23.29Wide ResNet (4,10) 19.25 18.89Wide ResNet (4,10) + Dropout 18.85 18.27Wide ResNet (8,1) 29.89 28.20Table 5: Test error (%) on the CIFAR-100 dataset, for Wide ResNets and their augmented counter-parts. Results for non-gated Wide ResNets are from Zagoruyko & Komodakis (2016).As in the previous experiment, in Figure 6 we present the final kvalues for each block, in this case ofthe Wide ResNet (4,10) on CIFAR-10. We can observe that the kvalues follow an intriguing pattern:the lowest values are for the blocks of index 1,5and9, which are exactly the ones that increase thefeature map dimension. 
This indicates that, in such residual blocks, the convolution performedin the shortcut connection to increase dimension is more important than the residual block itself.Additionally, the peak value for the last residual block suggests that its shortcut connection is oflittle importance, and could as well be fully removed without greatly impacting the model.Figure 7 shows the loss curves for Gated Wide ResNet (4,10) + Dropout, both on CIFAR-10 andCIFAR-100. The optimization behaves similarly to the original model, suggesting that the gates do8Under review as a conference paper at ICLR 2017Figure 6: Values for kaccording to ascending order of residual blocks. The first block, consisted ofthe first two layers of the network, has index 1, while the last block – right before the softmax layer– has index 12.Figure 7: Training and test curves for the Wide ResNet (4,10) with 0.3 dropout, showing error (%)on training and test sets. Dashed lines represent training error, whereas solid lines represent testerror.not have any side effects on the network. The performance gains presented on Table 4 point that,however predictable and extremely simple, our augmentation technique is powerful enough to aidon the optimization of state-of-the-art models.Results of different models on the CIFAR datasets are shown in Table 6. The training and test errorsare presented in Figure 7. To the authors’ knowledge, those are the best results on CIFAR-10 andCIFAR-100 with moderate data augmentation – only random flips and translations.3.3 I NTERPRETATIONGreff et al. (2016) showed how Residual and Highway layers can be interpreted as performingiterative refinements on learned representations. In this view, there is a connection on a layer’slearned parameters and the level of refinement applied on its input: for Highway Neural Networks,T(x)having components close to 1results in a layer that generates completely new representations.As seen before, components close to 0result in an identity mapping, meaning that the representationsare not refined at all.9Under review as a conference paper at ICLR 2017Method Params C10+ C100+Network in Network (Lin et al. (2013)) - 8.81 -FitNet (Romero et al. (2014)) - 8.39 35.04Highway Neural Network (Srivastava et al. (2015)) 2.3M 7.76 32.39All-CNN (Springenberg et al. (2014)) - 7.25 33.71ResNet-110 (He et al. (2015b)) 1.7M 6.61 -ResNet in ResNet (Targ et al. (2016)) 1.7M 5.01 22.90Stochastic Depth (Huang et al. (2016a)) 10.2M 4.91 -ResNet-1001 (He et al. (2016)) 10.2M 4.62 22.71FractalNet (Larsson et al. (2016)) 38.6M 4.60 23.73Wide ResNet (4,10) (Zagoruyko & Komodakis (2016)) 36.5M 3.89 18.85DenseNet (Huang et al. (2016b)) 27.2M 3.74 19.25Wide GatedResNet (4,10) + Dropout 36.5M 3.65 18.27Table 6: Test error (%) on the CIFAR-10 and CIFAR-100 dataset. All results are with standard dataaugmentation (crops and flips).However, the dependency of T(x)on the incoming data makes it difficult to analyze the level of re-finement performed by a layer given its parameters. This is more clearly observed once we considerhow each component of T(x)is a function not only on the parameter set WT, but also on x.In particular, given the mapping performed by a layer, we can estimate how much more abstractits representations are compared to the inputs. For our technique, this estimation can be done byobserving the kparameter of the corresponding layer: in Gated PlainNets, k= 0 corresponds toan identity mapping, and therefore there is no modification on the learned representations. 
Fork= 1, the shortcut connection is ignored and therefore a jump in the representation’s complexity isobserved.For Gated ResNets, the shortcut connection is never completely ignored in the generation of output.However, we can see that as kgrows to infinity the shortcut connection’s contribution goes to zero,and the learned representation becomes more abstract compared to the layer’s inputs.Table 6 shows how the layers that change the data dimensionality learn more abstract representationscompared to dimensionality-preserving layers, which agrees with Greff et al. (2016). The last layer’skvalue, which is the biggest among the whole model, indicates a severe jump in the abstraction ofits representation, and is intuitive once we see the model as being composed of two main stages: aconvolutional one and a fully-connected one, specific for classification.Finally, Table 2 shows that the abstraction jumps decrease as the model grows deeper and the per-formance increases. This agrees with the idea that depth allows for more refined representations tobe learned. We believe that an extensive analysis on the rate that these measures – depth, abstractionjumps and performance – interact with each other could bring further understanding on the practicalbenefits of depth in networks.4 C ONCLUSIONWe have proposed a novel layer augmentation technique that facilitates the optimization of deepnetworks by making identity mappings easy to learn. Unlike previous models, layers augmented byour technique require optimizing only one parameter to degenerate into identity, and by designingour method such that randomly initialized parameter sets are always close to identity mappings, ourdesign offers less optimization issues caused by depth.Our experiments showed that augmenting plain and residual layers improves performance and fa-cilitates learning in settings with increased depth. In the MNIST dataset, augmented plain networksoutperformed ResNets, suggesting that models with gated shortcut connections – such as HighwayNeural Networks – could be further improved by redesigning the gates.We have shown that applying our technique to ResNets yield a model that can regulate the resid-uals. This model performed better in all our experiments with negligible extra training time and10Under review as a conference paper at ICLR 2017parameters. Lastly, we have shown how it can be used for layer pruning, effectively removing largenumbers of parameters from a network without necessarily harming its performance.REFERENCESY . Bengio, P. Simard, and P Frasconi. Learning long-term dependencies with gradient descent isdifficult. IEEE Transactions on Neural Networks , 1994.Y . Bengio, P. Lamblin, D Popovici, and H Larochelle. Greedy layer-wise training of deep networks.NIPS , 2007.Y . Bengio, A. Courville, and P. Vincent. Representation Learning: A Review and New Perspectives.ArXiv e-prints , June 2012.Monica Bianchini and Franco Scarselli. On the complexity of neural network classifiers: A com-parison between shallow and deep architectures. IEEE Transactions on Neural Networks andLearning Systems , 25(8):1553 – 1565, 2014. doi: 10.1109/TNNLS.2013.2293637.Franois Chollet. keras. https://github.com/fchollet/keras , 2015.Ronan Collobert, Koray Kavukcuoglu, and Cl ́ement Farabet. Torch7: A matlab-like environmentfor machine learning. In BigLearn, NIPS Workshop , 2011.Timothy Dozat. Incorporating nesterov momentum into adam.R. Eldan and O. Shamir. The Power of Depth for Feedforward Neural Networks. ArXiv e-prints ,December 2015.X. 
Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In Geoffrey J. Gordon and David B. Dunson (eds.), Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS-11), volume 15, pp. 315–323. Journal of Machine Learning Research - Workshop and Conference Proceedings, 2011. URL http://www.jmlr.org/proceedings/papers/v15/glorot11a/glorot11a.pdf
K. Greff, R. K. Srivastava, and J. Schmidhuber. Highway and Residual Networks learn Unrolled Iterative Estimation. ArXiv e-prints, December 2016.
K. He, X. Zhang, S. Ren, and J. Sun. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. ArXiv e-prints, February 2015a.
K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. ArXiv e-prints, December 2015b.
K. He, X. Zhang, S. Ren, and J. Sun. Identity Mappings in Deep Residual Networks. ArXiv e-prints, March 2016.
G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Weinberger. Deep Networks with Stochastic Depth. ArXiv e-prints, March 2016a.
J. Huang, Z. Liu, and Q. Weinberger. Densely connected convolutional networks. ArXiv e-prints, 2016b.
S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. ICML, 2015.
D. Kingma and J. Ba. Adam: A Method for Stochastic Optimization. ArXiv e-prints, December 2014.
Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.
Hugo Larochelle, Yoshua Bengio, Jérôme Louradour, and Pascal Lamblin. Exploring strategies for training deep neural networks. J. Mach. Learn. Res., 10:1–40, June 2009. ISSN 1532-4435. URL http://dl.acm.org/citation.cfm?id=1577069.1577070
G. Larsson, M. Maire, and G. Shakhnarovich. FractalNet: Ultra-Deep Neural Networks without Residuals. ArXiv e-prints, May 2016.
Yann Lecun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, pp. 2278–2324, 1998.
M. Lin, Q. Chen, and S. Yan. Network In Network. ArXiv e-prints, December 2013.
G. Montúfar, R. Pascanu, K. Cho, and Y. Bengio. On the Number of Linear Regions of Deep Neural Networks. ArXiv e-prints, February 2014.
Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted boltzmann machines. In Johannes Fürnkranz and Thorsten Joachims (eds.), Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807–814. Omnipress, 2010. URL http://www.icml2010.org/papers/432.pdf
A. Romero, N. Ballas, S. Ebrahimi Kahou, A. Chassang, C. Gatta, and Y. Bengio. FitNets: Hints for Thin Deep Nets. ArXiv e-prints, December 2014.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y.
J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for Simplicity: The All Convolutional Net. ArXiv e-prints, December 2014.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958, 2014. URL http://jmlr.org/papers/v15/srivastava14a.html
Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Training very deep networks. CoRR, abs/1507.06228, 2015. URL http://arxiv.org/abs/1507.06228
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott E. Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. CoRR, abs/1409.4842, 2014. URL http://arxiv.org/abs/1409.4842
S. Targ, D. Almeida, and K. Lyman. Resnet in Resnet: Generalizing Residual Architectures. ArXiv e-prints, March 2016.
M. Telgarsky. Benefits of depth in neural networks. ArXiv e-prints, February 2016.
B. Xu, R. Huang, and M. Li. Revise Saturated Activation Functions. ArXiv e-prints, February 2016.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. CoRR, abs/1605.07146, 2016. URL http://arxiv.org/abs/1605.07146
C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requires rethinking generalization. ArXiv e-prints, November 2016.
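[Editorial note] As an aid to the gated residual design described in the paper text above, here is a minimal, hypothetical PyTorch-style sketch of a scalar-gated residual block computing u = ReLU(k) * f_r(x; W) + x. The paper's own experiments used Keras and Torch7; the residual branch ordering (BN-ReLU-Conv-BN-ReLU-Conv) and the initialization of k to 1 follow the description above, while the class name and channel handling are assumptions.

```python
# Hedged sketch of a scalar-gated residual block: u = g(k) * f_r(x; W) + x,
# with g = ReLU and k initialized to 1. Not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Residual branch f_r: BN-ReLU-Conv-BN-ReLU-Conv, as described in the paper.
        self.residual = nn.Sequential(
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
        )
        # One scalar gate parameter per block, initialized to 1.
        self.k = nn.Parameter(torch.ones(1))

    def forward(self, x):
        # g(k) = ReLU(k): any k <= 0 suppresses the residual and leaves x unchanged.
        return F.relu(self.k) * self.residual(x) + x
```

Consistent with the paper's observation about "k decay", weight decay would be applied to the convolution weights but not (or only weakly) to k.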
SkgnrXM4x
Sywh5KYex
ICLR.cc/2017/conference/-/paper113/official/review
{"title": "A simple gating mechanism", "rating": "5: Marginally below acceptance threshold", "review": "This paper proposes to learn a single scalar gating parameter instead of a full gating tensor in highway networks. The claim is that such gating is easier to learn and allows a network to flexibly utilize computation.\n\nThe basic idea of the paper is simple and is clearly presented. It is a natural simplification of highway networks to allow easily \"shutting off\" layers while keeping number of additional parameters low. However, in this regard the paper leaves out a few key points. Firstly, it does not mention that the gates in highway networks are data-dependent which is potentially more powerful than learning a fixed gate for all units and independent of data. Secondly, it does not do a fair comparison with highway networks to show that this simpler formulation is indeed easier to learn.\n\nDid the authors try their original design of u = g(k)f(x) + (1 - g(k))x where f(x) is a plain layer instead of a residual layer? Based on the arguments made in the paper, this should work fine. Why wasn't it tested? If it doesn't work, are the arguments incorrect or incomplete?\n\nFor the MNIST experiments, since the hyperparameters are fixed, the plots are misleading if any dependence on hyperparameters exists for the different models. This experiment appears to be based on Srivastava et al (2015). If it is indeed designed to test optimization at aggressive depths, then apart from doing a hyperparameter search, the authors should not use regularization such as dropout or batch norm, which do not appear in the theoretical arguments for the architecture.\n\nFor CIFAR experiments, the obtained improvements compared to the baseline (wide resnets) are very small and therefore it is important to report the standard deviations (or all results) in both cases. It's not clear that the differences are significant.\n\nSome questions regarding g(): Was g() always ReLU? Doesn't this have potential problems with g(k) becoming 0 and never recovering? Does this also mean that for the wide resnet in Fig 7, most residual blocks are zeroed out since k < 0?", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
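[Editorial note] The review above questions the plain-layer variant of the gate, u = g(k) f(x) + (1 - g(k)) x. As a worked illustration only (not the authors' code), a few lines of NumPy show the formula and its two special slices: k <= 0 recovers the identity mapping and k = 1 recovers the plain ReLU layer.

```python
# Minimal NumPy sketch of the gated plain layer discussed in the review.
# Names and shapes are illustrative assumptions.
import numpy as np

def gated_plain_layer(x, W, k):
    g = max(k, 0.0)                      # g(k) = ReLU(k), a single scalar
    f = np.maximum(x @ W, 0.0)           # plain branch: ReLU(<x, W>)
    return g * f + (1.0 - g) * x         # gated mix of branch output and input

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 50))
W = rng.normal(scale=0.1, size=(50, 50))
print(np.allclose(gated_plain_layer(x, W, 0.0), x))                     # True: identity slice
print(np.allclose(gated_plain_layer(x, W, 1.0), np.maximum(x @ W, 0)))  # True: plain-layer slice
```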
review
2017
ICLR.cc/2017/conference
Learning Identity Mappings with Residual Gates
["Pedro H. P. Savarese", "Leonardo O. Mazza", "Daniel R. Figueiredo"]
We propose a layer augmentation technique that adds shortcut connections with a linear gating mechanism, and can be applied to almost any network model. By using a scalar parameter to control each gate, we provide a way to learn identity mappings by optimizing only one parameter. We build upon the motivation behind Highway Neural Networks and Residual Networks, where a layer is reformulated in order to make learning identity mappings less problematic to the optimizer. The augmentation introduces only one extra parameter per layer, and provides easier optimization by making degeneration into identity mappings simpler. Experimental results show that augmenting layers provides better optimization, increased performance, and more layer independence. We evaluate our method on MNIST using fully-connected networks, showing empirical indications that our augmentation facilitates the optimization of deep models, and that it provides high tolerance to full layer removal: the model retains over 90% of its performance even after half of its layers have been randomly removed. In our experiments, augmented plain networks -- which can be interpreted as simplified Highway Neural Networks -- outperform ResNets, raising new questions on how shortcut connections should be designed. We also evaluate our model on CIFAR-10 and CIFAR-100 using augmented Wide ResNets, achieving 3.65% and 18.27% test error, respectively.
["Computer vision", "Deep learning", "Optimization"]
https://openreview.net/forum?id=Sywh5KYex
https://openreview.net/pdf?id=Sywh5KYex
https://openreview.net/forum?id=Sywh5KYex&noteId=SkgnrXM4x
Under review as a conference paper at ICLR 2017LEARNING IDENTITY MAPPINGS WITH RESIDUALGATESPedro H. P. SavareseCOPPE/PESCFederal University of Rio de JaneiroRio de Janeiro, Brazilsavarese@land.ufrj.brLeonardo O. MazzaPoliFederal University of Rio de JaneiroRio de Janeiro, Brazilleonardomazza@poli.ufrj.brDaniel R. FigueiredoCOPPE/PESCFederal University of Rio de JaneiroRio de Janeiro, Brazildaniel@land.ufrj.brABSTRACTWe propose a layer augmentation technique that adds shortcut connections witha linear gating mechanism, and can be applied to almost any network model. Byusing a scalar parameter to control each gate, we provide a way to learn identitymappings by optimizing only one parameter. We build upon the motivation behindHighway Neural Networks and Residual Networks, where a layer is reformulatedin order to make learning identity mappings less problematic to the optimizer. Theaugmentation introduces only one extra parameter per layer, and provides easieroptimization by making degeneration into identity mappings simpler. Experimen-tal results show that augmenting layers provides better optimization, increasedperformance, and more layer independence. We evaluate our method on MNISTusing fully-connected networks, showing empirical indications that our augmen-tation facilitates the optimization of deep models, and that it provides high toler-ance to full layer removal: the model retains over 90% of its performance evenafter half of its layers have been randomly removed. In our experiments, aug-mented plain networks – which can be interpreted as simplified Highway NeuralNetworks – perform similarly to ResNets, raising new questions on how shortcutconnections should be designed. We also evaluate our model on CIFAR-10 andCIFAR-100 using augmented Wide ResNets, achieving 3:65% and18:27% testerror, respectively.1 I NTRODUCTIONAs the number of layers of neural networks increase, effectively training its parameters becomesa fundamental problem (Larochelle et al. (2009)). Many obstacles challenge the training of neuralnetworks, including vanishing/exploding gradients (Bengio et al. (1994)), saturating activation func-tions (Xu et al. (2016)) and poor weight initialization (Glorot & Bengio (2010)). Techniques such asunsupervised pre-training (Bengio et al. (2007)), non-saturating activation functions (Nair & Hinton(2010)) and normalization (Ioffe & Szegedy (2015)) target these issues and enable the training ofdeeper networks. However, stacking more than a dozen layers still lead to a hard to train model.Recently, models such as Residual Networks (He et al. (2015b)) and Highway Neural Networks(Srivastava et al. (2015)) permitted the design of networks with hundreds of layers. A key idea ofthese models is to allow for information to flow more freely through the layers, by using shortcutconnections between the layer’s input and output. This layer design greatly facilitates training,due to shorter paths between the lower layers and the network’s error function. In particular, thesemodels can more easily learn identity mappings in the layers, thus allowing the network to be deeper1Under review as a conference paper at ICLR 2017and learn more abstract representations (Bengio et al. (2012)). Such networks have been highlysuccessful in many computer vision tasks.On the theoretical side, it is suggested that depth contributes exponentially more to the represen-tational capacity of networks than width (Eldan & Shamir (2015) Telgarsky (2016) Bianchini &Scarselli (2014) Mont ́ufar et al. (2014)). 
This agrees with the increasing depth of winning architec-tures on challenges such as ImageNet (He et al. (2015b) Szegedy et al. (2014)).Increasing the depth of networks significantly increases its representational capacity and conse-quently its performance, an observation supported by theory (Eldan & Shamir (2015) Telgarsky(2016) Bianchini & Scarselli (2014) Mont ́ufar et al. (2014)) and practice (He et al. (2015b) Szegedyet al. (2014)). Moreover, He et al. (2015b) showed that, by construction, one can increase a net-work’s depth while preserving its performance. These two observations suggest that it suffices tostack more layers to a network in order to increase its performance. However, this behavior is notobserved in practice even with recently proposed models, in part due to the challenge of trainingever deeper networks.In this work we aim to improve the training of deep networks by proposing a layer augmentationthat builds on the idea of using shortcut connections, such as in Residual Networks and HighwayNeural Networks. The key idea is to facilitate the learning of identity mappings by introducing ashortcut connection with a linear gating mechanism , as illustrated in Figure 1. Note that the shortcutconnection is controlled by a gate that is parameterized with a scalar, k. This is a key differencefrom Highway Networks, where a tensor is used to regulate the shortcut connection, along with theincoming data. The idea of using a scalar is simple: it is easier to learn k= 0than to learn Wg= 0for a weight tensor Wgcontrolling the gate. Indeed, this single scalar allows for stronger supervisionon lower layers, by making gradients flow more smoothly in the optimization.x),(Wxfu)(kg1x),(WxfuFigure 1: Gating mechanism applied to the shortcut connection of a layer. The key difference withHighway Networks is that only a scalar kis used to regulate the gates instead of a tensor.We apply our proposed layer re-design to plain and residual layers, with the latter illustrated inFigure 2. Note that when augmenting a residual layer it becomes simply u=g(k)fr(x; W) +x,where frdenotes the layer’s residual function. Thus, the shortcut connection allows the input toflow freely without any interference of g(k)through the layer. In the next sections we will callaugmented plain networks (illustrated in Figure 1) Gated Plain Network and augmented residualnetworks (illustrated in Figure 2) Gated Residual Network, or GResNet. Again, note that in bothcases learning identity mappings is much easier in comparison to the original models.Note that layers that degenerated into identity mappings have no impact in the signal propagatingthrough the network, and thus can be removed without affecting performance. The removal of suchlayers can be seen as a transposed application of sparse encoding (Glorot et al. (2011)): transposingthe sparsity from neurons to layers provides a form to prune them entirely from the network. In-2Under review as a conference paper at ICLR 2017x),(Wxfru)(kgx),(Wxfru)(kg1),(WxfFigure 2: Proposed network design applied to Residual Networks. Note that the joint network designresults in a shortcut path where the input remains unchanged. In this case, g(k)can be interpretedas an amplifier or suppressor for the residual fr(x; W).deed, we show that performance decays slowly in GResNets when layers are removed, even whencompared to ResNets.We evaluate the performance of the proposed design in two experiments. 
First, we evaluate fully-connected Gated PlainNets and Gated ResNets on MNIST and compare them with their non-augmented counterparts, showing superior performance and robustness to layer removal. Second,we apply our layer re-design to Wide ResNets (Zagoruyko & Komodakis (2016)) and test its perfor-mance on CIFAR, obtaining results that are superior to all previously published results (to the bestof our knowledge). These findings indicate that learning identity mappings is a fundamental aspectof learning in deep networks, and designing models where this is easier seems highly effective.2 A UGMENTATION WITH RESIDUAL GATES2.1 T HEORETICAL INTUITIONRecall that a network’s depth can always be increased without affecting its performance – it sufficesto add layers that perform identity mappings. Consider a plain fully-connected ReLU network withlayers defined as u=ReLU (hx; Wi). When adding a new layer, if we initialize Wto the identitymatrix I, we have:u=ReLU (hx; Ii) =ReLU (x) =xThe last step holds since xis an output of a previous ReLU layer, and ReLU (ReLU (x)) =ReLU (x). Thus, adding more layers should only improve performance. However, how can a net-work with more layers learn to yield performance superior than a network with less layers? A keyobservation is that if learning identity mapping is easy, then the network with more layers is morelikely to yield superior performance, as it can more easily recover the performance of a smallernetwork through identity mappings.Figure 3: A network can have layers added to it without losing performance. Initially, a network hasmReLU layers with parameters fW1; : : : ; W mg. A new, (m+1)-th layer is added with Wm+1=I.This new layer will perform an identity mapping, therefore the two models are equivalent.The layer design of Highway Neural Networks and Residual Networks allows for deeper models tobe trained due to their shortcut connections. Note that in ResNets the identity mapping is learned3Under review as a conference paper at ICLR 2017when W= 0instead of W=I. Similarly, a Highway layer can degenerate into an identity mappingwhen the gating term T(x; W T)equals zero for all data points. Since learning identity mappingsin Highway Neural Networks strongly depends on the choice of the trasnform function T(and isnon-trivial when Tis the sigmoid function, since T1(0)is not defined) we will focus our analysison ResNets due to their simplicity. Considering a residual layer u=ReLU (hx; Wi) +x, we have:u=ReLU (hx;0i) +x=ReLU (0) + x=xIntuitively, residual layers can degenerate into identity mappings more effectively since learning anall-zero matrix is easier than learning the identity matrix. To support this argument, consider weightparameters randomly initialized with zero mean. Hence, the point W= 0 is located exactly in thecenter of the probability mass distribution used to initialize the weights.Recent work (Zhang et al. (2016)) suggests that the L2 norm of a critical point is an important factorregarding how easily the optimizer will reach it. More specifically, residual layers can be interpretedas a translation of the parameter set W=ItoW= 0, which is more accessible in the optimizationprocess due to its inferior L2 norm.However, assuming that residual layers can trivially learn the parameter set W= 0 implies ignor-ing the randomness when initializing the weights. We demonstrate this by calculating the expectedcomponent-wise distance between Woand the origin. Here, Wodenotes the weight tensor after ini-tialization and prior to any optimization. 
Note that the distance between W_o and the origin captures the effort for a network to learn identity mappings:

E[(W_o - 0)^2] = E[W_o^2] = Var[W_o]

Note that the distance is given by the distribution's variance, and there is no reason to assume it to be negligible. Additionally, the fact that Residual Networks still suffer from optimization issues caused by depth (Huang et al. (2016a)) further supports this claim.

Some initialization schemes propose a variance in the order of O(1/n) (Glorot & Bengio (2010), He et al. (2015a)), however this represents the distance for each individual parameter in W. For tensors with O(n^2) parameters, the total distance – either absolute or Euclidean – between W_o and the origin will be in the order of O(n).

2.2 RESIDUAL GATES

As previously mentioned, the key contribution of this work is the proposal of a layer augmentation technique where learning a single scalar parameter suffices for the layer to degenerate into an identity mapping, thus making optimization easier at increased depths. As in Highway Networks, we propose the addition of gated shortcut connections. Our gates, however, are parameterized by a single scalar value, making them easier to analyze and learn. For layers augmented with our technique, the effort required to learn identity mappings does not depend on any parameter, such as the layer width, in sharp contrast to prior models.

Our design is as follows: a layer u = f(x; W) becomes u = g(k) f(x; W) + (1 - g(k)) x, where k is a scalar parameter. This design is illustrated in Figure 1. Note that such a layer can quickly degenerate by setting g(k) to 0. Using the ReLU activation function as g, it suffices that k <= 0 for g(k) = 0.

By adding an extra parameter, the dimensionality of the cost surface also grows by one. This new dimension, however, can be easily understood due to the specific nature of the layer reformulation. The original surface is maintained on the k = 1 slice, since the gated model becomes equivalent to the original one. On the k = 0 slice we have an identity mapping, and the associated cost for all points in such a slice is the same cost associated with the point {k = 1, W = I}: this follows since both parameter configurations correspond to identity mappings, therefore being equivalent. Lastly, due to the linear nature of g(k) and consequently of the gates, all other slices k != 0, k != 1 will be a linear combination of the slices k = 0 and k = 1.

In addition to augmenting plain layers, we also apply our technique to residual layers. Although it might sound counterintuitive to add residual gates to a residual layer, we can see in Figure 2 that our augmentation provides ResNets with a means to regulate the residuals, therefore a linear gating mechanism might not only allow deeper models, but could also improve performance. Having the original design of a residual layer as:

u = f(x; W) = f_r(x; W) + x

where f_r(x; W) is the layer's residual function – in our case, BN-ReLU-Conv-BN-ReLU-Conv. Our approach changes this layer by adding a linear gate, yielding:

u = g(k) f(x; W) + (1 - g(k)) x
  = g(k) (f_r(x; W) + x) + (1 - g(k)) x
  = g(k) f_r(x; W) + x

The resulting layer maintains the shortcut connection unaltered, which according to He et al. (2016) is a desired property when designing residual blocks. As (1 - g(k)) vanishes from the formulation, g(k) stops acting as a dual gating mechanism and can be interpreted as a flow regulator. Note that this model introduces a single scalar parameter per layer block. This new dimension can be interpreted as discussed above, except that the slice k = 0 is equivalent to {k = 1, W = 0}, since an identity mapping is learned when W = 0 in ResNets.

3 EXPERIMENTS

All models were implemented on Keras (Chollet (2015)) or on Torch (Collobert et al. (2011)), and were executed on a GeForce GTX 1070. Larger models or more complex datasets, such as ImageNet (Russakovsky et al. (2015)), were not explored due to hardware limitations.

3.1 MNIST

The MNIST dataset (Lecun et al. (1998)) is composed of 60,000 greyscale images with 28x28 pixels. Images represent handwritten digits, resulting in a total of 10 classes. We trained four types of fully-connected models: classical plain networks, ResNets, Gated PlainNets and Gated ResNets.

The networks consist of a linear layer with 50 neurons, followed by d layers with 50 neurons each, and lastly a softmax layer for classification. Only the d middle layers differ between the four architectures – the first linear layer and the softmax layer are the same in all experiments.

For plain networks, each layer performs a dot product, followed by Batch Normalization and a ReLU activation function. Initial tests with pre-activations (He et al. (2016)) resulted in poor performance on the validation set, therefore we opted for the traditional Dot-BN-ReLU layer when designing Residual Networks. Each residual block consists of two layers, as is conventional.

All networks were trained using Adam (Kingma & Ba (2014)) with Nesterov momentum (Dozat) for a total of 100 epochs using mini-batches of size 128. No learning rate decay was used: we kept the learning rate and momentum fixed to 0.002 and 0.9 during the whole training. For preprocessing, we divided each pixel value by 255, normalizing their values to [0, 1].

The training curves for plain networks, Gated PlainNets, ResNets and Gated ResNets with varying depth are shown in Figure 4. The distance between the curves increases with the depth, showing that the augmentation helps the training of deeper models.

Figure 4: Train loss for plain and residual networks, along with their augmented counterparts, with d = {2, 10, 20, 50, 100}. As the models get deeper, the error reduction due to the augmentation increases.

Table 1 shows the test error for each depth and architecture. Augmented models perform better in all settings when compared to the original ones, and the performance boost is more noticeable with increased depths. Interestingly, Gated PlainNets performed better than ResNets, suggesting that the reason for Highway Neural Networks to underperform ResNets might be an overly complex gating mechanism.

Table 1: Test error (%) on the MNIST dataset for fully-connected networks. Augmented models outperform their original counterparts in all experiments. Non-augmented plain networks perform worse and fail to converge for d = 50 and d = 100.

  Depth = d + 2   Plain   ResNet   Gated PlainNet   Gated ResNet
  d = 2           2.29    2.20     2.04             2.17
  d = 10          2.22    1.64     1.78             1.60
  d = 20          2.21    1.61     1.59             1.57
  d = 50          60.37   1.62     1.36             1.48
  d = 100         90.20   1.50     1.29             1.26

Table 2: Mean k for increasingly deep Gated PlainNets and Gated ResNets.

  Depth = d + 2   Gated PlainNet   Gated ResNet
  d = 2           10.57            5.58
  d = 10          1.19             2.54
  d = 20          0.64             1.73
  d = 50          0.46             1.04
  d = 100         0.41             0.67

As observed in Table 2, the mean values of k decrease as the model gets deeper, showing that shortcut connections have less impact on shallow networks. This agrees with empirical results that ResNets perform better than classical plain networks as the depth increases.
Note that the significantdifference between mean values for kin Gated PlainNets and Gated ResNets has an intuitive expla-nation: in order to suppress the residual signal against the shortcut connection, Gated PlainNetsrequire that k <0:5(otherwise the residual signal will be enhanced). Conversely, Gated ResNetssuppress the residual signal when k <1:0, and enhance it otherwise.We also analyzed how layer removal affects ResNets and Gated ResNets. We compared how thedeepest networks ( d= 100 ) behave as residual blocks composed of 2 layers are completely removedfrom the models. The final values for each kparameter, according to its corresponding residualblock, is shown in Figure 5. We can observe that layers close to the middle of the network have asmaller kthan these in the beginning or the end. Therefore, the middle layers have less importanceby due to being closer to identity mappings.6Under review as a conference paper at ICLR 2017Figure 5: Left: Values for kaccording to ascending order of residual blocks. The first block,consisted of the first two layers of the network, has index 1, while the last block – right before thesoftmax layer – has index 50. Right: Test accuracy (%) according to the number of removed layers.Gated Residual Networks are more robust to layer removal, and maintain decent results even afterhalf of the layers have been removed.Results are shown in Figure 5. For Gated Residual Networks, we prune pairs of layers followingtwo strategies. One consists of pruning layers in a greedy fashion, where blocks with the smallest kare removed first. In the other we remove blocks randomly. We present results using both strategiesfor Gated ResNets, and only random pruning for ResNets since they lack the kparameter.The greedy strategy is slightly better for Gated Residual Networks, showing that the kparameteris indeed a good indicator of a layer’s importance for the model, but that layers tend to assume thesame level of significance. In a fair comparison, where both models are pruned randomly, GatedResNets retain a satisfactory performance even after half of its layers have been removed, whileResNets suffer performance decrease after just a few layers.Therefore augmented models are not only more robust to layer removal, but can have a fair shareof their layers pruned and still perform well. Faster predictions can be generated by using a prunedversion of an original model.3.2 CIFARThe CIFAR datasets (Krizhevsky (2009)) consists of 60;000color images with 3232pixels each.CIFAR-10 has a total of 10 classes, including pictures of cats, birds and airplanes. The CIFAR-100dataset is composed of the same number of images, however with a total of 100 classes.Residual Networks have surpassed state-of-the-art results on CIFAR. We test Gated ResNets, WideGated ResNets (Zagoruyko & Komodakis (2016)) and compare them with their original, non-augmented models.For pre-activation ResNets, as described in He et al. (2016), we follow the original implementationdetails. We set an initial learning rate of 0.1, and decrease it by a factor of 10 after 50% and75% epochs. SGD with Nesterov momentum of 0.9 are used for optimization, and the only pre-processing consists of mean subtraction. Weight decay of 0.0001 is used for regularization, andBatch Normalization’s momentum is set to 0.9.We follow the implementation from Zagoruyko & Komodakis (2016) for Wide ResNets. The learn-ing rate is initialized as 0.1, and decreases by a factor of 5 after 30%, 60% and 80% epochs. 
Imagesare mean/std normalized, and a weight decay of 0.0005 is used for regularization. We also apply 0.3dropout (Srivastava et al. (2014)) between convolutions, whenever specified. All other details arethe same as for ResNets.7Under review as a conference paper at ICLR 2017For both architectures we use moderate data augmentation: images are padded with 4 pixels, and wetake random crops of size 3232during training. Additionally, each image is horizontally flippedwith50% probability. We use batch size 128 for all experiments.For all gated networks, we initialize kwith a constant value of 1. One crucial question is whetherweight decay should be applied to the kparameters. We call this ” kdecay”, and also compare GatedResNets and Wide Gated ResNets when it is applied with the same magnitude of the weight decay:0.0001 for Gated ResNet and 0.0005 for Wide Gated ResNet.Model Original Gated Gated ( kdecay)Resnet 5 7.16 6.67 7.04Wide ResNet (4,10) + Dropout 3.89 3.65 3.74Table 3: Test error (%) on the CIFAR-10 dataset, for ResNets, Wide ResNets and their augmentedcounterparts. kdecay is when weight decay is also applied to the kparameters in an augmentednetwork. Results for the original models are as reported in He et al. (2015b) and Zagoruyko &Komodakis (2016).Table 3 shows the test error for two architectures: a ResNet with n= 5, and a Wide ResNet withn= 4,n= 10 . Augmenting each model adds 15 and 12 parameters, respectively. We observe thatkdecay hurts performance in both cases, indicating that they should either remain unregularized orsuffer a more subtle regularization compared to the weight parameters. Due to its direct connectionto layer degeneration, regularizing kresults in enforcing identity mappings, which might harm themodel.Due to the indications that a regularization on the kparameter results in a negative impact on themodel’s performance, we proceed to test other models – having different depths and widening fac-tors – with the goal of evaluating the effectiveness of our proposed augmentation. Tables 4 and 5show that augmented Wide ResNets outperform the original models without changing any hyperpa-rameter, both on CIFAR-10 and CIFAR-100.Model Original GatedWide ResNet (2,4) 5.02 4.66Wide ResNet (4,10) 4.00 3.82Wide ResNet (4,10) + Dropout 3.89 3.65Wide ResNet (8,1) 6.43 6.10Wide ResNet (6,10) + Dropout 3.80 3.63Table 4: Test error (%) on the CIFAR-10 dataset, for Wide ResNets and their augmented counter-parts. Results for non-gated Wide ResNets are from Zagoruyko & Komodakis (2016).Model Original GatedWide ResNet (2,4) 24.03 23.29Wide ResNet (4,10) 19.25 18.89Wide ResNet (4,10) + Dropout 18.85 18.27Wide ResNet (8,1) 29.89 28.20Table 5: Test error (%) on the CIFAR-100 dataset, for Wide ResNets and their augmented counter-parts. Results for non-gated Wide ResNets are from Zagoruyko & Komodakis (2016).As in the previous experiment, in Figure 6 we present the final kvalues for each block, in this case ofthe Wide ResNet (4,10) on CIFAR-10. We can observe that the kvalues follow an intriguing pattern:the lowest values are for the blocks of index 1,5and9, which are exactly the ones that increase thefeature map dimension. 
This indicates that, in such residual blocks, the convolution performedin the shortcut connection to increase dimension is more important than the residual block itself.Additionally, the peak value for the last residual block suggests that its shortcut connection is oflittle importance, and could as well be fully removed without greatly impacting the model.Figure 7 shows the loss curves for Gated Wide ResNet (4,10) + Dropout, both on CIFAR-10 andCIFAR-100. The optimization behaves similarly to the original model, suggesting that the gates do8Under review as a conference paper at ICLR 2017Figure 6: Values for kaccording to ascending order of residual blocks. The first block, consisted ofthe first two layers of the network, has index 1, while the last block – right before the softmax layer– has index 12.Figure 7: Training and test curves for the Wide ResNet (4,10) with 0.3 dropout, showing error (%)on training and test sets. Dashed lines represent training error, whereas solid lines represent testerror.not have any side effects on the network. The performance gains presented on Table 4 point that,however predictable and extremely simple, our augmentation technique is powerful enough to aidon the optimization of state-of-the-art models.Results of different models on the CIFAR datasets are shown in Table 6. The training and test errorsare presented in Figure 7. To the authors’ knowledge, those are the best results on CIFAR-10 andCIFAR-100 with moderate data augmentation – only random flips and translations.3.3 I NTERPRETATIONGreff et al. (2016) showed how Residual and Highway layers can be interpreted as performingiterative refinements on learned representations. In this view, there is a connection on a layer’slearned parameters and the level of refinement applied on its input: for Highway Neural Networks,T(x)having components close to 1results in a layer that generates completely new representations.As seen before, components close to 0result in an identity mapping, meaning that the representationsare not refined at all.9Under review as a conference paper at ICLR 2017Method Params C10+ C100+Network in Network (Lin et al. (2013)) - 8.81 -FitNet (Romero et al. (2014)) - 8.39 35.04Highway Neural Network (Srivastava et al. (2015)) 2.3M 7.76 32.39All-CNN (Springenberg et al. (2014)) - 7.25 33.71ResNet-110 (He et al. (2015b)) 1.7M 6.61 -ResNet in ResNet (Targ et al. (2016)) 1.7M 5.01 22.90Stochastic Depth (Huang et al. (2016a)) 10.2M 4.91 -ResNet-1001 (He et al. (2016)) 10.2M 4.62 22.71FractalNet (Larsson et al. (2016)) 38.6M 4.60 23.73Wide ResNet (4,10) (Zagoruyko & Komodakis (2016)) 36.5M 3.89 18.85DenseNet (Huang et al. (2016b)) 27.2M 3.74 19.25Wide GatedResNet (4,10) + Dropout 36.5M 3.65 18.27Table 6: Test error (%) on the CIFAR-10 and CIFAR-100 dataset. All results are with standard dataaugmentation (crops and flips).However, the dependency of T(x)on the incoming data makes it difficult to analyze the level of re-finement performed by a layer given its parameters. This is more clearly observed once we considerhow each component of T(x)is a function not only on the parameter set WT, but also on x.In particular, given the mapping performed by a layer, we can estimate how much more abstractits representations are compared to the inputs. For our technique, this estimation can be done byobserving the kparameter of the corresponding layer: in Gated PlainNets, k= 0 corresponds toan identity mapping, and therefore there is no modification on the learned representations. 
Fork= 1, the shortcut connection is ignored and therefore a jump in the representation’s complexity isobserved.For Gated ResNets, the shortcut connection is never completely ignored in the generation of output.However, we can see that as kgrows to infinity the shortcut connection’s contribution goes to zero,and the learned representation becomes more abstract compared to the layer’s inputs.Table 6 shows how the layers that change the data dimensionality learn more abstract representationscompared to dimensionality-preserving layers, which agrees with Greff et al. (2016). The last layer’skvalue, which is the biggest among the whole model, indicates a severe jump in the abstraction ofits representation, and is intuitive once we see the model as being composed of two main stages: aconvolutional one and a fully-connected one, specific for classification.Finally, Table 2 shows that the abstraction jumps decrease as the model grows deeper and the per-formance increases. This agrees with the idea that depth allows for more refined representations tobe learned. We believe that an extensive analysis on the rate that these measures – depth, abstractionjumps and performance – interact with each other could bring further understanding on the practicalbenefits of depth in networks.4 C ONCLUSIONWe have proposed a novel layer augmentation technique that facilitates the optimization of deepnetworks by making identity mappings easy to learn. Unlike previous models, layers augmented byour technique require optimizing only one parameter to degenerate into identity, and by designingour method such that randomly initialized parameter sets are always close to identity mappings, ourdesign offers less optimization issues caused by depth.Our experiments showed that augmenting plain and residual layers improves performance and fa-cilitates learning in settings with increased depth. In the MNIST dataset, augmented plain networksoutperformed ResNets, suggesting that models with gated shortcut connections – such as HighwayNeural Networks – could be further improved by redesigning the gates.We have shown that applying our technique to ResNets yield a model that can regulate the resid-uals. This model performed better in all our experiments with negligible extra training time and10Under review as a conference paper at ICLR 2017parameters. Lastly, we have shown how it can be used for layer pruning, effectively removing largenumbers of parameters from a network without necessarily harming its performance.REFERENCESY . Bengio, P. Simard, and P Frasconi. Learning long-term dependencies with gradient descent isdifficult. IEEE Transactions on Neural Networks , 1994.Y . Bengio, P. Lamblin, D Popovici, and H Larochelle. Greedy layer-wise training of deep networks.NIPS , 2007.Y . Bengio, A. Courville, and P. Vincent. Representation Learning: A Review and New Perspectives.ArXiv e-prints , June 2012.Monica Bianchini and Franco Scarselli. On the complexity of neural network classifiers: A com-parison between shallow and deep architectures. IEEE Transactions on Neural Networks andLearning Systems , 25(8):1553 – 1565, 2014. doi: 10.1109/TNNLS.2013.2293637.Franois Chollet. keras. https://github.com/fchollet/keras , 2015.Ronan Collobert, Koray Kavukcuoglu, and Cl ́ement Farabet. Torch7: A matlab-like environmentfor machine learning. In BigLearn, NIPS Workshop , 2011.Timothy Dozat. Incorporating nesterov momentum into adam.R. Eldan and O. Shamir. The Power of Depth for Feedforward Neural Networks. ArXiv e-prints ,December 2015.X. 
Glorot and Y . Bengio. Understanding the difficulty of training deep feedforward neural networks.AISTATS, , 2010.Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neu-ral networks. In In Proceedings of the International Conference on Artificial Intelligence andStatistics (AISTATS10). Society for Artificial Intelligence and Statistics , 2010.Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. InGeoffrey J. Gordon and David B. Dunson (eds.), Proceedings of the Fourteenth InternationalConference on Artificial Intelligence and Statistics (AISTATS-11) , volume 15, pp. 315–323. Jour-nal of Machine Learning Research - Workshop and Conference Proceedings, 2011. URL http://www.jmlr.org/proceedings/papers/v15/glorot11a/glorot11a.pdf .K. Greff, R. K. Srivastava, and J. Schmidhuber. Highway and Residual Networks learn UnrolledIterative Estimation. ArXiv e-prints , December 2016.K. He, X. Zhang, S. Ren, and J. Sun. Delving Deep into Rectifiers: Surpassing Human-LevelPerformance on ImageNet Classification. ArXiv e-prints , February 2015a.K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. ArXiv e-prints ,December 2015b.K. He, X. Zhang, S. Ren, and J. Sun. Identity Mappings in Deep Residual Networks. ArXiv e-prints ,March 2016.G. Huang, Y . Sun, Z. Liu, D. Sedra, and K. Weinberger. Deep Networks with Stochastic Depth.ArXiv e-prints , March 2016a.J. Huang, Z. Liu, and Q. Weinberger. Densely connected convolutional networks. ArXiv e-prints ,2016b.S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducinginternal covariate shift. ICML , 2015.D. Kingma and J. Ba. Adam: A Method for Stochastic Optimization. ArXiv e-prints , December2014.11Under review as a conference paper at ICLR 2017Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.Hugo Larochelle, Yoshua Bengio, J ́erˆome Louradour, and Pascal Lamblin. Exploring strategies fortraining deep neural networks. J. Mach. Learn. Res. , 10:1–40, June 2009. ISSN 1532-4435. URLhttp://dl.acm.org/citation.cfm?id=1577069.1577070 .G. Larsson, M. Maire, and G. Shakhnarovich. FractalNet: Ultra-Deep Neural Networks withoutResiduals. ArXiv e-prints , May 2016.Yann Lecun, Lon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied todocument recognition. In Proceedings of the IEEE , pp. 2278–2324, 1998.M. Lin, Q. Chen, and S. Yan. Network In Network. ArXiv e-prints , December 2013.G. Mont ́ufar, R. Pascanu, K. Cho, and Y . Bengio. On the Number of Linear Regions of Deep NeuralNetworks. ArXiv e-prints , February 2014.Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted boltzmann ma-chines. In Johannes Frnkranz and Thorsten Joachims (eds.), Proceedings of the 27th Inter-national Conference on Machine Learning (ICML-10) , pp. 807–814. Omnipress, 2010. URLhttp://www.icml2010.org/papers/432.pdf .A. Romero, N. Ballas, S. Ebrahimi Kahou, A. Chassang, C. Gatta, and Y . Bengio. FitNets: Hintsfor Thin Deep Nets. ArXiv e-prints , December 2014.Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, ZhihengHuang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei.ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision(IJCV) , 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y.J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. 
Striving for Simplicity: The AllConvolutional Net. ArXiv e-prints , December 2014.Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdi-nov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Ma-chine Learning Research , 15:1929–1958, 2014. URL http://jmlr.org/papers/v15/srivastava14a.html .Rupesh Kumar Srivastava, Klaus Greff, and J ̈urgen Schmidhuber. Training very deep networks.CoRR , abs/1507.06228, 2015. URL http://arxiv.org/abs/1507.06228 .Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott E. Reed, Dragomir Anguelov,Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions.CoRR , abs/1409.4842, 2014. URL http://arxiv.org/abs/1409.4842 .S. Targ, D. Almeida, and K. Lyman. Resnet in Resnet: Generalizing Residual Architectures. ArXive-prints , March 2016.M. Telgarsky. Benefits of depth in neural networks. ArXiv e-prints , February 2016.B. Xu, R. Huang, and M. Li. Revise Saturated Activation Functions. ArXiv e-prints , February 2016.Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. CoRR , abs/1605.07146, 2016.URLhttp://arxiv.org/abs/1605.07146 .C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requiresrethinking generalization. ArXiv e-prints , November 2016.12
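[Editorial note] The layer-removal experiments described in the OCR text above rank residual blocks by their learned gate values and prune the least important ones first. The sketch below is a hedged illustration of that greedy strategy; the `model.blocks` attribute and a gated block exposing a scalar `k` are assumptions, not the authors' implementation.

```python
# Hedged sketch of greedy pruning by smallest gate value g(k) = ReLU(k).
# Assumes `model.blocks` is an nn.ModuleList of gated residual blocks with a
# scalar parameter `k` each (see the earlier GatedResidualBlock sketch).
import torch

def prune_smallest_gates(model, num_blocks_to_remove):
    """Drop the gated residual blocks with the smallest g(k); return their indices."""
    gate_values = [float(torch.relu(block.k)) for block in model.blocks]
    order = sorted(range(len(gate_values)), key=lambda i: gate_values[i])
    to_remove = set(order[:num_blocks_to_remove])
    # A removed block is exactly an identity mapping only when g(k) = 0, so
    # pruning blocks with small but non-zero gates trades a little accuracy
    # for fewer layers and parameters.
    model.blocks = torch.nn.ModuleList(
        block for i, block in enumerate(model.blocks) if i not in to_remove
    )
    return sorted(to_remove)
```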
HJzvjX84e
Sywh5KYex
ICLR.cc/2017/conference/-/paper113/official/review
{"title": "claims not convincing", "rating": "5: Marginally below acceptance threshold", "review": "\nThis paper proposes a layer design, called Gated Residual Networks, that adds gating to shortcut connections with a scalar parameter to regulate each gate. The authors claim that this approach improves the training of Residual Networks.\n\nIt seems the authors obtain performance on CIFAR-10 competitive with state-of-the-art models using only Wide ResNets. The Wide Gated ResNet requires many more parameters than DenseNet (and other ResNet variants) for a small improvement over DenseNet. More importantly, the authors state that they obtained the best results on CIFAR-10 and CIFAR-100, but the updated version of DenseNet (Huang et al. (2016b)) reports results for a version called DenseNet-BC which outperforms all of the results the authors reported (3.46 on CIFAR-10 and 17.18 on CIFAR-100 with 25.6M parameters; DenseNet-BC still outperforms with 15.3M parameters, which is much less than 36.5M). The ResNet-variant papers with state-of-the-art results also report results on ImageNet, so the empirical evaluation also needs ImageNet results to demonstrate that the claimed improvement holds.\n\nThe proposed trick adapts Highway Neural Networks and Residual Networks with an intuitive motivation. It is not sufficiently novel, and the empirical results do not demonstrate sufficient effectiveness of this incremental approach.\n\n", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Learning Identity Mappings with Residual Gates
["Pedro H. P. Savarese", "Leonardo O. Mazza", "Daniel R. Figueiredo"]
We propose a layer augmentation technique that adds shortcut connections with a linear gating mechanism, and can be applied to almost any network model. By using a scalar parameter to control each gate, we provide a way to learn identity mappings by optimizing only one parameter. We build upon the motivation behind Highway Neural Networks and Residual Networks, where a layer is reformulated in order to make learning identity mappings less problematic to the optimizer. The augmentation introduces only one extra parameter per layer, and provides easier optimization by making degeneration into identity mappings simpler. Experimental results show that augmenting layers provides better optimization, increased performance, and more layer independence. We evaluate our method on MNIST using fully-connected networks, showing empirical indications that our augmentation facilitates the optimization of deep models, and that it provides high tolerance to full layer removal: the model retains over 90% of its performance even after half of its layers have been randomly removed. In our experiments, augmented plain networks -- which can be interpreted as simplified Highway Neural Networks -- outperform ResNets, raising new questions on how shortcut connections should be designed. We also evaluate our model on CIFAR-10 and CIFAR-100 using augmented Wide ResNets, achieving 3.65% and 18.27% test error, respectively.
["Computer vision", "Deep learning", "Optimization"]
https://openreview.net/forum?id=Sywh5KYex
https://openreview.net/pdf?id=Sywh5KYex
https://openreview.net/forum?id=Sywh5KYex&noteId=HJzvjX84e
Under review as a conference paper at ICLR 2017LEARNING IDENTITY MAPPINGS WITH RESIDUALGATESPedro H. P. SavareseCOPPE/PESCFederal University of Rio de JaneiroRio de Janeiro, Brazilsavarese@land.ufrj.brLeonardo O. MazzaPoliFederal University of Rio de JaneiroRio de Janeiro, Brazilleonardomazza@poli.ufrj.brDaniel R. FigueiredoCOPPE/PESCFederal University of Rio de JaneiroRio de Janeiro, Brazildaniel@land.ufrj.brABSTRACTWe propose a layer augmentation technique that adds shortcut connections witha linear gating mechanism, and can be applied to almost any network model. Byusing a scalar parameter to control each gate, we provide a way to learn identitymappings by optimizing only one parameter. We build upon the motivation behindHighway Neural Networks and Residual Networks, where a layer is reformulatedin order to make learning identity mappings less problematic to the optimizer. Theaugmentation introduces only one extra parameter per layer, and provides easieroptimization by making degeneration into identity mappings simpler. Experimen-tal results show that augmenting layers provides better optimization, increasedperformance, and more layer independence. We evaluate our method on MNISTusing fully-connected networks, showing empirical indications that our augmen-tation facilitates the optimization of deep models, and that it provides high toler-ance to full layer removal: the model retains over 90% of its performance evenafter half of its layers have been randomly removed. In our experiments, aug-mented plain networks – which can be interpreted as simplified Highway NeuralNetworks – perform similarly to ResNets, raising new questions on how shortcutconnections should be designed. We also evaluate our model on CIFAR-10 andCIFAR-100 using augmented Wide ResNets, achieving 3:65% and18:27% testerror, respectively.1 I NTRODUCTIONAs the number of layers of neural networks increase, effectively training its parameters becomesa fundamental problem (Larochelle et al. (2009)). Many obstacles challenge the training of neuralnetworks, including vanishing/exploding gradients (Bengio et al. (1994)), saturating activation func-tions (Xu et al. (2016)) and poor weight initialization (Glorot & Bengio (2010)). Techniques such asunsupervised pre-training (Bengio et al. (2007)), non-saturating activation functions (Nair & Hinton(2010)) and normalization (Ioffe & Szegedy (2015)) target these issues and enable the training ofdeeper networks. However, stacking more than a dozen layers still lead to a hard to train model.Recently, models such as Residual Networks (He et al. (2015b)) and Highway Neural Networks(Srivastava et al. (2015)) permitted the design of networks with hundreds of layers. A key idea ofthese models is to allow for information to flow more freely through the layers, by using shortcutconnections between the layer’s input and output. This layer design greatly facilitates training,due to shorter paths between the lower layers and the network’s error function. In particular, thesemodels can more easily learn identity mappings in the layers, thus allowing the network to be deeper1Under review as a conference paper at ICLR 2017and learn more abstract representations (Bengio et al. (2012)). Such networks have been highlysuccessful in many computer vision tasks.On the theoretical side, it is suggested that depth contributes exponentially more to the represen-tational capacity of networks than width (Eldan & Shamir (2015) Telgarsky (2016) Bianchini &Scarselli (2014) Mont ́ufar et al. (2014)). 
This agrees with the increasing depth of winning architec-tures on challenges such as ImageNet (He et al. (2015b) Szegedy et al. (2014)).Increasing the depth of networks significantly increases its representational capacity and conse-quently its performance, an observation supported by theory (Eldan & Shamir (2015) Telgarsky(2016) Bianchini & Scarselli (2014) Mont ́ufar et al. (2014)) and practice (He et al. (2015b) Szegedyet al. (2014)). Moreover, He et al. (2015b) showed that, by construction, one can increase a net-work’s depth while preserving its performance. These two observations suggest that it suffices tostack more layers to a network in order to increase its performance. However, this behavior is notobserved in practice even with recently proposed models, in part due to the challenge of trainingever deeper networks.In this work we aim to improve the training of deep networks by proposing a layer augmentationthat builds on the idea of using shortcut connections, such as in Residual Networks and HighwayNeural Networks. The key idea is to facilitate the learning of identity mappings by introducing ashortcut connection with a linear gating mechanism , as illustrated in Figure 1. Note that the shortcutconnection is controlled by a gate that is parameterized with a scalar, k. This is a key differencefrom Highway Networks, where a tensor is used to regulate the shortcut connection, along with theincoming data. The idea of using a scalar is simple: it is easier to learn k= 0than to learn Wg= 0for a weight tensor Wgcontrolling the gate. Indeed, this single scalar allows for stronger supervisionon lower layers, by making gradients flow more smoothly in the optimization.x),(Wxfu)(kg1x),(WxfuFigure 1: Gating mechanism applied to the shortcut connection of a layer. The key difference withHighway Networks is that only a scalar kis used to regulate the gates instead of a tensor.We apply our proposed layer re-design to plain and residual layers, with the latter illustrated inFigure 2. Note that when augmenting a residual layer it becomes simply u=g(k)fr(x; W) +x,where frdenotes the layer’s residual function. Thus, the shortcut connection allows the input toflow freely without any interference of g(k)through the layer. In the next sections we will callaugmented plain networks (illustrated in Figure 1) Gated Plain Network and augmented residualnetworks (illustrated in Figure 2) Gated Residual Network, or GResNet. Again, note that in bothcases learning identity mappings is much easier in comparison to the original models.Note that layers that degenerated into identity mappings have no impact in the signal propagatingthrough the network, and thus can be removed without affecting performance. The removal of suchlayers can be seen as a transposed application of sparse encoding (Glorot et al. (2011)): transposingthe sparsity from neurons to layers provides a form to prune them entirely from the network. In-2Under review as a conference paper at ICLR 2017x),(Wxfru)(kgx),(Wxfru)(kg1),(WxfFigure 2: Proposed network design applied to Residual Networks. Note that the joint network designresults in a shortcut path where the input remains unchanged. In this case, g(k)can be interpretedas an amplifier or suppressor for the residual fr(x; W).deed, we show that performance decays slowly in GResNets when layers are removed, even whencompared to ResNets.We evaluate the performance of the proposed design in two experiments. 
First, we evaluate fully-connected Gated PlainNets and Gated ResNets on MNIST and compare them with their non-augmented counterparts, showing superior performance and robustness to layer removal. Second,we apply our layer re-design to Wide ResNets (Zagoruyko & Komodakis (2016)) and test its perfor-mance on CIFAR, obtaining results that are superior to all previously published results (to the bestof our knowledge). These findings indicate that learning identity mappings is a fundamental aspectof learning in deep networks, and designing models where this is easier seems highly effective.2 A UGMENTATION WITH RESIDUAL GATES2.1 T HEORETICAL INTUITIONRecall that a network’s depth can always be increased without affecting its performance – it sufficesto add layers that perform identity mappings. Consider a plain fully-connected ReLU network withlayers defined as u=ReLU (hx; Wi). When adding a new layer, if we initialize Wto the identitymatrix I, we have:u=ReLU (hx; Ii) =ReLU (x) =xThe last step holds since xis an output of a previous ReLU layer, and ReLU (ReLU (x)) =ReLU (x). Thus, adding more layers should only improve performance. However, how can a net-work with more layers learn to yield performance superior than a network with less layers? A keyobservation is that if learning identity mapping is easy, then the network with more layers is morelikely to yield superior performance, as it can more easily recover the performance of a smallernetwork through identity mappings.Figure 3: A network can have layers added to it without losing performance. Initially, a network hasmReLU layers with parameters fW1; : : : ; W mg. A new, (m+1)-th layer is added with Wm+1=I.This new layer will perform an identity mapping, therefore the two models are equivalent.The layer design of Highway Neural Networks and Residual Networks allows for deeper models tobe trained due to their shortcut connections. Note that in ResNets the identity mapping is learned3Under review as a conference paper at ICLR 2017when W= 0instead of W=I. Similarly, a Highway layer can degenerate into an identity mappingwhen the gating term T(x; W T)equals zero for all data points. Since learning identity mappingsin Highway Neural Networks strongly depends on the choice of the trasnform function T(and isnon-trivial when Tis the sigmoid function, since T1(0)is not defined) we will focus our analysison ResNets due to their simplicity. Considering a residual layer u=ReLU (hx; Wi) +x, we have:u=ReLU (hx;0i) +x=ReLU (0) + x=xIntuitively, residual layers can degenerate into identity mappings more effectively since learning anall-zero matrix is easier than learning the identity matrix. To support this argument, consider weightparameters randomly initialized with zero mean. Hence, the point W= 0 is located exactly in thecenter of the probability mass distribution used to initialize the weights.Recent work (Zhang et al. (2016)) suggests that the L2 norm of a critical point is an important factorregarding how easily the optimizer will reach it. More specifically, residual layers can be interpretedas a translation of the parameter set W=ItoW= 0, which is more accessible in the optimizationprocess due to its inferior L2 norm.However, assuming that residual layers can trivially learn the parameter set W= 0 implies ignor-ing the randomness when initializing the weights. We demonstrate this by calculating the expectedcomponent-wise distance between Woand the origin. Here, Wodenotes the weight tensor after ini-tialization and prior to any optimization. 
Note that the distance between W_o and the origin captures the effort for a network to learn identity mappings:

E[(W_o − 0)^2] = E[W_o^2] = Var[W_o]

Note that the distance is given by the distribution's variance, and there is no reason to assume it to be negligible. Additionally, the fact that Residual Networks still suffer from optimization issues caused by depth (Huang et al. (2016a)) further supports this claim.

Some initialization schemes propose a variance in the order of O(1/n) (Glorot & Bengio (2010), He et al. (2015a)), however this represents the distance for each individual parameter in W. For tensors with O(n^2) parameters, the total distance – either absolute or Euclidean – between W_o and the origin will be in the order of O(n).

2.2 RESIDUAL GATES

As previously mentioned, the key contribution in this work is the proposal of a layer augmentation technique where learning a single scalar parameter suffices in order for the layer to degenerate into an identity mapping, thus making optimization easier for increased depths. As in Highway Networks, we propose the addition of gated shortcut connections. Our gates, however, are parameterized by a single scalar value, being easier to analyze and learn. For layers augmented with our technique, the effort required to learn identity mappings does not depend on any parameter, such as the layer width, in sharp contrast to prior models.

Our design is as follows: a layer u = f(x; W) becomes u = g(k) f(x; W) + (1 − g(k)) x, where k is a scalar parameter. This design is illustrated in Figure 1. Note that such a layer can quickly degenerate by setting g(k) to 0. Using the ReLU activation function as g, it suffices that k ≤ 0 for g(k) = 0.

By adding an extra parameter, the dimensionality of the cost surface also grows by one. This new dimension, however, can be easily understood due to the specific nature of the layer reformulation. The original surface is maintained on the k = 1 slice, since the gated model becomes equivalent to the original one. On the k = 0 slice we have an identity mapping, and the associated cost for all points in such a slice is the same cost associated with the point {k = 1, W = I}: this follows since both parameter configurations correspond to identity mappings, therefore being equivalent. Lastly, due to the linear nature of g(k) and consequently of the gates, all other slices k ≠ 0, k ≠ 1 will be a linear combination between the slices k = 0 and k = 1.

In addition to augmenting plain layers, we also apply our technique to residual layers. Although it might sound counterintuitive to add residual gates to a residual layer, we can see in Figure 2 that our augmentation provides ResNets with a means to regulate the residuals, therefore a linear gating mechanism might not only allow deeper models, but could also improve performance. Having the original design of a residual layer as:

u = f(x; W) = f_r(x; W) + x

where f_r(x; W) is the layer's residual function – in our case, BN-ReLU-Conv-BN-ReLU-Conv. Our approach changes this layer by adding a linear gate, yielding:

u = g(k) f(x; W) + (1 − g(k)) x
  = g(k) (f_r(x; W) + x) + (1 − g(k)) x
  = g(k) f_r(x; W) + x

The resulting layer maintains the shortcut connection unaltered, which according to He et al. (2016) is a desired property when designing residual blocks. As (1 − g(k)) vanishes from the formulation, g(k) stops acting as a dual gating mechanism and can be interpreted as a flow regulator. Note that this model introduces a single scalar parameter per layer block.
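The two special slices discussed above are easy to check numerically for the gated plain layer: k = 1 recovers the original layer, and any k ≤ 0 recovers an identity mapping. A small self-contained sketch (shapes and scales are arbitrary):

```python
import numpy as np

def relu(a):
    return np.maximum(a, 0.0)

def gated_layer(x, W, k):
    """u = g(k) * f(x; W) + (1 - g(k)) * x, with f(x; W) = ReLU(<x, W>) and g = ReLU."""
    g = relu(k)
    return g * relu(x @ W) + (1.0 - g) * x

rng = np.random.default_rng(0)
x = relu(rng.normal(size=(8, 50)))                 # input coming from a previous ReLU layer
W = rng.normal(scale=0.1, size=(50, 50))

assert np.allclose(gated_layer(x, W, 1.0), relu(x @ W))   # k = 1: the original layer
assert np.allclose(gated_layer(x, W, -0.5), x)            # k <= 0: identity mapping
```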
This new dimension can beinterpreted as discussed above, except that the slice k= 0 is equivalent tofk= 1; W= 0g, sincean identity mapping is learned when W= 0in ResNets.3 E XPERIMENTSAll models were implemented on Keras (Chollet (2015)) or on Torch (Collobert et al. (2011)), andwere executed on a Geforce GTX 1070. Larger models or more complex datasets, such as theImageNet (Russakovsky et al. (2015)), were not explored due to hardware limitations.3.1 MNISTThe MNIST dataset (Lecun et al. (1998)) is composed of 60;000greyscale images with 2828pixels. Images represent handwritten digits, resulting in a total of 10 classes. We trained four typesof fully-connected models: classical plain networks, ResNets, Gated Plain networks and GatedResNets.The networks consist of a linear layer with 50 neurons, followed by dlayers with 50 neurons each,and lastly a softmax layer for classification. Only the dmiddle layers differ between the four archi-tectures – the first linear layer and the softmax layer are the same in all experiments.For plain networks, each layer performs dot product, followed by Batch Normalization and a ReLUactivation function.Initial tests with pre-activations (He et al. (2016)) resulted in poor performance on the validationset, therefore we opted for the traditional Dot-BN-ReLU layer when designing Residual Networks.Each residual block consists of two layers, as conventional.All networks were trained using Adam (Kingma & Ba (2014)) with Nesterov momentum (Dozat)for a total of 100 epochs using mini-batches of size 128. No learning rate decay was used: we keptthe learning rate and momentum fixed to 0:002and0:9during the whole training.For preprocessing, we divided each pixel value by 255, normalizing their values to [0;1].The training curves for plain networks, Gated PlainNets, ResNets and Gated ResNets with varyingdepth are shown in Figure 4. The distance between the curves increase with the depth, showing thatthe augmentation helps the training of deeper models.Table 1 shows the test error for each depth and architecture. Augmented models perform better inall settings when compared to the original ones, and the performance boost is more noticeable withincreased depths. Interestingly, Gated PlainNets performed better than ResNets, suggesting that thereason for Highway Neural Networks to underperform ResNets might be due to an overly complexgating mechanism.5Under review as a conference paper at ICLR 2017Figure 4: Train loss for plain and residual networks, along with their augmented counterparts, withd=f2;10;20;50;100g. As the models get deeper, the error reduction due to the augmentationincreases.Depth = d+ 2 Plain ResNet Gated PlainNet Gated ResNetd= 2 2.29 2.20 2.04 2.17d= 10 2.22 1.64 1.78 1.60d= 20 2.21 1.61 1.59 1.57d= 50 60.37 1.62 1.36 1.48d= 100 90.20 1.50 1.29 1.26Table 1: Test error (%) on the MNIST dataset for fully-connected networks. Augmented modelsoutperform their original counterparts in all experiments. Non-augmented plain networks performworse and fail to converge for d= 50 andd= 100 .Depth = d+ 2 Gated PlainNet Gated ResNetd= 2 10.57 5.58d= 10 1.19 2.54d= 20 0.64 1.73d= 50 0.46 1.04d= 100 0.41 0.67Table 2: Mean kfor increasingly deep Gated PlainNets and Gated ResNets.As observed in Table 2, the mean values of kdecrease as the model gets deeper, showing thatshortcut connections have less impact on shallow networks. This agrees with empirical results thatResNets perform better than classical plain networks as the depth increases. 
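For concreteness, the fully-connected MNIST setup described above (a 50-unit input projection, d gated Dot-BN-ReLU blocks, and a softmax classifier trained with a fixed learning rate of 0.002) can be sketched as follows; this is a reconstruction from the text, not the released code, and plain Adam stands in for the Adam-with-Nesterov-momentum optimizer used in the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedBlock(nn.Module):
    """One of the d middle blocks: Dot-BN-ReLU, gated by a scalar k initialized to 1."""
    def __init__(self, width=50):
        super().__init__()
        self.linear = nn.Linear(width, width)
        self.bn = nn.BatchNorm1d(width)
        self.k = nn.Parameter(torch.ones(1))

    def forward(self, x):
        g = F.relu(self.k)
        return g * F.relu(self.bn(self.linear(x))) + (1.0 - g) * x

def gated_plainnet(d, width=50, n_classes=10):
    """Linear(784 -> 50), d gated blocks, and a linear softmax head (logits returned)."""
    layers = [nn.Linear(28 * 28, width)]
    layers += [GatedBlock(width) for _ in range(d)]
    layers += [nn.Linear(width, n_classes)]
    return nn.Sequential(*layers)

model = gated_plainnet(d=20)
optimizer = torch.optim.Adam(model.parameters(), lr=0.002)  # mini-batches of 128, 100 epochs
```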
Note that the significantdifference between mean values for kin Gated PlainNets and Gated ResNets has an intuitive expla-nation: in order to suppress the residual signal against the shortcut connection, Gated PlainNetsrequire that k <0:5(otherwise the residual signal will be enhanced). Conversely, Gated ResNetssuppress the residual signal when k <1:0, and enhance it otherwise.We also analyzed how layer removal affects ResNets and Gated ResNets. We compared how thedeepest networks ( d= 100 ) behave as residual blocks composed of 2 layers are completely removedfrom the models. The final values for each kparameter, according to its corresponding residualblock, is shown in Figure 5. We can observe that layers close to the middle of the network have asmaller kthan these in the beginning or the end. Therefore, the middle layers have less importanceby due to being closer to identity mappings.6Under review as a conference paper at ICLR 2017Figure 5: Left: Values for kaccording to ascending order of residual blocks. The first block,consisted of the first two layers of the network, has index 1, while the last block – right before thesoftmax layer – has index 50. Right: Test accuracy (%) according to the number of removed layers.Gated Residual Networks are more robust to layer removal, and maintain decent results even afterhalf of the layers have been removed.Results are shown in Figure 5. For Gated Residual Networks, we prune pairs of layers followingtwo strategies. One consists of pruning layers in a greedy fashion, where blocks with the smallest kare removed first. In the other we remove blocks randomly. We present results using both strategiesfor Gated ResNets, and only random pruning for ResNets since they lack the kparameter.The greedy strategy is slightly better for Gated Residual Networks, showing that the kparameteris indeed a good indicator of a layer’s importance for the model, but that layers tend to assume thesame level of significance. In a fair comparison, where both models are pruned randomly, GatedResNets retain a satisfactory performance even after half of its layers have been removed, whileResNets suffer performance decrease after just a few layers.Therefore augmented models are not only more robust to layer removal, but can have a fair shareof their layers pruned and still perform well. Faster predictions can be generated by using a prunedversion of an original model.3.2 CIFARThe CIFAR datasets (Krizhevsky (2009)) consists of 60;000color images with 3232pixels each.CIFAR-10 has a total of 10 classes, including pictures of cats, birds and airplanes. The CIFAR-100dataset is composed of the same number of images, however with a total of 100 classes.Residual Networks have surpassed state-of-the-art results on CIFAR. We test Gated ResNets, WideGated ResNets (Zagoruyko & Komodakis (2016)) and compare them with their original, non-augmented models.For pre-activation ResNets, as described in He et al. (2016), we follow the original implementationdetails. We set an initial learning rate of 0.1, and decrease it by a factor of 10 after 50% and75% epochs. SGD with Nesterov momentum of 0.9 are used for optimization, and the only pre-processing consists of mean subtraction. Weight decay of 0.0001 is used for regularization, andBatch Normalization’s momentum is set to 0.9.We follow the implementation from Zagoruyko & Komodakis (2016) for Wide ResNets. The learn-ing rate is initialized as 0.1, and decreases by a factor of 5 after 30%, 60% and 80% epochs. 
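Stepping back to the layer-removal study above: because each residual block carries its own scalar k, the greedy strategy reduces to sorting blocks by their gate value and dropping the smallest first. A hedged sketch (the container and attribute names are assumptions, not the paper's code):

```python
import torch
import torch.nn as nn

def greedy_prune(blocks, n_remove):
    """Drop the n_remove gated residual blocks whose gate g(k) = ReLU(k) is smallest.

    `blocks` is an nn.ModuleList of gated residual blocks, each exposing a scalar
    parameter `k`; blocks with gates near zero are closest to identity mappings,
    so removing them should perturb the propagated signal the least."""
    gates = [float(torch.relu(block.k)) for block in blocks]
    keep = sorted(range(len(blocks)), key=lambda i: gates[i])[n_remove:]
    return nn.ModuleList(blocks[i] for i in sorted(keep))
```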
Imagesare mean/std normalized, and a weight decay of 0.0005 is used for regularization. We also apply 0.3dropout (Srivastava et al. (2014)) between convolutions, whenever specified. All other details arethe same as for ResNets.7Under review as a conference paper at ICLR 2017For both architectures we use moderate data augmentation: images are padded with 4 pixels, and wetake random crops of size 3232during training. Additionally, each image is horizontally flippedwith50% probability. We use batch size 128 for all experiments.For all gated networks, we initialize kwith a constant value of 1. One crucial question is whetherweight decay should be applied to the kparameters. We call this ” kdecay”, and also compare GatedResNets and Wide Gated ResNets when it is applied with the same magnitude of the weight decay:0.0001 for Gated ResNet and 0.0005 for Wide Gated ResNet.Model Original Gated Gated ( kdecay)Resnet 5 7.16 6.67 7.04Wide ResNet (4,10) + Dropout 3.89 3.65 3.74Table 3: Test error (%) on the CIFAR-10 dataset, for ResNets, Wide ResNets and their augmentedcounterparts. kdecay is when weight decay is also applied to the kparameters in an augmentednetwork. Results for the original models are as reported in He et al. (2015b) and Zagoruyko &Komodakis (2016).Table 3 shows the test error for two architectures: a ResNet with n= 5, and a Wide ResNet withn= 4,n= 10 . Augmenting each model adds 15 and 12 parameters, respectively. We observe thatkdecay hurts performance in both cases, indicating that they should either remain unregularized orsuffer a more subtle regularization compared to the weight parameters. Due to its direct connectionto layer degeneration, regularizing kresults in enforcing identity mappings, which might harm themodel.Due to the indications that a regularization on the kparameter results in a negative impact on themodel’s performance, we proceed to test other models – having different depths and widening fac-tors – with the goal of evaluating the effectiveness of our proposed augmentation. Tables 4 and 5show that augmented Wide ResNets outperform the original models without changing any hyperpa-rameter, both on CIFAR-10 and CIFAR-100.Model Original GatedWide ResNet (2,4) 5.02 4.66Wide ResNet (4,10) 4.00 3.82Wide ResNet (4,10) + Dropout 3.89 3.65Wide ResNet (8,1) 6.43 6.10Wide ResNet (6,10) + Dropout 3.80 3.63Table 4: Test error (%) on the CIFAR-10 dataset, for Wide ResNets and their augmented counter-parts. Results for non-gated Wide ResNets are from Zagoruyko & Komodakis (2016).Model Original GatedWide ResNet (2,4) 24.03 23.29Wide ResNet (4,10) 19.25 18.89Wide ResNet (4,10) + Dropout 18.85 18.27Wide ResNet (8,1) 29.89 28.20Table 5: Test error (%) on the CIFAR-100 dataset, for Wide ResNets and their augmented counter-parts. Results for non-gated Wide ResNets are from Zagoruyko & Komodakis (2016).As in the previous experiment, in Figure 6 we present the final kvalues for each block, in this case ofthe Wide ResNet (4,10) on CIFAR-10. We can observe that the kvalues follow an intriguing pattern:the lowest values are for the blocks of index 1,5and9, which are exactly the ones that increase thefeature map dimension. 
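Returning briefly to the "k decay" ablation above: whether the gates are regularized is just a question of which parameters receive weight decay in the optimizer. One way to leave the k parameters unregularized, sketched with PyTorch parameter groups (the naming convention for the gate parameters is an assumption):

```python
import torch

def make_sgd(model, lr=0.1, weight_decay=5e-4):
    """SGD with Nesterov momentum; weight decay applies to everything except the gates k."""
    gates = [p for name, p in model.named_parameters() if name.endswith(".k")]
    others = [p for name, p in model.named_parameters() if not name.endswith(".k")]
    return torch.optim.SGD(
        [{"params": others, "weight_decay": weight_decay},
         {"params": gates, "weight_decay": 0.0}],
        lr=lr, momentum=0.9, nesterov=True)
```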
This indicates that, in such residual blocks, the convolution performedin the shortcut connection to increase dimension is more important than the residual block itself.Additionally, the peak value for the last residual block suggests that its shortcut connection is oflittle importance, and could as well be fully removed without greatly impacting the model.Figure 7 shows the loss curves for Gated Wide ResNet (4,10) + Dropout, both on CIFAR-10 andCIFAR-100. The optimization behaves similarly to the original model, suggesting that the gates do8Under review as a conference paper at ICLR 2017Figure 6: Values for kaccording to ascending order of residual blocks. The first block, consisted ofthe first two layers of the network, has index 1, while the last block – right before the softmax layer– has index 12.Figure 7: Training and test curves for the Wide ResNet (4,10) with 0.3 dropout, showing error (%)on training and test sets. Dashed lines represent training error, whereas solid lines represent testerror.not have any side effects on the network. The performance gains presented on Table 4 point that,however predictable and extremely simple, our augmentation technique is powerful enough to aidon the optimization of state-of-the-art models.Results of different models on the CIFAR datasets are shown in Table 6. The training and test errorsare presented in Figure 7. To the authors’ knowledge, those are the best results on CIFAR-10 andCIFAR-100 with moderate data augmentation – only random flips and translations.3.3 I NTERPRETATIONGreff et al. (2016) showed how Residual and Highway layers can be interpreted as performingiterative refinements on learned representations. In this view, there is a connection on a layer’slearned parameters and the level of refinement applied on its input: for Highway Neural Networks,T(x)having components close to 1results in a layer that generates completely new representations.As seen before, components close to 0result in an identity mapping, meaning that the representationsare not refined at all.9Under review as a conference paper at ICLR 2017Method Params C10+ C100+Network in Network (Lin et al. (2013)) - 8.81 -FitNet (Romero et al. (2014)) - 8.39 35.04Highway Neural Network (Srivastava et al. (2015)) 2.3M 7.76 32.39All-CNN (Springenberg et al. (2014)) - 7.25 33.71ResNet-110 (He et al. (2015b)) 1.7M 6.61 -ResNet in ResNet (Targ et al. (2016)) 1.7M 5.01 22.90Stochastic Depth (Huang et al. (2016a)) 10.2M 4.91 -ResNet-1001 (He et al. (2016)) 10.2M 4.62 22.71FractalNet (Larsson et al. (2016)) 38.6M 4.60 23.73Wide ResNet (4,10) (Zagoruyko & Komodakis (2016)) 36.5M 3.89 18.85DenseNet (Huang et al. (2016b)) 27.2M 3.74 19.25Wide GatedResNet (4,10) + Dropout 36.5M 3.65 18.27Table 6: Test error (%) on the CIFAR-10 and CIFAR-100 dataset. All results are with standard dataaugmentation (crops and flips).However, the dependency of T(x)on the incoming data makes it difficult to analyze the level of re-finement performed by a layer given its parameters. This is more clearly observed once we considerhow each component of T(x)is a function not only on the parameter set WT, but also on x.In particular, given the mapping performed by a layer, we can estimate how much more abstractits representations are compared to the inputs. For our technique, this estimation can be done byobserving the kparameter of the corresponding layer: in Gated PlainNets, k= 0 corresponds toan identity mapping, and therefore there is no modification on the learned representations. 
Fork= 1, the shortcut connection is ignored and therefore a jump in the representation’s complexity isobserved.For Gated ResNets, the shortcut connection is never completely ignored in the generation of output.However, we can see that as kgrows to infinity the shortcut connection’s contribution goes to zero,and the learned representation becomes more abstract compared to the layer’s inputs.Table 6 shows how the layers that change the data dimensionality learn more abstract representationscompared to dimensionality-preserving layers, which agrees with Greff et al. (2016). The last layer’skvalue, which is the biggest among the whole model, indicates a severe jump in the abstraction ofits representation, and is intuitive once we see the model as being composed of two main stages: aconvolutional one and a fully-connected one, specific for classification.Finally, Table 2 shows that the abstraction jumps decrease as the model grows deeper and the per-formance increases. This agrees with the idea that depth allows for more refined representations tobe learned. We believe that an extensive analysis on the rate that these measures – depth, abstractionjumps and performance – interact with each other could bring further understanding on the practicalbenefits of depth in networks.4 C ONCLUSIONWe have proposed a novel layer augmentation technique that facilitates the optimization of deepnetworks by making identity mappings easy to learn. Unlike previous models, layers augmented byour technique require optimizing only one parameter to degenerate into identity, and by designingour method such that randomly initialized parameter sets are always close to identity mappings, ourdesign offers less optimization issues caused by depth.Our experiments showed that augmenting plain and residual layers improves performance and fa-cilitates learning in settings with increased depth. In the MNIST dataset, augmented plain networksoutperformed ResNets, suggesting that models with gated shortcut connections – such as HighwayNeural Networks – could be further improved by redesigning the gates.We have shown that applying our technique to ResNets yield a model that can regulate the resid-uals. This model performed better in all our experiments with negligible extra training time and10Under review as a conference paper at ICLR 2017parameters. Lastly, we have shown how it can be used for layer pruning, effectively removing largenumbers of parameters from a network without necessarily harming its performance.REFERENCESY . Bengio, P. Simard, and P Frasconi. Learning long-term dependencies with gradient descent isdifficult. IEEE Transactions on Neural Networks , 1994.Y . Bengio, P. Lamblin, D Popovici, and H Larochelle. Greedy layer-wise training of deep networks.NIPS , 2007.Y . Bengio, A. Courville, and P. Vincent. Representation Learning: A Review and New Perspectives.ArXiv e-prints , June 2012.Monica Bianchini and Franco Scarselli. On the complexity of neural network classifiers: A com-parison between shallow and deep architectures. IEEE Transactions on Neural Networks andLearning Systems , 25(8):1553 – 1565, 2014. doi: 10.1109/TNNLS.2013.2293637.Franois Chollet. keras. https://github.com/fchollet/keras , 2015.Ronan Collobert, Koray Kavukcuoglu, and Cl ́ement Farabet. Torch7: A matlab-like environmentfor machine learning. In BigLearn, NIPS Workshop , 2011.Timothy Dozat. Incorporating nesterov momentum into adam.R. Eldan and O. Shamir. The Power of Depth for Feedforward Neural Networks. ArXiv e-prints ,December 2015.X. 
Glorot and Y . Bengio. Understanding the difficulty of training deep feedforward neural networks.AISTATS, , 2010.Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neu-ral networks. In In Proceedings of the International Conference on Artificial Intelligence andStatistics (AISTATS10). Society for Artificial Intelligence and Statistics , 2010.Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. InGeoffrey J. Gordon and David B. Dunson (eds.), Proceedings of the Fourteenth InternationalConference on Artificial Intelligence and Statistics (AISTATS-11) , volume 15, pp. 315–323. Jour-nal of Machine Learning Research - Workshop and Conference Proceedings, 2011. URL http://www.jmlr.org/proceedings/papers/v15/glorot11a/glorot11a.pdf .K. Greff, R. K. Srivastava, and J. Schmidhuber. Highway and Residual Networks learn UnrolledIterative Estimation. ArXiv e-prints , December 2016.K. He, X. Zhang, S. Ren, and J. Sun. Delving Deep into Rectifiers: Surpassing Human-LevelPerformance on ImageNet Classification. ArXiv e-prints , February 2015a.K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. ArXiv e-prints ,December 2015b.K. He, X. Zhang, S. Ren, and J. Sun. Identity Mappings in Deep Residual Networks. ArXiv e-prints ,March 2016.G. Huang, Y . Sun, Z. Liu, D. Sedra, and K. Weinberger. Deep Networks with Stochastic Depth.ArXiv e-prints , March 2016a.J. Huang, Z. Liu, and Q. Weinberger. Densely connected convolutional networks. ArXiv e-prints ,2016b.S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducinginternal covariate shift. ICML , 2015.D. Kingma and J. Ba. Adam: A Method for Stochastic Optimization. ArXiv e-prints , December2014.11Under review as a conference paper at ICLR 2017Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.Hugo Larochelle, Yoshua Bengio, J ́erˆome Louradour, and Pascal Lamblin. Exploring strategies fortraining deep neural networks. J. Mach. Learn. Res. , 10:1–40, June 2009. ISSN 1532-4435. URLhttp://dl.acm.org/citation.cfm?id=1577069.1577070 .G. Larsson, M. Maire, and G. Shakhnarovich. FractalNet: Ultra-Deep Neural Networks withoutResiduals. ArXiv e-prints , May 2016.Yann Lecun, Lon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied todocument recognition. In Proceedings of the IEEE , pp. 2278–2324, 1998.M. Lin, Q. Chen, and S. Yan. Network In Network. ArXiv e-prints , December 2013.G. Mont ́ufar, R. Pascanu, K. Cho, and Y . Bengio. On the Number of Linear Regions of Deep NeuralNetworks. ArXiv e-prints , February 2014.Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted boltzmann ma-chines. In Johannes Frnkranz and Thorsten Joachims (eds.), Proceedings of the 27th Inter-national Conference on Machine Learning (ICML-10) , pp. 807–814. Omnipress, 2010. URLhttp://www.icml2010.org/papers/432.pdf .A. Romero, N. Ballas, S. Ebrahimi Kahou, A. Chassang, C. Gatta, and Y . Bengio. FitNets: Hintsfor Thin Deep Nets. ArXiv e-prints , December 2014.Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, ZhihengHuang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei.ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision(IJCV) , 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y.J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. 
Striving for Simplicity: The AllConvolutional Net. ArXiv e-prints , December 2014.Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdi-nov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Ma-chine Learning Research , 15:1929–1958, 2014. URL http://jmlr.org/papers/v15/srivastava14a.html .Rupesh Kumar Srivastava, Klaus Greff, and J ̈urgen Schmidhuber. Training very deep networks.CoRR , abs/1507.06228, 2015. URL http://arxiv.org/abs/1507.06228 .Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott E. Reed, Dragomir Anguelov,Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions.CoRR , abs/1409.4842, 2014. URL http://arxiv.org/abs/1409.4842 .S. Targ, D. Almeida, and K. Lyman. Resnet in Resnet: Generalizing Residual Architectures. ArXive-prints , March 2016.M. Telgarsky. Benefits of depth in neural networks. ArXiv e-prints , February 2016.B. Xu, R. Huang, and M. Li. Revise Saturated Activation Functions. ArXiv e-prints , February 2016.Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. CoRR , abs/1605.07146, 2016.URLhttp://arxiv.org/abs/1605.07146 .C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requiresrethinking generalization. ArXiv e-prints , November 2016.12
S1EQUMMEe
HyTqHL5xg
ICLR.cc/2017/conference/-/paper296/official/review
{"title": "interesting way of learning nonlinear state space models", "rating": "7: Good paper, accept", "review": "This paper presents a variational inference based method for learning nonlinear dynamical systems. Unlike the deep Kalman filter, the proposed method learns a state space model, which forces the latent state to maintain all of the information relevant to predictions, rather than leaving it implicit in the observations. Experiments show the proposed method is better able to learn meaningful representations of sequence data.\n\nThe proposed DVBF is well motivated, and for the most part the presentation is clear. The experiments show interesting results on illustrative toy examples. I think the contribution is interesting and potentially useful, so I\u2019d recommend acceptance.\n\nThe SVAE method of Johnson et al. (2016) deserves more discussion than the two sentences devoted to it, since the method seems pretty closely related. Like the DVBF, the SVAE imposes a Markovianity assumption, and it is able to handle similar kinds of problems. From what I understand, the most important algorithmic difference is that the SVAE q network predicts potentials, whereas the DVBF q network predicts innovations. What are the tradeoffs between the two? Section 2.2 says they do the latter in the interest of solving control-related tasks, but I\u2019m not clear why this follows. \n\nIs there a reason SVAEs don\u2019t meet all the desiderata mentioned at the end of the Introduction?\n\nSince the SVAE code is publicly available, one could probably compare against it in the experiments. \n\nI\u2019m a bit confused about the role of uncertainty about v. In principle, one could estimate the transition parameters by maximum likelihood (i.e. fitting a point estimate of v), but this isn\u2019t what\u2019s done. Instead, v is integrated out as part of the marginal likelihood, which I interpret as giving the flexibility to model different dynamics for different sequences. But if this is the case, then shouldn\u2019t the q distribution for v depend on the data, rather than being data-independent as in Eqn. (9)?\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data
["Maximilian Karl", "Maximilian Soelch", "Justin Bayer", "Patrick van der Smagt"]
We introduce Deep Variational Bayes Filters (DVBF), a new method for unsupervised learning and identification of latent Markovian state space models. Leveraging recent advances in Stochastic Gradient Variational Bayes, DVBF can overcome intractable inference distributions via variational inference. Thus, it can handle highly nonlinear input data with temporal and spatial dependencies such as image sequences without domain knowledge. Our experiments show that enabling backpropagation through transitions enforces state space assumptions and significantly improves information content of the latent embedding. This also enables realistic long-term prediction.
["Deep learning", "Unsupervised Learning"]
https://openreview.net/forum?id=HyTqHL5xg
https://openreview.net/pdf?id=HyTqHL5xg
https://openreview.net/forum?id=HyTqHL5xg&noteId=S1EQUMMEe
Published as a conference paper at ICLR 2017DEEPVARIATIONAL BAYES FILTERS : UNSUPERVISEDLEARNING OF STATE SPACE MODELS FROM RAWDATAMaximilian Karl, Maximilian Soelch, Justin Bayer, Patrick van der SmagtData Lab, V olkswagen Group, 80805, München, Germanyzip([maximilian.karl, maximilian.soelch], [@volkswagen.de])ABSTRACTWe introduce Deep Variational Bayes Filters (DVBF), a new method for unsuper-vised learning and identification of latent Markovian state space models. Leverag-ing recent advances in Stochastic Gradient Variational Bayes, DVBF can overcomeintractable inference distributions via variational inference. Thus, it can handlehighly nonlinear input data with temporal and spatial dependencies such as imagesequences without domain knowledge. Our experiments show that enabling back-propagation through transitions enforces state space assumptions and significantlyimproves information content of the latent embedding. This also enables realisticlong-term prediction.1 I NTRODUCTIONEstimating probabilistic models for sequential data is central to many domains, such as audio, naturallanguage or physical plants, Graves (2013); Watter et al. (2015); Chung et al. (2015); Deisenroth &Rasmussen (2011); Ko & Fox (2011). The goal is to obtain a model p(x1:T)that best reflects a dataset of observed sequences x1:T. Recent advances in deep learning have paved the way to powerfulmodels capable of representing high-dimensional sequences with temporal dependencies, e.g., Graves(2013); Watter et al. (2015); Chung et al. (2015); Bayer & Osendorfer (2014).Time series for dynamic systems have been studied extensively in systems theory, cf. McGoff et al.(2015) and sources therein. In particular, state space models have shown to be a powerful tool toanalyze and control the dynamics. Two tasks remain a significant challenge to this day: Can weidentify the governing system from data only? And can we perform inference from observables to thelatent system variables? These two tasks are competing: A more powerful representation of systemrequires more computationally demanding inference, and efficient inference, such as the well-knownKalman filters, Kalman & Bucy (1961), can prohibit sufficiently complex system classes.Leveraging a recently proposed estimator based on variational inference, stochastic gradient varia-tional Bayes (SGVB, Kingma & Welling (2013); Rezende et al. (2014)), approximate inference oflatent variables becomes tractable. Extensions to time series have been shown in Bayer & Osendorfer(2014); Chung et al. (2015). Empirically, they showed considerable improvements in marginal datalikelihood, i.e., compression, but lack full-information latent states, which prohibits, e.g., long-termsampling. Yet, in a wide range of applications, full-information latent states should be valued overcompression. This is crucial if the latent spaces are used in downstream applications.Our contribution is, to our knowledge, the first model that (i) enforces the latent state-space modelassumptions, allowing for reliable system identification, and plausible long-term prediction of theobservable system, (ii) provides the corresponding inference mechanism with rich dependencies,(iii) inherits the merit of neural architectures to be trainable on raw data such as images or othersensory inputs, and (iv) scales to large data due to optimization of parameters based on stochasticgradient descent, Bottou (2010). 
Hence, our model has the potential to exploit systems theorymethodology for downstream tasks, e.g., control or model-based reinforcement learning, Sutton(1996).1Published as a conference paper at ICLR 20172 B ACKGROUND AND RELATED WORK2.1 P ROBABILISTIC MODELING AND FILTERING OF DYNAMICAL SYSTEMSWe consider non-linear dynamical systems with observations xt2X Rnx, depending on controlinputs (oractions )ut2U Rnu. Elements ofXcan be high-dimensional sensory data, e.g., rawimages. In particular they may exhibit complex non-Markovian transitions. Corresponding time-discrete sequences of length T are denoted as x1:T= (x1;x2;:::;xT)andu1:T= (u1;u2;:::;uT).We are interested in a probabilistic model1p(x1:Tju1:T). Formally, we assume the graphical modelp(x1:Tju1:T) =Zp(x1:Tjz1:T;u1:T)p(z1:Tju1:T) dz1:T; (1)where z1:T;zt2Z Rnz;denotes the corresponding latent sequence. That is, we assume a gener-ative model with an underlying latent dynamical system with emission model p(x1:Tjz1:T;u1:T)andtransition model p(z1:Tju1:T). We want to learn both components, i.e., we want to performlatent system identification . In order to be able to apply the identified system in downstream tasks, weneed to find efficient posterior inference distributions p(z1:Tjx1:T). Three common examples areprediction, filtering, and smoothing: inference of ztfromx1:t1,x1:t, orx1:T, respectively. Accurateidentification and efficient inference are generally competing tasks, as a wider generative model classtypically leads to more difficult or even intractable inference.The transition model is imperative for achieving good long-term results: a bad transition model canlead to divergence of the latent state. Accordingly, we put special emphasis on it through a Bayesiantreatment. Assuming that the transitions may differ for each time step, we impose a regularizing priordistribution on a set of transition parameters 1:T:(1)=ZZp(x1:Tjz1:T;u1:T)p(z1:Tj1:T;u1:T)p(1:T) d1:Tdz1:T (2)To obtain state-space models, we impose assumptions on emission and state transition model,p(x1:Tjz1:T;u1:T) =TYt=1p(xtjzt); (3)p(z1:Tj1:T;u1:T) =T1Yt=0p(zt+1jzt;ut;t): (4)Equations (3) and (4) assume that the current state ztcontains all necessary information about thecurrent observation xt, as well as the next state zt+1(given the current control input utand transitionparameters t). That is, in contrast to observations, ztexhibits Markovian behavior.A typical example of these assumptions are Linear Gaussian Models (LGMs), i.e., both state transitionand emission model are affine transformations with Gaussian offset noise,zt+1=Ftzt+Btut+wt wtN(0;Qt); (5)xt=Htzt+yt ytN(0;Rt): (6)Typically, state transition matrix Ftandcontrol-input matrix Btare assumed to be given, so thatt=wt. Section 3.3 will show that our approach allows other variants such as t= (Ft;Bt;wt).Under the strong assumptions (5)and(6)of LGMs, inference is provably solved optimally by thewell-known Kalman filters. While extensions of Kalman filters to nonlinear dynamical systems exist,Julier & Uhlmann (1997), and are successfully applied in many areas, they suffer from two majordrawbacks: firstly, its assumptions are restrictive and are violated in practical applications, leading tosuboptimal results. Secondly, parameters such as FtandBthave to be known in order to performposterior inference. There have been efforts to learn such system dynamics, cf. Ghahramani & Hinton(1996); Honkela et al. 
(2010) based on the expectation maximization (EM) algorithm or Valpola &Karhunen (2002), which uses neural networks. However, these algorithms are not applicable in cases1Throughout this paper, we consider u1:Tas given. The case without any control inputs can be recovered bysetting U=;, i.e., not conditioning on control inputs.2Published as a conference paper at ICLR 2017where the true posterior distribution is intractable. This is the case if, e.g., image sequences are used,since the posterior is then highly nonlinear—typical mean-field assumptions on the approximateposterior are too simplified. Our new approach will tackle both issues, and moreover learn bothidentification and inference jointly by exploiting Stochastic Gradient Variational Bayes.2.2 S TOCHASTIC GRADIENT VARIATIONAL BAYES (SGVB) FOR TIMESERIESDISTRIBUTIONSReplacing the bottleneck layer of a deterministic auto-encoder with stochastic units z, the variationalauto-encoder (V AE, Kingma & Welling (2013); Rezende et al. (2014)) learns complex marginal datadistributions on xin an unsupervised fashion from simpler distributions via the graphical modelp(x) =Zp(x;z) dz=Zp(xjz)p(z) dz:In V AEs,p(xjz)p(xjz)is typically parametrized by a neural network with parameters .Within this framework, models are trained by maximizing a lower bound to the marginal datalog-likelihood via stochastic gradients:lnp(x)Eq(zjx)[lnp(xjz)]KL(q(zjx)jjp(z)) =:LSGVB (x;;) (7)This is provably equivalent to minimizing the KL-divergence between the approximate posterior orrecognition model q(zjx)and the true, but usually intractable posterior distribution p(zjx).qisparametrized by a neural network with parameters .The principle of V AEs has been transferred to time series, Bayer & Osendorfer (2014); Chung et al.(2015). Both employ nonlinear state transitions in latent space, but violate eq. (4): Observationsare directly included in the transition process. Empirically, reconstruction and compression workwell. The state space Z, however, does not reflect all information available, which prohibits plausiblegenerative long-term prediction. Such phenomena with generative models have been explained inTheis et al. (2015).In Krishnan et al. (2015), the state-space assumptions (3)and(4)are softly encoded in the DeepKalman Filter (DKF) model. Despite that, experiments, cf. section 4, show that their model fails toextract information such as velocity (and in general time derivatives), which leads to similar problemswith prediction.Johnson et al. (2016) give an algorithm for general graphical model variational inference, not tailoredto dynamical systems. In contrast to previously discussed methods, it does not violate eq. (4). Theapproaches differ in that the recognition model outputs node potentials in combination with messagepassing to infer the latent state. Our approach focuses on learning dynamical systems for control-related tasks and therefore uses a neural network for inferring the latent state directly instead of aninference subroutine.Others have been specifically interested in applying variational inference for controlled dynamicalsystems. In Watter et al. (2015) (Embed to Control—E2C), a V AE is used to learn the mappingsto and from latent space. The regularization is clearly motivated by eq. (7). Still, it fails to bea mathematically correct lower bound to the marginal data likelihood. More significantly, theirrecognition model requires all observations that contain information w.r.t. the current state. This isnothing short of an additional temporal i.i.d. 
assumption on data: Multiple raw samples need to bestacked into one training sample such that all latent factors (in particular all time derivatives) arepresent within one sample. The task is thus greatly simplified, because instead of time-series, welearn a static auto-encoder on the processed data.A pattern emerges: good prediction should boost compression. Still, previous methods empiricallyexcel at compression, while prediction will not work. We conjecture that this is caused by previousmethods trying to fit the latent dynamics to a latent state that is beneficial for reconstruction . Thisencourages learning of a stationary auto-encoder with focus of extracting as much from a singleobservation as possible. Importantly, it is not necessary to know the entire sequence for excellentreconstruction of single time steps. Once the latent states are set, it is hard to adjust the transition tothem. This would require changing the latent states slightly, and that comes at a cost of decreasingthe reconstruction (temporarily). The learning algorithm is stuck in a local optimum with goodreconstruction and hence good compression only. Intriguingly, E2C bypasses this problem with itsdata augmentation.3Published as a conference paper at ICLR 2017zt+1xt+1 wt vtutztt(a) Forward graphicalmodel.zt+1xt+1 wt vtutztt(b) Inference.Figure 1: Left: Graphical model for one transition under state-space model assumptions. The updatedlatent state zt+1depends on the previous state zt, control input ut, and transition parameters t.zt+1contains all information for generating observation xt+1. Diamond nodes indicate a deterministicdependency on parent nodes. Right: Inference performed during training (or while filtering). Pastobservations are indirectly used for inference as ztcontains all information about them.This leads to a key contribution of this paper: We force the latent space to fit the transition —reversingthe direction, and thus achieving the state-space model assumptions and full information in the latentstates.3 D EEPVARIATIONAL BAYES FILTERS3.1 R EPARAMETRIZING THE TRANSITIONThe central problem for learning latent states system dynamics is efficient inference of a latent spacethat obeys state-space model assumptions . If the latter are fulfilled, the latent space must contain allinformation. Previous approaches emphasized good reconstruction, so that the space only containsinformation necessary for reconstruction of one time step. To overcome this, we establish gradientpaths through transitions over time so that the transition becomes the driving factor for shaping thelatent space, rather than adjusting the transition to the recognition model’s latent space. The key is toprevent the recognition model q(z1:Tjx1:T)from directly drawing the latent state zt.Similar to the reparametrization trick from Kingma & Welling (2013); Rezende et al. (2014) for mak-ing the Monte Carlo estimate differentiable w.r.t. the parameters, we make the transition differentiablew.r.t. the last state and its parameters:zt+1=f(zt;ut;t) (8)Given the stochastic parameters t, the state transition is deterministic (which in turn means that bymarginalizing t, we still have a stochastic transition). The immediate and crucial consequence isthat errors in reconstruction of xtfromztare backpropagated directly through time.This reparametrization has a couple of other important implications: the recognition model nolonger infers latent states zt, but transition parameters t. 
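A rough sketch of this mechanism (my own PyTorch-style pseudocode, not the released implementation): the recognition network proposes only the innovation w_t, the next latent state is computed deterministically from (z_t, u_t, w_t), and the decoder reconstructs from the returned states, so reconstruction errors flow back through every transition. An isotropic standard-normal prior on w_t is assumed for the KL term, and `transition` and `recognition` are user-supplied callables:

```python
import torch

def dvbf_rollout(x, u, z1, transition, recognition):
    """Unroll z_{t+1} = f(z_t, u_t, w_t) with w_t drawn via the reparametrization
    trick from q(w_t | z_t, x_{t+1}, u_t).

    Assumed shapes: x (T, B, dx), u (T, B, du), z1 (B, dz).
    Returns the latent trajectory and the summed KL(q(w) || N(0, I)) per sequence."""
    zs, kls = [z1], []
    z = z1
    for t in range(x.shape[0] - 1):
        mean, logvar = recognition(z, x[t + 1], u[t])
        w = mean + torch.exp(0.5 * logvar) * torch.randn_like(mean)
        kls.append(0.5 * (mean.pow(2) + logvar.exp() - logvar - 1.0).sum(-1))
        z = transition(z, u[t], w)          # deterministic given w_t: errors in
        zs.append(z)                        # reconstructing x_{t+1} reach z_t, z_{t-1}, ...
    return torch.stack(zs), torch.stack(kls).sum(0)
```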
In particular, the gradient @zt+1=@ztiswell-defined from (8)—gradient information can be backpropagated through the transition.This is different from the method used in Krishnan et al. (2015), where the transition only occurs inthe KL-divergence term of their loss function (a variant of eq. (7)). No gradient from the generativemodel is backpropagated through the transitions.Much like in eq. (5), the stochastic parameters includes a corrective offset term wt, which emphasizesthe notion of the recognition model as a filter. In theory, the learning algorithm could still learn thetransition as zt+1=wt. However, the introduction of talso enables us to regularize the transitionwith meaningful priors, which not only prevents overfitting the recognition model, but also enforcesmeaningful manifolds in the latent space via transition priors . Ignoring the potential of the transitionover time yields large penalties from these priors. Thus, the problems outlined in Section 2 areovercome by construction.To install such transition priors, we split t= (wt;vt). The interpretation of wtis a sample-specificprocess noise which can be inferred from incoming data, like in eq. (5). On the other hand, vt4Published as a conference paper at ICLR 2017q(wtj)the input/conditional is task-dependentq(vt)tq(t) =q(wtj)q(vt)transition in latent state spacezt+1=f(zt;ut;t)zt zt+1utp(xt+1jzt+1)(a) General scheme for arbitrary transitions.zt utvtwttt=f (zt;ut)(e.g., neural network)(A;B;C)t=PMi=1(i)t(A;B;C)(i)zt+1=Atzt+Btut+Ctwtzt+1(b) One particular example of a latent transition: locallinearity.Figure 2: Left: General architecture for DVBF. Stochastic transition parameters tare inferredvia the recognition model, e.g., a neural network. Based on a sampled t, the state transition iscomputed deterministically. The updated latent state zt+1is used for predicting xt+1. For details, seesection 3.1. Right: Zoom into latent space transition (red box in left figure). One exemplary transitionis shown, the locally linear transition from section 3.3.are universal transition parameters, which are sample-independent (and are only inferred from dataduring training). This corresponds to the idea of weight uncertainty in Hinton & Van Camp (1993).This interpretation leads to a natural factorization assumption on the recognition model:q(1:Tjx1:T) =q(w1:Tjx1:T)q(v1:T) (9)When using the fully trained model for generative sampling, i.e., sampling without input, the universalstate transition parameters can still be drawn from q(v1:T), whereas w1:Tis drawn from the prior inthe absence of input data.Figure 1 shows the underlying graphical model and the inference procedure. Figure 2a shows ageneric view on our new computational architecture. An example of a locally linear transitionparametrization will be given in section 3.3.3.2 T HELOWER BOUND OBJECTIVE FUNCTIONIn analogy to eq. (7), we now derive a lower bound to the marginal likelihood p(x1:Tju1:T). Afterreflecting the Markov assumptions (3) and (4) in the factorized likelihood (2), we have:p(x1:Tju1:T) =ZZp(1:T)TYt=1p(xtjzt)T1Yt=0p(zt+1jzt;ut;t) d1:Tdz1:TDue to the deterministic transition given t+1, the last term is a product of Dirac distributions andthe overall distribution simplifies greatly:p(x1:Tju1:T) =Zp(1:T)TYt=1p(xtjzt)zt=f(zt1;ut1;t1)d1:T=Zp(1:T)p(x1:Tjz1:T) d1:T5Published as a conference paper at ICLR 2017The last formulation is for notational brevity: the term p(x1:Tjz1:T)isnotindependent of 1:Tandu1:T. 
We now derive the objective function, a lower bound to the data likelihood:lnp(x1:Tju1:T) = lnZp(1:T)p(x1:Tjz1:T)q(1:Tjx1:T;u1:T)q(1:Tjx1:T;u1:T)d1:TZq(1:Tjx1:T;u1:T) lnp(x1:Tjz1:T)p(1:T)q(1:Tjx1:T;u1:T)d1:T=Eq[lnp(x1:Tjz1:T)lnq(1:Tjx1:T;u1:T) + lnp(1:T)] (10)=Eq[lnp(x1:Tjz1:T)]KL(q(1:Tjx1:T;u1:T)jjp(1:T)) (11)=:LDVBF (x1:T;;ju1:T)Our experiments show that an annealed version of (10) is beneficial to the overall performance:(100) =Eq[cilnp(x1:Tjz1:T)lnq(1:Tjx1:T;u1:T) +cilnp(w1:T) + lnp(v1:T)]Here,ci= max(1;0:01 +i=TA)is an inverse temperature that increases linearly in the number ofgradient updates iuntil reaching 1 after TAannealing iterations. Similar annealing schedules havebeen applied in, e.g., Ghahramani & Hinton (2000); Mandt et al. (2016); Rezende & Mohamed (2015),where it is shown that they smooth the typically highly non-convex error landscape. Additionally, thetransition prior p(v1:T)was estimated during optimization, i.e., through an empirical Bayes approach.In all experiments, we used isotropic Gaussian priors.3.3 E XAMPLE : LOCALLY LINEAR TRANSITIONSWe have derived a learning algorithm for time series with particular focus on general transitions inlatent space. Inspired by Watter et al. (2015), this section will show how to learn a particular instance:locally linear state transitions. That is, we set eq. (8) tozt+1=Atzt+Btut+Ctwt; t = 1;:::;T; (12)where wtis a stochastic sample from the recognition model and At;Bt;andCtare matrices ofmatching dimensions. They are stochastic functions of ztandut(thus local linearity). We drawvt=nA(i)t;B(i)t;C(i)tji= 1;:::;Mo;fromq(vt), i.e.,Mtriplets of matrices, each corresponding to data- independent , but learned globallylinear system. These can be learned as point estimates. We employed a Bayesian treatment as inBlundell et al. (2015). We yield At;Bt;andCtas state- and control- dependent linear combinations:At=MXi=1(i)tA(i)tt=f (zt;ut)2RMBt=MXi=1(i)tB(i)tCt=MXi=1(i)tC(i)tThe computation is depicted in fig. 2b. The function f can be, e.g., a (deterministic) neural networkwith weights . As a subset of the generative parameters , is part of the trainable parameters ofour model. The weight vector tis shared between the three matrices. There is a correspondence toeq. (5): AtandFt,BtandBt, as well as CtC>tandQtare related.We used this parametrization of the state transition model for our experiments. It is important that theparametrization is up to the user and the respective application.4 E XPERIMENTS AND RESULTSIn this section we validate that DVBF with locally linear transitions (DVBF-LL) (section 3.3)outperforms Deep Kalman Filters (DKF, Krishnan et al. (2015)) in recovering latent spaces withfull information.2We focus on environments that can be simulated with full knowledge of the2We do not include E2C, Watter et al. (2015), due to the need for data modification and its inability toprovide a correct lower bound as mentioned in section 2.2.6Published as a conference paper at ICLR 2017(a) DVBF-LL (b) DKFFigure 3: (a) Our DVBF-LL model trained on pendulum image sequences. The upper plots show thelatent space with coloring according to the ground truth with angles on the left and angular velocitieson the right. The lower plots show regression results for predicting ground truth from the latentrepresentation. The latent space plots show clearly that all information for representing the fullstate of a pendulum is encoded in each latent state. (b) DKF from Krishnan et al. (2015) trainedon the same pendulum dataset. 
The latent space plot shows that DKF fails to learn velocities of thependulum. It is therefore not able to capture all information for representing the full pendulum state.ground truth latent dynamical system. The experimental setup is described in the SupplementaryMaterial. We published the code for DVBF and a link will be made available at https://brml.org/projects/dvbf .4.1 D YNAMIC PENDULUMIn order to test our algorithm on truly non-Markovian observations of a dynamical system, wesimulated a dynamic torque-controlled pendulum governed by the differential equationml2'(t) =_'(t) +mglsin'(t) +u(t);m=l= 1;= 0:5;g= 9:81, via numerical integration, and then converted the ground-truth angle'into an image observation in X. The one-dimensional control corresponds to angle acceleration(which is proportional to joint torque). Angle and angular velocity fully describe the system.Figure 3 shows the latent spaces for identical input data learned by DVBF-LL and DKF, respectively,colored with the ground truth in the top row. It should be noted that latent samples are shown, notmeans of posterior distributions. The state-space model was allowed to use three latent dimensions.As we can see in fig. 3a, DVBF-LL learned a two-dimensional manifold embedding, i.e., it encodedthe angle in polar coordinates (thus circumventing the discontinuity of angles modulo 2). Thebottom row shows ordinary least-squares regressions (OLS) underlining the performance: there existsa high correlation between latent states and ground-truth angle and angular velocity for DVBF-LL.On the contrary, fig. 3b verifies our prediction that DKF is equally capable of learning the angle, butextracts little to no information on angular velocity.The OLS regression results shown in table 1 validate this observation.3Predicting sin(')andcos('),i.e., polar coordinates of the ground-truth angle ', works almost equally well for DVBF-LL and DKF,with DVBF-LL slightly outperforming DKF. For predicting the ground truth velocity _', DVBF-LL3Linear regression is a natural choice: after transforming the ground truth to polar coordinates, an affinetransformation should be a good fit for predicting ground truth from latent states. We also tried nonlinearregression with vanilla neural networks. While not being shown here, the results underlined the same conclusion.7Published as a conference paper at ICLR 2017Table 1: Results for pendulum OLS regressions of all latent states on respective dependent variable.Dependentground truthvariableDVBF-LL DKFLog-Likelihood R2Log-Likelihood R2sin(') 3990.8 0.961 1737.6 0.929cos(') 7231.1 0.982 6614.2 0.979_'11139 0.916 20289 0.035(a) Generative latent walk. (b) Reconstructive latent walk..........5 1 10 15 20 40 45(c) Ground truth (top), reconstructions (middle), generative samples (bottom) from identical initial latent state.Figure 4: (a) Latent space walk in generative mode. (b) Latent space walk in filtering mode.(c) Ground truth and samples from recognition and generative model. The reconstruction samplinghas access to observation sequence and performs filtering. The generative samples only get access tothe observations once for creating the initial state while all subsequent samples are predicted fromthis single initial state. The red bar indicates the length of training sequences. Samples beyond showthe generalization capabilities for sequences longer than during training. The complete sequence canbe found in the Appendix in fig. 7.shows remarkable performance. 
DKF, instead, contains hardly any information, resulting in a verylow goodness-of-fit score of R2= 0:035.Figure 4 shows that the strong relation between ground truth and latent state is beneficial for generativesampling. All plots show 100 time steps of a pendulum starting from the exact same latent state andnot being actuated. The top row plots show a purely generative walk in the latent space on the left,and a walk in latent space that is corrected by filtering observations on the right. We can see thatboth follow a similar trajectory to an attractor. The generative model is more prone to noise whenapproaching the attractor.The bottom plot shows the first 45 steps of the corresponding observations (top row), reconstructions(middle row), and generative samples (without correcting from observations). Interestingly, DVBFworks very well even though the sequence is much longer than all training sequences (indicated bythe red line).Table (2)shows values of the lower bound to the marginal data likelihood (for DVBF-LL, thiscorresponds to eq. (11)). We see that DVBF-LL outperforms DKF in terms of compression, but only8Published as a conference paper at ICLR 2017Table 2: Average test set objective function values for pendulum experiment.Lower Bound = Reconstruction Error KL divergenceDVBF-LL 798.56 802.06 3.50DKF 784.70 788.58 3.88(a) Latent walk of bouncing ball. (b) Latent space velocities.Figure 5: (a) Two dimensions of 4D bouncing ball latent space. Ground truth x and y coordinates arecombined into a regular 3 3 checkerboard coloring. This checkerboard is correctly extracted by theembedding. (b) Remaining two latent dimensions. Same latent samples, colored with ball velocitiesin x and y direction (left and right image, respectively). The smooth, perpendicular coloring indicatesthat the ground truth value is stored in the latent dimension.with a slight margin, which does not reflect the better generative sampling as Theis et al. (2015)argue.4.2 B OUNCING BALLThe bouncing ball experiment features a ball rolling within a bounding box in a plane. The systemhas a two-dimensional control input, added to the directed velocity of the ball. If the ball hits the wall,it bounces off, so that the true dynamics are highly dependent on the current position and velocity ofthe ball. The system’s state is four-dimensional, two dimensions each for position and velocity.Consequently, we use a DVBF-LL with four latent dimensions. Figure 5 shows that DVBF againcaptures the entire system dynamics in the latent space. The checkerboard is quite a remarkableresult: the ground truth position of the ball lies within the 2D unit square, the bounding box. Inorder to visualize how ground truth reappears in the learned latent states, we show the warping of theground truth bounding box into the latent space. To this end, we partitioned (discretized) the groundtruth unit square into a regular 3x3 checkerboard with respective coloring. We observed that DVBFlearned to extract the 2D position from the 256 pixels, and aligned them in two dimensions of thelatent space in strong correspondence to the physical system. The algorithm does the exact samepixel-to-2D inference that a human observer automatically does when looking at the image..........5 1 10 15 20 40 45Figure 6: Ground truth (top), reconstructions (middle), generative samples (bottom) from identicalinitial latent state for the two bouncing balls experiment. 
Red bar indicates length of trainingsequences.9Published as a conference paper at ICLR 20174.3 T WOBOUNCING BALLSAnother more complex environment4features two balls in a bounding box. We used a 10-dimensionallatent space to fully capture the position and velocity information of the balls. Reconstruction andgenerative samples are shown in fig. 6. Same as in the pendulum example we get a generative modelwith stable predictions beyond training data sequence length.5 C ONCLUSIONWe have proposed Deep Variational Bayes Filters (DVBF), a new method to learn state space modelsfrom raw non-Markovian sequence data. DVBFs perform latent dynamic system identification, andsubsequently overcome intractable inference. As DVBFs make use of stochastic gradient variationalBayes they naturally scale to large data sets. In a series of vision-based experiments we demonstratedthat latent states can be recovered which identify the underlying physical quantities. The generativemodel showed stable long-term predictions far beyond the sequence length used during training.ACKNOWLEDGEMENTSPart of this work was conducted at Chair of Robotics and Embedded Systems, Department ofInformatics, Technische Universität München, Germany, and supported by the TACMAN project, ECGrant agreement no. 610967, within the FP7 framework programme.We would like to thank Jost Tobias Springenberg, Adam Kosiorek, Moritz Münst, and anonymousreviewers for valuable input.REFERENCESJustin Bayer and Christian Osendorfer. Learning stochastic recurrent networks. arXiv preprintarXiv:1411.7610 , 2014.Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty inneural networks. arXiv preprint arXiv:1505.05424 , 2015.Léon Bottou. Large-scale machine learning with stochastic gradient descent. In Proceedings ofCOMPSTAT’2010 , pp. 177–186. Springer, 2010.Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C. Courville, and YoshuaBengio. A recurrent latent variable model for sequential data. CoRR , abs/1506.02216, 2015. URLhttp://arxiv.org/abs/1506.02216 .Marc Deisenroth and Carl E Rasmussen. Pilco: A model-based and data-efficient approach to policysearch. In Proceedings of the 28th International Conference on machine learning (ICML-11) , pp.465–472, 2011.Zoubin Ghahramani and Geoffrey E Hinton. Parameter estimation for linear dynamical systems.Technical report, Technical Report CRG-TR-96-2, University of Toronto, Dept. of ComputerScience, 1996.Zoubin Ghahramani and Geoffrey E Hinton. Variational learning for switching state-space models.Neural computation , 12(4):831–864, 2000.Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850 ,2013.Geoffrey E Hinton and Drew Van Camp. Keeping the neural networks simple by minimizing thedescription length of the weights. In Proceedings of the sixth annual conference on Computationallearning theory , pp. 5–13. ACM, 1993.4We used the script attached to Sutskever & Hinton (2007) for generating our datasets.10Published as a conference paper at ICLR 2017Antti Honkela, Tapani Raiko, Mikael Kuusela, Matti Tornio, and Juha Karhunen. Approximateriemannian conjugate gradient learning for fixed-form variational bayes. Journal of MachineLearning Research , 11(Nov):3235–3268, 2010.Matthew J Johnson, David Duvenaud, Alexander B Wiltschko, Sandeep R Datta, and Ryan P Adams.Structured V AEs: Composing probabilistic graphical models and variational autoencoders. arXivpreprint arXiv:1603.06277 , 2016.Simon J Julier and Jeffrey K Uhlmann. 
Rudolph E Kalman and Richard S Bucy. New results in linear filtering and prediction theory. Journal of Basic Engineering, 83(1):95–108, 1961.

Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.

Jonathan Ko and Dieter Fox. Learning GP-BayesFilters via Gaussian process latent variable models. Autonomous Robots, 30(1):3–23, 2011.

Rahul G Krishnan, Uri Shalit, and David Sontag. Deep Kalman filters. arXiv preprint arXiv:1511.05121, 2015.

Stephan Mandt, James McInerney, Farhan Abrol, Rajesh Ranganath, and David Blei. Variational tempering. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, pp. 704–712, 2016.

Kevin McGoff, Sayan Mukherjee, Natesh Pillai, et al. Statistical inference for dynamical systems: A review. Statistics Surveys, 9:209–252, 2015.

Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Tony Jebara and Eric P. Xing (eds.), Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp. 1278–1286. JMLR Workshop and Conference Proceedings, 2014. URL http://jmlr.org/proceedings/papers/v32/rezende14.pdf.

Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015.

Ilya Sutskever and Geoffrey E. Hinton. Learning multilevel distributed representations for high-dimensional sequences. In Marina Meila and Xiaotong Shen (eds.), Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics (AISTATS-07), volume 2, pp. 548–555. Journal of Machine Learning Research - Proceedings Track, 2007. URL http://jmlr.csail.mit.edu/proceedings/papers/v2/sutskever07a/sutskever07a.pdf.

Leonid Kuvayev and Rich Sutton. Model-based reinforcement learning with an approximate, learned model. In Proceedings of the Ninth Yale Workshop on Adaptive and Learning Systems, pp. 101–105, 1996.

Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015.

Harri Valpola and Juha Karhunen. An unsupervised ensemble learning method for nonlinear dynamic state-space models. Neural Computation, 14(11):2647–2692, 2002.

Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems, pp. 2728–2736, 2015.

A SUPPLEMENTARY TO LOWER BOUND

A.1 ANNEALED KL-DIVERGENCE

We used the analytical solution of the annealed KL-divergence in eq. (10) for optimization:

E_q[ln q(w_{1:T} | x_{1:T}, u_{1:T}) - c_i ln p(w_{1:T})]
    = c_i (1/2) ln(2π σ_p²) - (1/2) ln(2π σ_q²) + c_i (σ_q² + (μ_q - μ_p)²) / (2 σ_p²) - 1/2
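As a concrete check of the expression above, the following is a minimal Python sketch (ours, not the published implementation) for diagonal Gaussians q = N(μ_q, σ_q²) and p = N(μ_p, σ_p²); for c_i = 1 it reduces to the standard Gaussian KL divergence KL(q ‖ p).

```python
# Minimal sketch (an assumption, not the authors' code) of the annealed KL
# term from Appendix A.1 for diagonal Gaussians, summed over dimensions.
import numpy as np

def annealed_kl(mu_q, sigma_q, mu_p, sigma_p, c_i):
    """E_q[ln q(w) - c_i * ln p(w)], computed elementwise and summed."""
    term = (c_i * 0.5 * np.log(2.0 * np.pi * sigma_p ** 2)
            - 0.5 * np.log(2.0 * np.pi * sigma_q ** 2)
            + c_i * (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2.0 * sigma_p ** 2)
            - 0.5)
    return np.sum(term)

# Sanity check: with c_i = 1 this equals KL(q || p) for Gaussians.
mu_q, sigma_q = np.zeros(3), 0.5 * np.ones(3)
mu_p, sigma_p = np.zeros(3), np.ones(3)
print(annealed_kl(mu_q, sigma_q, mu_p, sigma_p, c_i=1.0))  # ~0.954
```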
B SUPPLEMENTARY TO IMPLEMENTATION

B.1 EXPERIMENTAL SETUP

In all our experiments, we use sequences of 15 raw images of the respective system with 16×16 pixels each, i.e., observation space X ⊂ R^256, as well as control inputs of varying dimension and interpretation depending on the experiment. We used training, validation and test sets with 500 sequences each. Control input sequences were drawn randomly ("motor babbling"). Additional details about the implementation can be found in the published code at https://brml.org/projects/dvbf.

B.2 ADDITIONAL EXPERIMENT PLOTS

Figure 7: Ground truth and samples from recognition and generative model. Complete version of fig. 4 with all missing samples present.

B.3 IMPLEMENTATION DETAILS FOR DVBF IN PENDULUM EXPERIMENT

Input: 15 time steps of 16² observation dimensions and 1 action dimension
Latent Space: 3 dimensions
Observation Network p(x_t | z_t) = N(x_t; μ(z_t), Σ): 128 ReLU + 16² identity output
Recognition Model: 128 ReLU + 6 identity output
    q(w_t | z_t, x_{t+1}, u_t) = N(w_t; μ, σ), with (μ, σ) = f(z_t, x_{t+1}, u_t)
Transition Network α_t(z_t): 16 softmax output
Initial Network (w_1 from x_{1:T}): Fast Dropout BiRNN with 128 ReLU + 3 identity output
Initial Transition z_1(w_1): 128 ReLU + 3 identity output
Optimizer: adadelta, 0.1 step rate
Inverse temperature: c_0 = 0.01, updated every 250th gradient update, T_A = 10^5 iterations
Batch-size: 500

(A minimal code sketch of the locally linear transition used in these DVBF configurations is given after these listings.)

B.4 IMPLEMENTATION DETAILS FOR DVBF IN BOUNCING BALL EXPERIMENT

Input: 15 time steps of 16² observation dimensions and 2 action dimensions
Latent Space: 4 dimensions
Observation Network p(x_t | z_t) = N(x_t; μ(z_t), Σ): 128 ReLU + 16² identity output
Recognition Model: 128 ReLU + 8 identity output
    q(w_t | z_t, x_{t+1}, u_t) = N(w_t; μ, σ), with (μ, σ) = f(z_t, x_{t+1}, u_t)
Transition Network α_t(z_t): 16 softmax output
Initial Network (w_1 from x_{1:T}): Fast Dropout BiRNN with 128 ReLU + 4 identity output
Initial Transition z_1(w_1): 128 ReLU + 4 identity output
Optimizer: adadelta, 0.1 step rate
Inverse temperature: c_0 = 0.01, updated every 250th gradient update, T_A = 10^5 iterations
Batch-size: 500

B.5 IMPLEMENTATION DETAILS FOR DVBF IN TWO BOUNCING BALLS EXPERIMENT

Input: 15 time steps of 20² observation dimensions and 2000 samples
Latent Space: 10 dimensions
Observation Network p(x_t | z_t) = N(x_t; μ(z_t), Σ): 128 ReLU + 20² sigmoid output
Recognition Model: 128 ReLU + 20 identity output
    q(w_t | z_t, x_{t+1}, u_t) = N(w_t; μ, σ), with (μ, σ) = f(z_t, x_{t+1}, u_t)
Transition Network α_t(z_t): 64 softmax output
Initial Network (w_1 from x_{1:T}): MLP with 128 ReLU + 10 identity output
Initial Transition z_1(w_1): 128 ReLU + 10 identity output
Optimizer: adam, 0.001 step rate
Inverse temperature: c_0 = 0.01, updated every gradient update, T_A = 2·10^5 iterations
Batch-size: 80

B.6 IMPLEMENTATION DETAILS FOR DKF IN PENDULUM EXPERIMENT

Input: 15 time steps of 16² observation dimensions and 1 action dimension
Latent Space: 3 dimensions
Observation Network p(x_t | z_t) = N(x_t; μ(z_t), Σ(z_t)): 128 Sigmoid + 128 Sigmoid + 2·16² identity output
Recognition Model: Fast Dropout BiRNN, 128 Sigmoid + 128 Sigmoid + 3 identity output
Transition Network p(z_t | z_{t-1}, u_{t-1}): 128 Sigmoid + 128 Sigmoid + 6 output
Optimizer: adam, 0.001 step rate
Inverse temperature: c_0 = 0.01, updated every 25th gradient update, T_A = 2000 iterations
Batch-size: 500
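To make the DVBF-LL transition behind these configurations concrete (e.g., the 16-way softmax transition network in B.3), here is a minimal numpy sketch of one locally linear step z_{t+1} = A_t z_t + B_t u_t + C_t w_t, where (A, B, C)_t are mixed from M global base matrices by the weights α_t. It is an illustrative reconstruction, not the published code: the shapes, the point-estimate treatment of the base matrices, and the toy linear map standing in for the transition network f_ψ(z_t, u_t) are assumptions.

```python
# Minimal numpy sketch (not the authors' implementation) of one DVBF-LL
# transition step for the pendulum setup in B.3: M = 16 mixture components,
# n_z = 3 latent dims, n_u = 1 control dim.
import numpy as np

rng = np.random.default_rng(0)
M, n_z, n_u = 16, 3, 1

# Global base matrices (here: simple point estimates; initialization is illustrative).
A = rng.normal(scale=0.1, size=(M, n_z, n_z)) + np.eye(n_z)
B = rng.normal(scale=0.1, size=(M, n_z, n_u))
C = rng.normal(scale=0.1, size=(M, n_z, n_z))

# Toy stand-in for the transition network alpha_t = softmax(f_psi(z_t, u_t)).
W = rng.normal(scale=0.1, size=(M, n_z + n_u))

def transition(z_t, u_t, w_t):
    """z_{t+1} = A_t z_t + B_t u_t + C_t w_t with locally linear mixing weights."""
    logits = W @ np.concatenate([z_t, u_t])
    alpha = np.exp(logits - logits.max())
    alpha /= alpha.sum()                      # (M,) softmax mixture weights
    A_t = np.tensordot(alpha, A, axes=1)      # state matrix
    B_t = np.tensordot(alpha, B, axes=1)      # control matrix
    C_t = np.tensordot(alpha, C, axes=1)      # noise matrix
    return A_t @ z_t + B_t @ u_t + C_t @ w_t

z, u, w = np.zeros(n_z), np.array([0.5]), rng.normal(size=n_z)  # one sample w_t
print(transition(z, u, w))
```

In the paper, the base matrices receive a Bayesian treatment (Blundell et al., 2015), and w_t is drawn from the recognition model during training or filtering, and from its prior when sampling purely generatively.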
ByLhqXN4l
HyTqHL5xg
ICLR.cc/2017/conference/-/paper296/official/review
{"title": "Rather incremental, but interesting experiments", "rating": "6: Marginally above acceptance threshold", "review": "This is mainly a (well-written) toy application paper. It explains SGVB can be applied to state-space models. The main idea is to cast a state-space model as a deterministic temporal transformation, with innovation variables acting as latent variables. The prior over the innovation variables is not a function of time. Approximate inference is performed over these innovation variables, rather the states. This is a solution to a fairly specific problem (e.g. it doesn't discuss how priors over the beta's can depend on the past), but an interesting application nonetheless. The ideas could have been explained more compactly and more clearly; the paper dives into specifics fairly quickly, which seems a missed opportunity.\n\nMy compliments for the amount of detail put in the paper and appendix.\n\nThe experiments are on toy examples, but show promise.\n\n- Section 2.1: \u201cIn our notation, one would typically set beta_t = w_t, though other variants are possible\u201d -> It\u2019s probably better to clarify that if F_t and B_t and not in beta_t, they are not given a Bayesian treatment (but e.g. merely optimized).\n\n- Section 2.2 last paragraph: \u201cA key contribution is [\u2026] forcing the latent space to fit the transition\u201d. This seems rather trivial to achieve.\n\n- Eq 9: \u201cThis interpretation implies the factorization of the recognition model:..\u201d\nThe factorization is not implied anywhere: i.e. you could in principle use q(beta|x) = q(w|x,v)q(v)", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data
["Maximilian Karl", "Maximilian Soelch", "Justin Bayer", "Patrick van der Smagt"]
We introduce Deep Variational Bayes Filters (DVBF), a new method for unsupervised learning and identification of latent Markovian state space models. Leveraging recent advances in Stochastic Gradient Variational Bayes, DVBF can overcome intractable inference distributions via variational inference. Thus, it can handle highly nonlinear input data with temporal and spatial dependencies such as image sequences without domain knowledge. Our experiments show that enabling backpropagation through transitions enforces state space assumptions and significantly improves information content of the latent embedding. This also enables realistic long-term prediction.
["Deep learning", "Unsupervised Learning"]
https://openreview.net/forum?id=HyTqHL5xg
https://openreview.net/pdf?id=HyTqHL5xg
https://openreview.net/forum?id=HyTqHL5xg&noteId=ByLhqXN4l
Hk3UZs-Ne
HyTqHL5xg
ICLR.cc/2017/conference/-/paper296/official/review
{"title": "", "rating": "6: Marginally above acceptance threshold", "review": "The paper proposes to use the very standard SVGB in a sequential setting like several previous works did. However, they proposes to have a clear state space constraints similar to Linear Gaussian Models: Markovian latent space and conditional independence of observed variables given the latent variables. However the model is in this case non-linear. These assumptions are well motivated by the goal of having meaningful latent variables.\nThe experiments are interesting but I'm still not completely convinced by the regression results in Figure 3, namely that one could obtain the angle and velocity from the state but using a function more powerful than a linear function. Also, why isn't the model from (Watter et al., 2015) not included ?\nAfter rereading I'm not sure I understand why the coordinates should be combined in a 3x3 checkerboard as said in Figure 5a. \nThen paper is well motivated and the resulting model is novel enough, the bouncing ball experiment is not quite convincing, especially in prediction, as the problem is fully determined by its initial velocity and position. ", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data
["Maximilian Karl", "Maximilian Soelch", "Justin Bayer", "Patrick van der Smagt"]
We introduce Deep Variational Bayes Filters (DVBF), a new method for unsupervised learning and identification of latent Markovian state space models. Leveraging recent advances in Stochastic Gradient Variational Bayes, DVBF can overcome intractable inference distributions via variational inference. Thus, it can handle highly nonlinear input data with temporal and spatial dependencies such as image sequences without domain knowledge. Our experiments show that enabling backpropagation through transitions enforces state space assumptions and significantly improves information content of the latent embedding. This also enables realistic long-term prediction.
["Deep learning", "Unsupervised Learning"]
https://openreview.net/forum?id=HyTqHL5xg
https://openreview.net/pdf?id=HyTqHL5xg
https://openreview.net/forum?id=HyTqHL5xg&noteId=Hk3UZs-Ne
Published as a conference paper at ICLR 2017DEEPVARIATIONAL BAYES FILTERS : UNSUPERVISEDLEARNING OF STATE SPACE MODELS FROM RAWDATAMaximilian Karl, Maximilian Soelch, Justin Bayer, Patrick van der SmagtData Lab, V olkswagen Group, 80805, München, Germanyzip([maximilian.karl, maximilian.soelch], [@volkswagen.de])ABSTRACTWe introduce Deep Variational Bayes Filters (DVBF), a new method for unsuper-vised learning and identification of latent Markovian state space models. Leverag-ing recent advances in Stochastic Gradient Variational Bayes, DVBF can overcomeintractable inference distributions via variational inference. Thus, it can handlehighly nonlinear input data with temporal and spatial dependencies such as imagesequences without domain knowledge. Our experiments show that enabling back-propagation through transitions enforces state space assumptions and significantlyimproves information content of the latent embedding. This also enables realisticlong-term prediction.1 I NTRODUCTIONEstimating probabilistic models for sequential data is central to many domains, such as audio, naturallanguage or physical plants, Graves (2013); Watter et al. (2015); Chung et al. (2015); Deisenroth &Rasmussen (2011); Ko & Fox (2011). The goal is to obtain a model p(x1:T)that best reflects a dataset of observed sequences x1:T. Recent advances in deep learning have paved the way to powerfulmodels capable of representing high-dimensional sequences with temporal dependencies, e.g., Graves(2013); Watter et al. (2015); Chung et al. (2015); Bayer & Osendorfer (2014).Time series for dynamic systems have been studied extensively in systems theory, cf. McGoff et al.(2015) and sources therein. In particular, state space models have shown to be a powerful tool toanalyze and control the dynamics. Two tasks remain a significant challenge to this day: Can weidentify the governing system from data only? And can we perform inference from observables to thelatent system variables? These two tasks are competing: A more powerful representation of systemrequires more computationally demanding inference, and efficient inference, such as the well-knownKalman filters, Kalman & Bucy (1961), can prohibit sufficiently complex system classes.Leveraging a recently proposed estimator based on variational inference, stochastic gradient varia-tional Bayes (SGVB, Kingma & Welling (2013); Rezende et al. (2014)), approximate inference oflatent variables becomes tractable. Extensions to time series have been shown in Bayer & Osendorfer(2014); Chung et al. (2015). Empirically, they showed considerable improvements in marginal datalikelihood, i.e., compression, but lack full-information latent states, which prohibits, e.g., long-termsampling. Yet, in a wide range of applications, full-information latent states should be valued overcompression. This is crucial if the latent spaces are used in downstream applications.Our contribution is, to our knowledge, the first model that (i) enforces the latent state-space modelassumptions, allowing for reliable system identification, and plausible long-term prediction of theobservable system, (ii) provides the corresponding inference mechanism with rich dependencies,(iii) inherits the merit of neural architectures to be trainable on raw data such as images or othersensory inputs, and (iv) scales to large data due to optimization of parameters based on stochasticgradient descent, Bottou (2010). 
Hence, our model has the potential to exploit systems theorymethodology for downstream tasks, e.g., control or model-based reinforcement learning, Sutton(1996).1Published as a conference paper at ICLR 20172 B ACKGROUND AND RELATED WORK2.1 P ROBABILISTIC MODELING AND FILTERING OF DYNAMICAL SYSTEMSWe consider non-linear dynamical systems with observations xt2X Rnx, depending on controlinputs (oractions )ut2U Rnu. Elements ofXcan be high-dimensional sensory data, e.g., rawimages. In particular they may exhibit complex non-Markovian transitions. Corresponding time-discrete sequences of length T are denoted as x1:T= (x1;x2;:::;xT)andu1:T= (u1;u2;:::;uT).We are interested in a probabilistic model1p(x1:Tju1:T). Formally, we assume the graphical modelp(x1:Tju1:T) =Zp(x1:Tjz1:T;u1:T)p(z1:Tju1:T) dz1:T; (1)where z1:T;zt2Z Rnz;denotes the corresponding latent sequence. That is, we assume a gener-ative model with an underlying latent dynamical system with emission model p(x1:Tjz1:T;u1:T)andtransition model p(z1:Tju1:T). We want to learn both components, i.e., we want to performlatent system identification . In order to be able to apply the identified system in downstream tasks, weneed to find efficient posterior inference distributions p(z1:Tjx1:T). Three common examples areprediction, filtering, and smoothing: inference of ztfromx1:t1,x1:t, orx1:T, respectively. Accurateidentification and efficient inference are generally competing tasks, as a wider generative model classtypically leads to more difficult or even intractable inference.The transition model is imperative for achieving good long-term results: a bad transition model canlead to divergence of the latent state. Accordingly, we put special emphasis on it through a Bayesiantreatment. Assuming that the transitions may differ for each time step, we impose a regularizing priordistribution on a set of transition parameters 1:T:(1)=ZZp(x1:Tjz1:T;u1:T)p(z1:Tj1:T;u1:T)p(1:T) d1:Tdz1:T (2)To obtain state-space models, we impose assumptions on emission and state transition model,p(x1:Tjz1:T;u1:T) =TYt=1p(xtjzt); (3)p(z1:Tj1:T;u1:T) =T1Yt=0p(zt+1jzt;ut;t): (4)Equations (3) and (4) assume that the current state ztcontains all necessary information about thecurrent observation xt, as well as the next state zt+1(given the current control input utand transitionparameters t). That is, in contrast to observations, ztexhibits Markovian behavior.A typical example of these assumptions are Linear Gaussian Models (LGMs), i.e., both state transitionand emission model are affine transformations with Gaussian offset noise,zt+1=Ftzt+Btut+wt wtN(0;Qt); (5)xt=Htzt+yt ytN(0;Rt): (6)Typically, state transition matrix Ftandcontrol-input matrix Btare assumed to be given, so thatt=wt. Section 3.3 will show that our approach allows other variants such as t= (Ft;Bt;wt).Under the strong assumptions (5)and(6)of LGMs, inference is provably solved optimally by thewell-known Kalman filters. While extensions of Kalman filters to nonlinear dynamical systems exist,Julier & Uhlmann (1997), and are successfully applied in many areas, they suffer from two majordrawbacks: firstly, its assumptions are restrictive and are violated in practical applications, leading tosuboptimal results. Secondly, parameters such as FtandBthave to be known in order to performposterior inference. There have been efforts to learn such system dynamics, cf. Ghahramani & Hinton(1996); Honkela et al. 
(2010) based on the expectation maximization (EM) algorithm or Valpola &Karhunen (2002), which uses neural networks. However, these algorithms are not applicable in cases1Throughout this paper, we consider u1:Tas given. The case without any control inputs can be recovered bysetting U=;, i.e., not conditioning on control inputs.2Published as a conference paper at ICLR 2017where the true posterior distribution is intractable. This is the case if, e.g., image sequences are used,since the posterior is then highly nonlinear—typical mean-field assumptions on the approximateposterior are too simplified. Our new approach will tackle both issues, and moreover learn bothidentification and inference jointly by exploiting Stochastic Gradient Variational Bayes.2.2 S TOCHASTIC GRADIENT VARIATIONAL BAYES (SGVB) FOR TIMESERIESDISTRIBUTIONSReplacing the bottleneck layer of a deterministic auto-encoder with stochastic units z, the variationalauto-encoder (V AE, Kingma & Welling (2013); Rezende et al. (2014)) learns complex marginal datadistributions on xin an unsupervised fashion from simpler distributions via the graphical modelp(x) =Zp(x;z) dz=Zp(xjz)p(z) dz:In V AEs,p(xjz)p(xjz)is typically parametrized by a neural network with parameters .Within this framework, models are trained by maximizing a lower bound to the marginal datalog-likelihood via stochastic gradients:lnp(x)Eq(zjx)[lnp(xjz)]KL(q(zjx)jjp(z)) =:LSGVB (x;;) (7)This is provably equivalent to minimizing the KL-divergence between the approximate posterior orrecognition model q(zjx)and the true, but usually intractable posterior distribution p(zjx).qisparametrized by a neural network with parameters .The principle of V AEs has been transferred to time series, Bayer & Osendorfer (2014); Chung et al.(2015). Both employ nonlinear state transitions in latent space, but violate eq. (4): Observationsare directly included in the transition process. Empirically, reconstruction and compression workwell. The state space Z, however, does not reflect all information available, which prohibits plausiblegenerative long-term prediction. Such phenomena with generative models have been explained inTheis et al. (2015).In Krishnan et al. (2015), the state-space assumptions (3)and(4)are softly encoded in the DeepKalman Filter (DKF) model. Despite that, experiments, cf. section 4, show that their model fails toextract information such as velocity (and in general time derivatives), which leads to similar problemswith prediction.Johnson et al. (2016) give an algorithm for general graphical model variational inference, not tailoredto dynamical systems. In contrast to previously discussed methods, it does not violate eq. (4). Theapproaches differ in that the recognition model outputs node potentials in combination with messagepassing to infer the latent state. Our approach focuses on learning dynamical systems for control-related tasks and therefore uses a neural network for inferring the latent state directly instead of aninference subroutine.Others have been specifically interested in applying variational inference for controlled dynamicalsystems. In Watter et al. (2015) (Embed to Control—E2C), a V AE is used to learn the mappingsto and from latent space. The regularization is clearly motivated by eq. (7). Still, it fails to bea mathematically correct lower bound to the marginal data likelihood. More significantly, theirrecognition model requires all observations that contain information w.r.t. the current state. This isnothing short of an additional temporal i.i.d. 
assumption on data: Multiple raw samples need to bestacked into one training sample such that all latent factors (in particular all time derivatives) arepresent within one sample. The task is thus greatly simplified, because instead of time-series, welearn a static auto-encoder on the processed data.A pattern emerges: good prediction should boost compression. Still, previous methods empiricallyexcel at compression, while prediction will not work. We conjecture that this is caused by previousmethods trying to fit the latent dynamics to a latent state that is beneficial for reconstruction . Thisencourages learning of a stationary auto-encoder with focus of extracting as much from a singleobservation as possible. Importantly, it is not necessary to know the entire sequence for excellentreconstruction of single time steps. Once the latent states are set, it is hard to adjust the transition tothem. This would require changing the latent states slightly, and that comes at a cost of decreasingthe reconstruction (temporarily). The learning algorithm is stuck in a local optimum with goodreconstruction and hence good compression only. Intriguingly, E2C bypasses this problem with itsdata augmentation.3Published as a conference paper at ICLR 2017zt+1xt+1 wt vtutztt(a) Forward graphicalmodel.zt+1xt+1 wt vtutztt(b) Inference.Figure 1: Left: Graphical model for one transition under state-space model assumptions. The updatedlatent state zt+1depends on the previous state zt, control input ut, and transition parameters t.zt+1contains all information for generating observation xt+1. Diamond nodes indicate a deterministicdependency on parent nodes. Right: Inference performed during training (or while filtering). Pastobservations are indirectly used for inference as ztcontains all information about them.This leads to a key contribution of this paper: We force the latent space to fit the transition —reversingthe direction, and thus achieving the state-space model assumptions and full information in the latentstates.3 D EEPVARIATIONAL BAYES FILTERS3.1 R EPARAMETRIZING THE TRANSITIONThe central problem for learning latent states system dynamics is efficient inference of a latent spacethat obeys state-space model assumptions . If the latter are fulfilled, the latent space must contain allinformation. Previous approaches emphasized good reconstruction, so that the space only containsinformation necessary for reconstruction of one time step. To overcome this, we establish gradientpaths through transitions over time so that the transition becomes the driving factor for shaping thelatent space, rather than adjusting the transition to the recognition model’s latent space. The key is toprevent the recognition model q(z1:Tjx1:T)from directly drawing the latent state zt.Similar to the reparametrization trick from Kingma & Welling (2013); Rezende et al. (2014) for mak-ing the Monte Carlo estimate differentiable w.r.t. the parameters, we make the transition differentiablew.r.t. the last state and its parameters:zt+1=f(zt;ut;t) (8)Given the stochastic parameters t, the state transition is deterministic (which in turn means that bymarginalizing t, we still have a stochastic transition). The immediate and crucial consequence isthat errors in reconstruction of xtfromztare backpropagated directly through time.This reparametrization has a couple of other important implications: the recognition model nolonger infers latent states zt, but transition parameters t. 
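To make this gradient flow concrete, here is a minimal NumPy sketch (not the authors' implementation; the linear transition form, the stand-in recognition output, and all shapes are illustrative assumptions): the recognition network proposes a mean and variance for the transition parameters, a sample is drawn via the reparametrization trick, and the next latent state is a deterministic function of (z_t, u_t, beta_t). In a real implementation f would be built in an automatic-differentiation framework so that the gradient of z_{t+1} with respect to z_t is available; the sketch only shows the data flow.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_beta(mu, log_var):
    """Reparametrized sample: beta = mu + sigma * eps with eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def transition(z_t, u_t, beta_t, A, B, C):
    """Deterministic transition given stochastic parameters (cf. eq. (8));
    a simple linear form is assumed purely for illustration."""
    return A @ z_t + B @ u_t + C @ beta_t

# toy rollout: 3-d latent state, 1-d control, 15 steps
nz, nu, T = 3, 1, 15
A, B, C = np.eye(nz), rng.normal(size=(nz, nu)), 0.1 * np.eye(nz)
z = np.zeros(nz)
for t in range(T):
    u_t = rng.normal(size=nu)                      # control input
    mu, log_var = np.zeros(nz), np.full(nz, -2.0)  # stand-in recognition output for beta_t
    beta_t = sample_beta(mu, log_var)
    z = transition(z, u_t, beta_t, A, B, C)        # z_{t+1} depends smoothly on z_t
```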
In particular, the gradient @zt+1=@ztiswell-defined from (8)—gradient information can be backpropagated through the transition.This is different from the method used in Krishnan et al. (2015), where the transition only occurs inthe KL-divergence term of their loss function (a variant of eq. (7)). No gradient from the generativemodel is backpropagated through the transitions.Much like in eq. (5), the stochastic parameters includes a corrective offset term wt, which emphasizesthe notion of the recognition model as a filter. In theory, the learning algorithm could still learn thetransition as zt+1=wt. However, the introduction of talso enables us to regularize the transitionwith meaningful priors, which not only prevents overfitting the recognition model, but also enforcesmeaningful manifolds in the latent space via transition priors . Ignoring the potential of the transitionover time yields large penalties from these priors. Thus, the problems outlined in Section 2 areovercome by construction.To install such transition priors, we split t= (wt;vt). The interpretation of wtis a sample-specificprocess noise which can be inferred from incoming data, like in eq. (5). On the other hand, vt4Published as a conference paper at ICLR 2017q(wtj)the input/conditional is task-dependentq(vt)tq(t) =q(wtj)q(vt)transition in latent state spacezt+1=f(zt;ut;t)zt zt+1utp(xt+1jzt+1)(a) General scheme for arbitrary transitions.zt utvtwttt=f (zt;ut)(e.g., neural network)(A;B;C)t=PMi=1(i)t(A;B;C)(i)zt+1=Atzt+Btut+Ctwtzt+1(b) One particular example of a latent transition: locallinearity.Figure 2: Left: General architecture for DVBF. Stochastic transition parameters tare inferredvia the recognition model, e.g., a neural network. Based on a sampled t, the state transition iscomputed deterministically. The updated latent state zt+1is used for predicting xt+1. For details, seesection 3.1. Right: Zoom into latent space transition (red box in left figure). One exemplary transitionis shown, the locally linear transition from section 3.3.are universal transition parameters, which are sample-independent (and are only inferred from dataduring training). This corresponds to the idea of weight uncertainty in Hinton & Van Camp (1993).This interpretation leads to a natural factorization assumption on the recognition model:q(1:Tjx1:T) =q(w1:Tjx1:T)q(v1:T) (9)When using the fully trained model for generative sampling, i.e., sampling without input, the universalstate transition parameters can still be drawn from q(v1:T), whereas w1:Tis drawn from the prior inthe absence of input data.Figure 1 shows the underlying graphical model and the inference procedure. Figure 2a shows ageneric view on our new computational architecture. An example of a locally linear transitionparametrization will be given in section 3.3.3.2 T HELOWER BOUND OBJECTIVE FUNCTIONIn analogy to eq. (7), we now derive a lower bound to the marginal likelihood p(x1:Tju1:T). Afterreflecting the Markov assumptions (3) and (4) in the factorized likelihood (2), we have:p(x1:Tju1:T) =ZZp(1:T)TYt=1p(xtjzt)T1Yt=0p(zt+1jzt;ut;t) d1:Tdz1:TDue to the deterministic transition given t+1, the last term is a product of Dirac distributions andthe overall distribution simplifies greatly:p(x1:Tju1:T) =Zp(1:T)TYt=1p(xtjzt)zt=f(zt1;ut1;t1)d1:T=Zp(1:T)p(x1:Tjz1:T) d1:T5Published as a conference paper at ICLR 2017The last formulation is for notational brevity: the term p(x1:Tjz1:T)isnotindependent of 1:Tandu1:T. 
We now derive the objective function, a lower bound to the data likelihood:lnp(x1:Tju1:T) = lnZp(1:T)p(x1:Tjz1:T)q(1:Tjx1:T;u1:T)q(1:Tjx1:T;u1:T)d1:TZq(1:Tjx1:T;u1:T) lnp(x1:Tjz1:T)p(1:T)q(1:Tjx1:T;u1:T)d1:T=Eq[lnp(x1:Tjz1:T)lnq(1:Tjx1:T;u1:T) + lnp(1:T)] (10)=Eq[lnp(x1:Tjz1:T)]KL(q(1:Tjx1:T;u1:T)jjp(1:T)) (11)=:LDVBF (x1:T;;ju1:T)Our experiments show that an annealed version of (10) is beneficial to the overall performance:(100) =Eq[cilnp(x1:Tjz1:T)lnq(1:Tjx1:T;u1:T) +cilnp(w1:T) + lnp(v1:T)]Here,ci= max(1;0:01 +i=TA)is an inverse temperature that increases linearly in the number ofgradient updates iuntil reaching 1 after TAannealing iterations. Similar annealing schedules havebeen applied in, e.g., Ghahramani & Hinton (2000); Mandt et al. (2016); Rezende & Mohamed (2015),where it is shown that they smooth the typically highly non-convex error landscape. Additionally, thetransition prior p(v1:T)was estimated during optimization, i.e., through an empirical Bayes approach.In all experiments, we used isotropic Gaussian priors.3.3 E XAMPLE : LOCALLY LINEAR TRANSITIONSWe have derived a learning algorithm for time series with particular focus on general transitions inlatent space. Inspired by Watter et al. (2015), this section will show how to learn a particular instance:locally linear state transitions. That is, we set eq. (8) tozt+1=Atzt+Btut+Ctwt; t = 1;:::;T; (12)where wtis a stochastic sample from the recognition model and At;Bt;andCtare matrices ofmatching dimensions. They are stochastic functions of ztandut(thus local linearity). We drawvt=nA(i)t;B(i)t;C(i)tji= 1;:::;Mo;fromq(vt), i.e.,Mtriplets of matrices, each corresponding to data- independent , but learned globallylinear system. These can be learned as point estimates. We employed a Bayesian treatment as inBlundell et al. (2015). We yield At;Bt;andCtas state- and control- dependent linear combinations:At=MXi=1(i)tA(i)tt=f (zt;ut)2RMBt=MXi=1(i)tB(i)tCt=MXi=1(i)tC(i)tThe computation is depicted in fig. 2b. The function f can be, e.g., a (deterministic) neural networkwith weights . As a subset of the generative parameters , is part of the trainable parameters ofour model. The weight vector tis shared between the three matrices. There is a correspondence toeq. (5): AtandFt,BtandBt, as well as CtC>tandQtare related.We used this parametrization of the state transition model for our experiments. It is important that theparametrization is up to the user and the respective application.4 E XPERIMENTS AND RESULTSIn this section we validate that DVBF with locally linear transitions (DVBF-LL) (section 3.3)outperforms Deep Kalman Filters (DKF, Krishnan et al. (2015)) in recovering latent spaces withfull information.2We focus on environments that can be simulated with full knowledge of the2We do not include E2C, Watter et al. (2015), due to the need for data modification and its inability toprovide a correct lower bound as mentioned in section 2.2.6Published as a conference paper at ICLR 2017(a) DVBF-LL (b) DKFFigure 3: (a) Our DVBF-LL model trained on pendulum image sequences. The upper plots show thelatent space with coloring according to the ground truth with angles on the left and angular velocitieson the right. The lower plots show regression results for predicting ground truth from the latentrepresentation. The latent space plots show clearly that all information for representing the fullstate of a pendulum is encoded in each latent state. (b) DKF from Krishnan et al. (2015) trainedon the same pendulum dataset. 
The latent space plot shows that DKF fails to learn velocities of thependulum. It is therefore not able to capture all information for representing the full pendulum state.ground truth latent dynamical system. The experimental setup is described in the SupplementaryMaterial. We published the code for DVBF and a link will be made available at https://brml.org/projects/dvbf .4.1 D YNAMIC PENDULUMIn order to test our algorithm on truly non-Markovian observations of a dynamical system, wesimulated a dynamic torque-controlled pendulum governed by the differential equationml2'(t) =_'(t) +mglsin'(t) +u(t);m=l= 1;= 0:5;g= 9:81, via numerical integration, and then converted the ground-truth angle'into an image observation in X. The one-dimensional control corresponds to angle acceleration(which is proportional to joint torque). Angle and angular velocity fully describe the system.Figure 3 shows the latent spaces for identical input data learned by DVBF-LL and DKF, respectively,colored with the ground truth in the top row. It should be noted that latent samples are shown, notmeans of posterior distributions. The state-space model was allowed to use three latent dimensions.As we can see in fig. 3a, DVBF-LL learned a two-dimensional manifold embedding, i.e., it encodedthe angle in polar coordinates (thus circumventing the discontinuity of angles modulo 2). Thebottom row shows ordinary least-squares regressions (OLS) underlining the performance: there existsa high correlation between latent states and ground-truth angle and angular velocity for DVBF-LL.On the contrary, fig. 3b verifies our prediction that DKF is equally capable of learning the angle, butextracts little to no information on angular velocity.The OLS regression results shown in table 1 validate this observation.3Predicting sin(')andcos('),i.e., polar coordinates of the ground-truth angle ', works almost equally well for DVBF-LL and DKF,with DVBF-LL slightly outperforming DKF. For predicting the ground truth velocity _', DVBF-LL3Linear regression is a natural choice: after transforming the ground truth to polar coordinates, an affinetransformation should be a good fit for predicting ground truth from latent states. We also tried nonlinearregression with vanilla neural networks. While not being shown here, the results underlined the same conclusion.7Published as a conference paper at ICLR 2017Table 1: Results for pendulum OLS regressions of all latent states on respective dependent variable.Dependentground truthvariableDVBF-LL DKFLog-Likelihood R2Log-Likelihood R2sin(') 3990.8 0.961 1737.6 0.929cos(') 7231.1 0.982 6614.2 0.979_'11139 0.916 20289 0.035(a) Generative latent walk. (b) Reconstructive latent walk..........5 1 10 15 20 40 45(c) Ground truth (top), reconstructions (middle), generative samples (bottom) from identical initial latent state.Figure 4: (a) Latent space walk in generative mode. (b) Latent space walk in filtering mode.(c) Ground truth and samples from recognition and generative model. The reconstruction samplinghas access to observation sequence and performs filtering. The generative samples only get access tothe observations once for creating the initial state while all subsequent samples are predicted fromthis single initial state. The red bar indicates the length of training sequences. Samples beyond showthe generalization capabilities for sequences longer than during training. The complete sequence canbe found in the Appendix in fig. 7.shows remarkable performance. 
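The goodness-of-fit protocol behind Table 1 is easy to reproduce in outline. The sketch below (synthetic data rather than the paper's latent samples) fits an ordinary least-squares regression from latent samples to a ground-truth quantity and reports the R² score; a high R² indicates that the quantity is affinely recoverable from the latent state.

```python
import numpy as np

def ols_r2(Z, y):
    """Fit y ~ Z_aug @ w by least squares and return the R^2 score."""
    Z_aug = np.hstack([Z, np.ones((Z.shape[0], 1))])      # add intercept column
    w, *_ = np.linalg.lstsq(Z_aug, y, rcond=None)
    residual = y - Z_aug @ w
    ss_res = np.sum(residual ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# toy check: latent states that linearly encode an angle in polar coordinates
rng = np.random.default_rng(0)
phi = rng.uniform(-np.pi, np.pi, size=1000)
Z = np.stack([np.sin(phi), np.cos(phi), rng.normal(size=1000)], axis=1)
print(ols_r2(Z, np.sin(phi)))   # close to 1.0: the quantity is recoverable from Z
```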
DKF, instead, contains hardly any information, resulting in a verylow goodness-of-fit score of R2= 0:035.Figure 4 shows that the strong relation between ground truth and latent state is beneficial for generativesampling. All plots show 100 time steps of a pendulum starting from the exact same latent state andnot being actuated. The top row plots show a purely generative walk in the latent space on the left,and a walk in latent space that is corrected by filtering observations on the right. We can see thatboth follow a similar trajectory to an attractor. The generative model is more prone to noise whenapproaching the attractor.The bottom plot shows the first 45 steps of the corresponding observations (top row), reconstructions(middle row), and generative samples (without correcting from observations). Interestingly, DVBFworks very well even though the sequence is much longer than all training sequences (indicated bythe red line).Table (2)shows values of the lower bound to the marginal data likelihood (for DVBF-LL, thiscorresponds to eq. (11)). We see that DVBF-LL outperforms DKF in terms of compression, but only8Published as a conference paper at ICLR 2017Table 2: Average test set objective function values for pendulum experiment.Lower Bound = Reconstruction Error KL divergenceDVBF-LL 798.56 802.06 3.50DKF 784.70 788.58 3.88(a) Latent walk of bouncing ball. (b) Latent space velocities.Figure 5: (a) Two dimensions of 4D bouncing ball latent space. Ground truth x and y coordinates arecombined into a regular 3 3 checkerboard coloring. This checkerboard is correctly extracted by theembedding. (b) Remaining two latent dimensions. Same latent samples, colored with ball velocitiesin x and y direction (left and right image, respectively). The smooth, perpendicular coloring indicatesthat the ground truth value is stored in the latent dimension.with a slight margin, which does not reflect the better generative sampling as Theis et al. (2015)argue.4.2 B OUNCING BALLThe bouncing ball experiment features a ball rolling within a bounding box in a plane. The systemhas a two-dimensional control input, added to the directed velocity of the ball. If the ball hits the wall,it bounces off, so that the true dynamics are highly dependent on the current position and velocity ofthe ball. The system’s state is four-dimensional, two dimensions each for position and velocity.Consequently, we use a DVBF-LL with four latent dimensions. Figure 5 shows that DVBF againcaptures the entire system dynamics in the latent space. The checkerboard is quite a remarkableresult: the ground truth position of the ball lies within the 2D unit square, the bounding box. Inorder to visualize how ground truth reappears in the learned latent states, we show the warping of theground truth bounding box into the latent space. To this end, we partitioned (discretized) the groundtruth unit square into a regular 3x3 checkerboard with respective coloring. We observed that DVBFlearned to extract the 2D position from the 256 pixels, and aligned them in two dimensions of thelatent space in strong correspondence to the physical system. The algorithm does the exact samepixel-to-2D inference that a human observer automatically does when looking at the image..........5 1 10 15 20 40 45Figure 6: Ground truth (top), reconstructions (middle), generative samples (bottom) from identicalinitial latent state for the two bouncing balls experiment. 
Red bar indicates length of trainingsequences.9Published as a conference paper at ICLR 20174.3 T WOBOUNCING BALLSAnother more complex environment4features two balls in a bounding box. We used a 10-dimensionallatent space to fully capture the position and velocity information of the balls. Reconstruction andgenerative samples are shown in fig. 6. Same as in the pendulum example we get a generative modelwith stable predictions beyond training data sequence length.5 C ONCLUSIONWe have proposed Deep Variational Bayes Filters (DVBF), a new method to learn state space modelsfrom raw non-Markovian sequence data. DVBFs perform latent dynamic system identification, andsubsequently overcome intractable inference. As DVBFs make use of stochastic gradient variationalBayes they naturally scale to large data sets. In a series of vision-based experiments we demonstratedthat latent states can be recovered which identify the underlying physical quantities. The generativemodel showed stable long-term predictions far beyond the sequence length used during training.ACKNOWLEDGEMENTSPart of this work was conducted at Chair of Robotics and Embedded Systems, Department ofInformatics, Technische Universität München, Germany, and supported by the TACMAN project, ECGrant agreement no. 610967, within the FP7 framework programme.We would like to thank Jost Tobias Springenberg, Adam Kosiorek, Moritz Münst, and anonymousreviewers for valuable input.REFERENCESJustin Bayer and Christian Osendorfer. Learning stochastic recurrent networks. arXiv preprintarXiv:1411.7610 , 2014.Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty inneural networks. arXiv preprint arXiv:1505.05424 , 2015.Léon Bottou. Large-scale machine learning with stochastic gradient descent. In Proceedings ofCOMPSTAT’2010 , pp. 177–186. Springer, 2010.Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C. Courville, and YoshuaBengio. A recurrent latent variable model for sequential data. CoRR , abs/1506.02216, 2015. URLhttp://arxiv.org/abs/1506.02216 .Marc Deisenroth and Carl E Rasmussen. Pilco: A model-based and data-efficient approach to policysearch. In Proceedings of the 28th International Conference on machine learning (ICML-11) , pp.465–472, 2011.Zoubin Ghahramani and Geoffrey E Hinton. Parameter estimation for linear dynamical systems.Technical report, Technical Report CRG-TR-96-2, University of Toronto, Dept. of ComputerScience, 1996.Zoubin Ghahramani and Geoffrey E Hinton. Variational learning for switching state-space models.Neural computation , 12(4):831–864, 2000.Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850 ,2013.Geoffrey E Hinton and Drew Van Camp. Keeping the neural networks simple by minimizing thedescription length of the weights. In Proceedings of the sixth annual conference on Computationallearning theory , pp. 5–13. ACM, 1993.4We used the script attached to Sutskever & Hinton (2007) for generating our datasets.10Published as a conference paper at ICLR 2017Antti Honkela, Tapani Raiko, Mikael Kuusela, Matti Tornio, and Juha Karhunen. Approximateriemannian conjugate gradient learning for fixed-form variational bayes. Journal of MachineLearning Research , 11(Nov):3235–3268, 2010.Matthew J Johnson, David Duvenaud, Alexander B Wiltschko, Sandeep R Datta, and Ryan P Adams.Structured V AEs: Composing probabilistic graphical models and variational autoencoders. arXivpreprint arXiv:1603.06277 , 2016.Simon J Julier and Jeffrey K Uhlmann. 
New extension of the kalman filter to nonlinear systems. InAeroSense’97 , pp. 182–193. International Society for Optics and Photonics, 1997.Rudolph E Kalman and Richard S Bucy. New results in linear filtering and prediction theory. Journalof basic engineering , 83(1):95–108, 1961.Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprintarXiv:1312.6114 , 2013.Jonathan Ko and Dieter Fox. Learning gp-bayesfilters via gaussian process latent variable models.Autonomous Robots , 30(1):3–23, 2011.Rahul G Krishnan, Uri Shalit, and David Sontag. Deep Kalman filters. arXiv preprintarXiv:1511.05121 , 2015.Stephan Mandt, James McInerney, Farhan Abrol, Rajesh Ranganath, and David Blei. Variationaltempering. In Proceedings of the 19th International Conference on Artificial Intelligence andStatistics , pp. 704–712, 2016.Kevin McGoff, Sayan Mukherjee, Natesh Pillai, et al. Statistical inference for dynamical systems: Areview. Statistics Surveys , 9:209–252, 2015.Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approxi-mate inference in deep generative models. In Tony Jebara and Eric P. Xing (eds.), Proceedingsof the 31st International Conference on Machine Learning (ICML-14) , pp. 1278–1286. JMLRWorkshop and Conference Proceedings, 2014. URL http://jmlr.org/proceedings/papers/v32/rezende14.pdf .Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXivpreprint arXiv:1505.05770 , 2015.Ilya Sutskever and Geoffrey E. Hinton. Learning multilevel distributed representations forhigh-dimensional sequences. In Marina Meila and Xiaotong Shen (eds.), Proceedings ofthe Eleventh International Conference on Artificial Intelligence and Statistics (AISTATS-07) ,volume 2, pp. 548–555. Journal of Machine Learning Research - Proceedings Track, 2007.URL http://jmlr.csail.mit.edu/proceedings/papers/v2/sutskever07a/sutskever07a.pdf .Leonid Kuvayev Rich Sutton. Model-based reinforcement learning with an approximate, learnedmodel. In Proceedings of the ninth Yale workshop on adaptive and learning systems , pp. 101–105,1996.Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generativemodels. arXiv preprint arXiv:1511.01844 , 2015.Harri Valpola and Juha Karhunen. An unsupervised ensemble learning method for nonlinear dynamicstate-space models. Neural computation , 14(11):2647–2692, 2002.Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control:A locally linear latent dynamics model for control from raw images. In Advances in NeuralInformation Processing Systems , pp. 2728–2736, 2015.11Published as a conference paper at ICLR 2017A S UPPLEMENTARY TO LOWER BOUNDA.1 A NNEALED KL-D IVERGENCEWe used the analytical solution of the annealed KL-divergence in eq. (10) for optimization:Eq[lnq(w1:Tjx1:T;u1:T) +cilnp(w1:T)] =ci12ln(22p)12ln(22q) +ci2q+ (qp)222p12B S UPPLEMENTARY TO IMPLEMENTATIONB.1 E XPERIMENTAL SETUPIn all our experiments, we use sequences of 15 raw images of the respective system with 16 16pixels each, i.e., observation space XR256, as well as control inputs of varying dimension andinterpretation depending on the experiment. We used training, validation and test sets with 500sequences each. Control input sequences were drawn randomly (“motor babbling”). 
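As a rough sketch of how such sequences can be generated (the integration step size, torque range, and rendering are assumptions; this is not the published data generator), the pendulum ODE from section 4.1 can be integrated numerically under random "motor babbling" torques; each resulting angle would then be rendered to a 16×16 greyscale observation.

```python
import numpy as np

def simulate_pendulum(T=15, dt=0.05, m=1.0, l=1.0, mu=0.5, g=9.81, seed=0):
    """Euler-integrate m*l^2*phi'' = -mu*phi' + m*g*l*sin(phi) + u
    under random control inputs ("motor babbling")."""
    rng = np.random.default_rng(seed)
    phi, dphi = rng.uniform(-np.pi, np.pi), 0.0
    angles, controls = [], []
    for _ in range(T):
        u = rng.uniform(-2.0, 2.0)                                   # random torque
        ddphi = (-mu * dphi + m * g * l * np.sin(phi) + u) / (m * l ** 2)
        dphi += dt * ddphi
        phi += dt * dphi
        angles.append(phi)
        controls.append(u)
    return np.array(angles), np.array(controls)

angles, controls = simulate_pendulum()
# each angle would subsequently be rendered to a 16x16 greyscale image observation
```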
Additional details about the implementation can be found in the published code at https://brml.org/projects/dvbf .

B.2 ADDITIONAL EXPERIMENT PLOTS

Figure 7: Ground truth and samples from recognition and generative model. Complete version of fig. 4 with all missing samples present.

B.3 IMPLEMENTATION DETAILS FOR DVBF IN PENDULUM EXPERIMENT

Input: 15 timesteps of 16² observation dimensions and 1 action dimension
Latent Space: 3 dimensions
Observation Network p(x_t | z_t) = N(x_t; μ(z_t), Σ): 128 ReLU + 16² identity output
Recognition Model: 128 ReLU + 6 identity output
q(w_t | z_t, x_{t+1}, u_t) = N(w_t; μ, Σ), (μ, Σ) = f(z_t, x_{t+1}, u_t)
Transition Network α_t(z_t): 16 softmax output
Initial Network w_1(x_{1:T}): Fast Dropout BiRNN with 128 ReLU + 3 identity output
Initial Transition z_1(w_1): 128 ReLU + 3 identity output
Optimizer: adadelta, 0.1 step rate
Inverse temperature: c_0 = 0.01, updated every 250th gradient update, T_A = 10^5 iterations
Batch-size: 500

B.4 IMPLEMENTATION DETAILS FOR DVBF IN BOUNCING BALL EXPERIMENT

Input: 15 timesteps of 16² observation dimensions and 2 action dimensions
Latent Space: 4 dimensions
Observation Network p(x_t | z_t) = N(x_t; μ(z_t), Σ): 128 ReLU + 16² identity output
Recognition Model: 128 ReLU + 8 identity output
q(w_t | z_t, x_{t+1}, u_t) = N(w_t; μ, Σ), (μ, Σ) = f(z_t, x_{t+1}, u_t)
Transition Network α_t(z_t): 16 softmax output
Initial Network w_1(x_{1:T}): Fast Dropout BiRNN with 128 ReLU + 4 identity output
Initial Transition z_1(w_1): 128 ReLU + 4 identity output
Optimizer: adadelta, 0.1 step rate
Inverse temperature: c_0 = 0.01, updated every 250th gradient update, T_A = 10^5 iterations
Batch-size: 500

B.5 IMPLEMENTATION DETAILS FOR DVBF IN TWO BOUNCING BALLS EXPERIMENT

Input: 15 timesteps of 20² observation dimensions and 2000 samples
Latent Space: 10 dimensions
Observation Network p(x_t | z_t) = N(x_t; μ(z_t), Σ): 128 ReLU + 20² sigmoid output
Recognition Model: 128 ReLU + 20 identity output
q(w_t | z_t, x_{t+1}, u_t) = N(w_t; μ, Σ), (μ, Σ) = f(z_t, x_{t+1}, u_t)
Transition Network α_t(z_t): 64 softmax output
Initial Network w_1(x_{1:T}): MLP with 128 ReLU + 10 identity output
Initial Transition z_1(w_1): 128 ReLU + 10 identity output
Optimizer: adam, 0.001 step rate
Inverse temperature: c_0 = 0.01, updated every gradient update, T_A = 2·10^5 iterations
Batch-size: 80

B.6 IMPLEMENTATION DETAILS FOR DKF IN PENDULUM EXPERIMENT

Input: 15 timesteps of 16² observation dimensions and 1 action dimension
Latent Space: 3 dimensions
Observation Network p(x_t | z_t) = N(x_t; μ(z_t), Σ(z_t)): 128 Sigmoid + 128 Sigmoid + 2·16² identity output
Recognition Model: Fast Dropout BiRNN 128 Sigmoid + 128 Sigmoid + 3 identity output
Transition Network p(z_t | z_{t-1}, u_{t-1}): 128 Sigmoid + 128 Sigmoid + 6 output
Optimizer: adam, 0.001 step rate
Inverse temperature: c_0 = 0.01, updated every 25th gradient update, T_A = 2000 iterations
Batch-size: 500
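Putting section 3.3 together with the DVBF settings listed in B.3–B.5, here is a minimal sketch of the locally linear transition (shapes, initialization, and the mixing network are illustrative; this is not the released code): a small network maps (z_t, u_t) to M softmax weights α_t, the global base matrices are mixed accordingly, and z_{t+1} = A_t z_t + B_t u_t + C_t w_t as in eq. (12).

```python
import numpy as np

rng = np.random.default_rng(0)
nz, nu, M = 3, 1, 16          # latent dim, control dim, number of base matrices

# global base matrices (learned in the real model; random stand-ins here)
A = rng.normal(scale=0.1, size=(M, nz, nz)) + np.eye(nz)
B = rng.normal(scale=0.1, size=(M, nz, nu))
C = rng.normal(scale=0.1, size=(M, nz, nz))

# stand-in for the mixing network f_psi(z_t, u_t) with a softmax output
W = rng.normal(scale=0.1, size=(M, nz + nu))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def locally_linear_step(z_t, u_t, w_t):
    alpha = softmax(W @ np.concatenate([z_t, u_t]))   # alpha_t in R^M
    A_t = np.tensordot(alpha, A, axes=1)              # sum_i alpha_i * A^(i)
    B_t = np.tensordot(alpha, B, axes=1)
    C_t = np.tensordot(alpha, C, axes=1)
    return A_t @ z_t + B_t @ u_t + C_t @ w_t          # eq. (12)

z = np.zeros(nz)
for t in range(15):
    z = locally_linear_step(z, rng.normal(size=nu), rng.normal(size=nz))
```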
rJ9_WaKEx
S1Bm3T_lg
ICLR.cc/2017/conference/-/paper65/official/review
{"title": "An alternative to convolutional neural networks (early stage)", "rating": "6: Marginally above acceptance threshold", "review": "Thank you for an interesting read. The ideas presented have a good basis of being true, but the experiments are rather too simple. It would be interesting to see more empirical evidence.\n\nPros\n- The approach seems to decrease the training time, which is of prime importance in deep learning. Although, that comes at a price of slightly more complex model.\n- There is a grounded theory for sum-product functions which is basis for the compositional architecture described in the paper. Theoretically, any semiring and kernel could be used for the model which decreases need for handcrafting the structure of the model, which is a big problem in existing convolutional neural networks.\n\nCons\n- The experiments are on very simple dataset NORB. Although, it is great to understand a model's dynamics on a simpler dataset, some analysis on complex datasets are important to act as empirical evidence. The compositional kernel approach is compared to convolutional neural networks, hence it is only fair to compare said results on large datasets such as Imagenet.\n\nMinor\n- Section 3.4 claims that CKMs model symmetries of objects. It felt that ample justification was not provided for this claim", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Compositional Kernel Machines
["Robert Gens", "Pedro Domingos"]
Convolutional neural networks (convnets) have achieved impressive results on recent computer vision benchmarks. While they benefit from multiple layers that encode nonlinear decision boundaries and a degree of translation invariance, training convnets is a lengthy procedure fraught with local optima. Alternatively, a kernel method that incorporates the compositionality and symmetry of convnets could learn similar nonlinear concepts yet with easier training and architecture selection. We propose compositional kernel machines (CKMs), which effectively create an exponential number of virtual training instances by composing transformed sub-regions of the original ones. Despite this, CKM discriminant functions can be computed efficiently using ideas from sum-product networks. The ability to compose virtual instances in this way gives CKMs invariance to translations and other symmetries, and combats the curse of dimensionality. Just as support vector machines (SVMs) provided a compelling alternative to multilayer perceptrons when they were introduced, CKMs could become an attractive approach for object recognition and other vision problems. In this paper we define CKMs, explore their properties, and present promising results on NORB datasets. Experiments show that CKMs can outperform SVMs and be competitive with convnets in a number of dimensions, by learning symmetries and compositional concepts from fewer samples without data augmentation.
["Computer vision", "Supervised Learning"]
https://openreview.net/forum?id=S1Bm3T_lg
https://openreview.net/pdf?id=S1Bm3T_lg
https://openreview.net/forum?id=S1Bm3T_lg&noteId=rJ9_WaKEx
Under review as a conference paper at ICLR 2017COMPOSITIONAL KERNEL MACHINESRobert Gens & Pedro DomingosDepartment of Computer Science & EngineeringUniversity of WashingtonSeattle, WA 98195, USAfrcg,pedrodg@cs.washington.eduABSTRACTConvolutional neural networks (convnets) have achieved impressive results on re-cent computer vision benchmarks. While they benefit from multiple layers that en-code nonlinear decision boundaries and a degree of translation invariance, trainingconvnets is a lengthy procedure fraught with local optima. Alternatively, a kernelmethod that incorporates the compositionality and symmetry of convnets couldlearn similar nonlinear concepts yet with easier training and architecture selec-tion. We propose compositional kernel machines (CKMs), which effectively cre-ate an exponential number of virtual training instances by composing transformedsub-regions of the original ones. Despite this, CKM discriminant functions canbe computed efficiently using ideas from sum-product networks. The ability tocompose virtual instances in this way gives CKMs invariance to translations andother symmetries, and combats the curse of dimensionality. Just as support vec-tor machines (SVMs) provided a compelling alternative to multilayer perceptronswhen they were introduced, CKMs could become an attractive approach for objectrecognition and other vision problems. In this paper we define CKMs, exploretheir properties, and present promising results on NORB datasets. Experimentsshow that CKMs can outperform SVMs and be competitive with convnets in anumber of dimensions, by learning symmetries and compositional concepts fromfewer samples without data augmentation.1 I NTRODUCTIONThe depth of state-of-the-art convnets is a double-edged sword: it yields both nonlinearity for so-phisticated discrimination and nonconvexity for frustrating optimization. The established trainingprocedure for ILSVRC classification cycles through the million-image training set more than fiftytimes, requiring substantial stochasticity, data augmentation, and hand-tuned learning rates. On to-day’s consumer hardware, the process takes several days. However, performance depends heavilyon hyperparameters, which include the number and connections of neurons as well as optimizationdetails. Unfortunately, the space of hyperparameters is unbounded, and each configuration of hyper-parameters requires the aforementioned training procedure. It is no surprise that large organizationswith enough computational power to conduct this search dominate this task.Yet mastery of object recognition on a static dataset is not enough to propel robotics and internet-scale applications with ever-growing instances and categories. Each time the training set is modified,the convnet must be retrained (“fine-tuned”) for optimum performance. If the training set growslinearly with time, the total training computation grows quadratically.We propose the Compositional Kernel Machine (CKM), a kernel-based visual classifier that has thesymmetry and compositionality of convnets but with the training benefits of instance-based learning(IBL). CKMs branch from the original instance-based methods with virtual instances , an exponen-tial set of plausible compositions of training instances. 
The first steps in this direction are promisingcompared to IBL and deep methods, and future work will benefit from over fifty years of researchinto nearest neighbor algorithms, kernel methods, and neural networks.In this paper we first define CKMs, explore their formal and computational properties, and comparethem to existing kernel methods. We then propose a key contribution of this work: a sum-productfunction (SPF) that efficiently sums over an exponential number of virtual instances. We then de-1Under review as a conference paper at ICLR 2017scribe how to train the CKM with and without parameter optimization. Finally, we present resultson NORB and variants that show a CKM trained on a CPU can be competitive with convnets trainedfor much longer on a GPU and can outperform them on tests of composition and symmetry, as wellas markedly improving over previous IBL methods.2 C OMPOSITIONAL KERNEL MACHINESThe key issue in using an instance-based learner on large images is the curse of dimensionality. Evenmillions of training images are not enough to construct a meaningful neighborhood for a 256256pixel image. The compositional kernel machine (CKM) addresses this issue by constructing an ex-ponential number of virtual instances . The core hypothesis is that a variation of the visual world canbe understood as a rearrangement of low-dimensional pieces that have been seen before. For exam-ple, an image of a house could be recognized by matching many pieces from other images of housesfrom different viewpoints. The virtual instances represent this set of all possible transformationsand recombinations of the training images. The arrangement of these pieces cannot be arbitrary, soCKMs learn how to compose virtual instances with weights on compositions. A major contributionof this work is the ability to efficiently sum over this set with a sum-product function.The set of virtual instances is related to the nonlinear image manifolds described by Simard et al.(1992) but with key differences. Whereas the tangent distance accounts for transformations appliedto the whole image, virtual instances can depict local transformations that are applied differentlyacross an image. Secondly, the tangent plane approximation of the image manifold is only accuratenear the training images. Virtual instances can easily represent distant transformations. Unlike theexplicit augmentation of virtual support vectors in Sch ̈olkopf et al. (1996), the set of virtual instancesin a CKM is implicit and exponentially larger. Platt & Allen (1996) demonstrated an early versionof virtual instances to expand the set of negative examples for a linear classifier.2.1 D EFINITIONWe define CKMs using notation common to other IBL techniques. The two prototypical instance-based learners are k-nearest neighbors and support vector machines. The foundation for both algo-rithms is a similarity or kernel function K(x;x0)between two instances. Given a training set of mlabeled instances of the form hxi;yiiand queryxq, thek-NN algorithm outputs the most commonlabel of theknearest instances:ykNN(xq) = arg maxcmXi=11c=yi^K(xi;xq)K(xk;xq)where 1[]equals one if its argument is true and zero otherwise, and xkis thekthnearest traininginstance to query xqassuming unique distances. 
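For reference, the kernel-based k-NN rule above takes only a few lines of generic code (the Gaussian kernel and toy data below are assumptions, not part of the paper): the query is labeled with the most common class among the k most similar training instances.

```python
import numpy as np

def gaussian_kernel(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def knn_predict(X_train, y_train, x_q, k=3):
    """Return the most common label among the k most similar training instances."""
    sims = np.array([gaussian_kernel(x, x_q) for x in X_train])
    neighbors = y_train[np.argsort(-sims)[:k]]
    labels, counts = np.unique(neighbors, return_counts=True)
    return labels[np.argmax(counts)]

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.95, 0.9])))   # -> 1
```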
The multiclass support vector machine (Crammer& Singer, 2001) in its dual form can be seen as a weighted nearest neighbor that outputs the classwith the highest weighted sum of kernel values with the query:ySVM(xq) = arg maxcmXi=1i;cK(xi;xq) (1)wherei;cis the weight on training instance xithat contributes to the score of class c.The CKM performs the same classification as these instance-based methods but it sums over an ex-ponentially larger set of virtual instances to mitigate the curse of dimensionality. Virtual instancesare composed of rearranged elements from one or more training instances. Depending on the de-sign of the CKM, elements can be subsets of instance variables (e.g., overlapping pixel patches) orfeatures thereof (e.g., ORB features or a 2D grid of convnet feature vectors). We assume there is adeterministic procedure that processes each training or test instance xiinto a fixed tuple of indexedelementsExi= (ei;1; :::; e i;jExij), where instances may have different numbers of elements. Thequery instance xq(with tuple of elements Exq) is the example that is being classified by the CKM;it is a training instance during training and a test instance during testing. A virtual instance zisrepresented by a tuple of elements from training instances, e.g. Ez= (e10;5; e71;2; :::; e 46;17).Given a query instance xq, the CKM represents a set of virtual instances each with the same numberof elements as Exq. We define a leaf kernel KL(ei;j;ei0;j0)that measures the similarity between anytwo elements. Using kernel composition (Aronszajn, 1950), we define the kernel between the queryinstancexqand a virtual instance zas the product of leaf kernels over their corresponding elements:K(z;xq) =QjExqjjKL(ez;j;eq;j).2Under review as a conference paper at ICLR 2017We combine leaf kernels with weighted sums and products to compactly represent a sum over kernelswith an exponential number of virtual instances. Just as a sum-product network can compactly rep-resent a mixture model that is a weighted sum over an exponential number of mixture components,the same algebraic decomposition can compactly encode a weighted sum over an exponential num-ber of kernels. For example, if the query instance is represented by two elements Exq= (eq;1; eq;2)and the training set contains elements fe1; e2; e3; e4; e5; e6g, then[w1KL(eq;1;e1) +w2KL(eq;1;e2) +w3KL(eq;1;e3)][w4KL(eq;2;e4) +w5KL(eq;2;e5) +w6KL(eq;2;e6)]expresses a weighted sum over nine virtual instances using eleven additions/multiplications in-stead of twenty-six for an expanded flat sum w1KL(eq;1;e1)KL(eq;2;e4) +:::+w9KL(eq;1;e3)KL(eq;2;e6). If the query instance and training set contained 100 and 10000 elements, respectively,then a similar factorization would use O(106)operations compared to a na ̈ıve sum over 10500virtualinstances. Leveraging the Sum-Product Theorem (Friesen & Domingos, 2016), we define CKMs toallow for more expressive architectures with this exponential computational savings.Definition 1. A compositional kernel machine (CKM) is defined recursively.1. A leaf kernel over a query element and a training set element is a CKM.2. A product of CKMs with disjoint scopes is a CKM.3. A weighted sum of CKMs with the same scope is a CKM.The scope of an operator is the set of query elements it takes as inputs; it is analogous to the receptivefield of a unit in a neural network, but with CKMs the query elements are not restricted to beingpixels on the image grid (e.g., they may be defined as a set of extracted image features). 
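The saving in the two-element example above can be checked numerically. The sketch below (toy scalar "elements" and a Gaussian leaf kernel, both assumptions) confirms that the factored form equals the explicit weighted sum over all nine virtual instances, which is the distributive law that the Sum-Product Theorem generalizes.

```python
import numpy as np
from itertools import product

def K_L(a, b):                                  # toy leaf kernel on scalar "elements"
    return np.exp(-(a - b) ** 2)

eq = [0.3, 0.7]                                 # two query elements
train = [[0.1, 0.2, 0.5], [0.6, 0.8, 0.9]]      # candidate training elements per position
w = [[0.2, 0.3, 0.5], [0.1, 0.4, 0.5]]          # weight on each candidate

# factored form: product over positions of a weighted sum of leaf kernels
factored = np.prod([sum(wj * K_L(e, eq[j]) for wj, e in zip(w[j], train[j]))
                    for j in range(2)])

# flat form: explicit weighted sum over all 3 x 3 = 9 virtual instances
flat = sum(w[0][i] * w[1][k] * K_L(train[0][i], eq[0]) * K_L(train[1][k], eq[1])
           for i, k in product(range(3), range(3)))

assert np.isclose(factored, flat)
```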
A leafkernel has singleton scope, internal nodes have scope over some subset of the query elements, andthe root node of the CKM has full scope of all query elements Exq. This definition allows forrich CKM architectures with many layers to represent elaborate compositions. The value of eachsum node child is multiplied by a weight wk;cand optionally a constant cost function (ei;j;ei0;j0)that rewards certain compositions of elements. Analogous to a multiclass SVM, the CKM has aseparate set of weights for each class cin the dataset. The CKM classifies a query instance asyCKM(xq) = arg maxcSc(xq), whereSc(xq)is the value of the root node of the CKM evaluatingquery instance xqusing weights for class c.Definition 2 (Friesen & Domingos (2016)) .A product node is decomposable iff the scopes of itschildren are disjoint. An SPF is decomposable iff all of its product nodes are decomposable.Theorem 1 (Sum-Product Theorem, Friesen & Domingos (2016)) .Every decomposable SPF canbe summed over its domain in time linear in its size.Corollary 1. Sc(xq)can sum over the set of virtual instances in time linear in the size of the SPF .Proof. For each query instance element eq;jwe define a discrete variable Zjwith a state for eachtraining element ei0;j0found in a leaf kernel KL(eq;j;ei0;j0)in the CKM. The Cartesian product ofthe domains of the variables Zdefines the set of virtual instances represented by the CKM. Sc(xq)is a SPF over semiring (R;;;0;1), variablesZ, constant functions wand, and univariatefunctionsKL(eq;j;Zj). With the appropriate definition of leaf kernels, any semiring can be used.The definition above provides that the children of every product node have disjoint scopes. Constantfunctions have empty scope so there is no intersection with scopes of other children. With all productnodes decomposable, Sc(xq)is a decomposable SPF and can therefore sum over all states of Z, thevirtual instances, in time linear to the size of the CKM.Special cases of CKMs include multiclass SVMs (flat sum-of-products) and naive Bayes nearestneighbor (Boiman et al., 2008) (flat product-of-sums). A CKM can be seen as a generalization ofan image grammar (Fu, 1974) where terminal symbols corresponding to pieces of training imagesare scored with kernels and non-terminal symbols are sum nodes with a production for each childproduct node.The weights and cost functions of the CKM control the weights on the virtual instances. Eachvirtual instance represented by the CKM defines a tree that connects the root to the leaf kernelsover its unique composition of training set elements. If we were to expand the CKM into a flatsum (cf. Equation 1), the weight on a virtual instance would be the product of the weights and costfunctions along the branches of its corresponding tree. These weights are important as they canprevent implausible virtual instances. For example, if we use image patches as the elements andallow all compositions, the set of virtual instances would largely contain nonsense noise patterns. If3Under review as a conference paper at ICLR 2017the elements were pixels, the virtual instances could even contain arbitrary images from classes notpresent in the training set. There are many aspects of composition that can be encoded by the CKM.For example, we can penalize virtual instances that compose training set elements using differentsymmetry group transformations. We could also penalize compositions that juxtapose elements thatdisagree on the contents of their borders. 
Weights can be learned to establish clusters of elements andreward certain arrangements. In Section 3 we demonstrate one choice of weights and cost functionsin a CKM architecture built from extracted image features.2.2 L EARNINGThe training procedure for a CKM builds an SPF that encodes the virtual instances. There are thentwo options for how to set weights in the model. As with k-NN, the weights in the CKM could be setto uniform. Alternatively, as with SVMs, the weights could be optimized to improve generalizationand reduce model size.For weight learning, we use block-coordinate gradient descent to optimize leave-one-out loss overthe training set. The leave-one-out loss on a training instance xiis the loss on that instance made bythe learner trained on all data except xi. Though it is an almost unbiased estimate of generalizationerror (Luntz & Brailovsky, 1969), it is typically too expensive to compute or optimize with non-IBLmethods (Chapelle et al., 2002). With CKMs, caching the SPFs and efficient data structures makeit feasible to compute exact partial derivatives of the leave-one-out loss over the whole training set.We use a multiclass squared-hinge lossL(xi;yi) = max2641 +Sy0(xi)|{z}Best incorrect classSyi(xi)|{z}True class;03752for the loss on training instance xiwith true label yiand highest-scoring incorrect class y0. Weuse the squared version of the hinge loss as it performs better empirically and prioritizes updatesto element weights that led to larger margin violations. In general, this objective is not convex asit involves the difference of the two discriminant functions which are strictly convex (due to thechoice of semiring and the product of weights on each virtual instance). In the special case of thesum-product semiring and unique weights on virtual instances the objective is convex as is true forL2-SVMs. Convnets also have a non-convex objective, but they require lengthy optimization toperform well. As we show in Section 3, CKMs can achieve high accuracy with uniform weights,which further serves as good initialization for gradient descent.For each epoch, we iterate through the training set, for each training instance xioptimizing the blockof weights on those branches with Exias descendants. We take gradient steps to lower the leave-one-out loss over the rest of the training setPi02([1;m]ni)L(xi0;yi0). We iterate until convergence oran early stopping condition. A component of the gradient of the squared-hinge loss on an instancetakes the form@@wk;cL(xi;yi) =8><>:2(xi;yi)@Sy0(xi)@wk;cif(xi;yi)>0^c=y02(xi;yi)@Syi(xi)@wk;cif(xi;yi)>0^c=yi0 otherwisewhere (xi;yi) = 1 +Sy0(xi)Syi(xi). We compute partial derivatives@Sc(xi)@wk;cwith backprop-agation through the SPF. For efficiency, terms of the gradient can be set to zero and excluded frombackpropagation if the values of corresponding leaf kernels are small enough. This is either exact(e.g., ifis maximization) or an approximation (e.g., if is normal addition).2.3 S CALABILITYCKMs have several scalability advantages over convnets. As mentioned previously, they do notrequire a lengthy training procedure. This makes it much easier to add new instances and categories.Whereas most of the computation to evaluate a single setting of convnet hyperparameters is sunk intraining, CKMs can efficiently race hyperparameters on hold-out data (Lee & Moore, 1994).The evaluation of the CKM depends on the structure of the SPF, the size of the training set, andthe computer architecture. 
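As a pointer back to the weight-learning objective of section 2.2, here is a sketch of the multiclass squared-hinge loss with the per-class scores S_c taken as given (computing them requires the SPF itself; the toy scores below are illustrative):

```python
import numpy as np

def squared_hinge_loss(scores, true_class):
    """L = max(1 + best incorrect score - true-class score, 0)^2."""
    s_true = scores[true_class]
    s_wrong = np.max(np.delete(scores, true_class))
    return max(1.0 + s_wrong - s_true, 0.0) ** 2

# toy leave-one-out style sum of losses over a few instances
scores_per_instance = np.array([[2.0, 0.5, 0.1],    # instance 0, true class 0
                                [0.2, 0.4, 1.5],    # instance 1, true class 2
                                [0.9, 1.1, 0.8]])   # instance 2, true class 1
labels = [0, 2, 1]
loss = sum(squared_hinge_loss(s, y) for s, y in zip(scores_per_instance, labels))
print(loss)   # only the third instance violates the margin
```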
A basic building block of these SPFs is a sum node with a numberof children on the order of magnitude of the training set elements jEj. On a sufficiently parallel4Under review as a conference paper at ICLR 2017Table 1: Dataset propertiesName #Training Exs. - #Testing Exs. Dimensions ClassesSmall NORB 24300-24300 9696 5NORB Compositions 100-1000 256256 2NORB Symmetries f50;100;:::; 12800g-2916 108108 6computer, assuming the size of the training set elements greatly exceeds the dimensionality of theleaf kernel, this sum node will require O(log(jEj))time (the depth of a parallel reduction circuit)andO(jEj)space. Duda et al. (2000) describe a constant time nearest neighbor circuit that relies onprecomputed V oronoi partitions, but this has impractical space requirements in high dimensions. Aswith SVMs, optimization of sparse element weights can greatly reduce model size.On a modest multicore computer, we must resort to using specialized data structures. Hash codescan be used to index raw features or to measure Hamming distance as a proxy to more expensivedistance functions. While they are perhaps the fastest method to accelerate a nearest neighbor search,the most accurate hashing methods involve a training period yet do not necessarily result in highrecall (Torralba et al., 2008; Heo et al., 2012). There are many space-partitioning data structuretrees in the literature, however in practice none are able to offer exact search of nearest neighbors inhigh dimensions in logarithmic time. In our experiments we use hierarchical k-means trees (Muja& Lowe, 2009), which are a good compromise between speed and accuracy.3 E XPERIMENTSWe test CKMs on three image classification scenarios that feature images from either the smallNORB dataset or the NORB jittered-cluttered dataset (LeCun et al., 2004). Both NORB datasetscontain greyscale images of five categories of plastic toys photographed with varied altitudes, az-imuths, and lighting conditions. Table 1 summarizes the datasets. We first describe the SPN archi-tecture and then detail each of the three scenarios.3.1 E XPERIMENTAL ARCHITECTUREIn our experiments the architecture of the SPF Sc(xq)for each query image is based on its uniqueset of extracted ORB features. Like SIFT features, ORB features are rotation-invariant and producea descriptor from intensity differences, but ORB is much faster to compute and thus suitable for realtime applications (Rublee et al., 2011). The elements Exi= (ei;1;:::;e i;jEij)of each image xiareits extracted keypoints, where an element’s feature vector and image position are denoted by ~f(ei;j)and~ p(ei;j)respectively. We use the max-sum semiring ( = max ,= + ) because it is morerobust to noisy virtual instances, yields sparser gradients, is more efficient to compute, and performsbetter empirically compared with the sum-product semiring.The SPFSc(xq)maximizes over variables Z= (Z1;:::;Z jExqj)corresponding to query elementsExqwith states for all possible virtual instances. The SPF contains a unary scope max node forevery variablefZjgthat maximizes over the weighted kernels of all possible training elements E:(Zj) =Lzj2Ewzj;cKL(zj;eq;j). The SPF contains a binary scope max node for all pairsof variablesfZj;Zj0gfor which at least one corresponding query element is within the k-nearestspatial neighbors of the other. 
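A minimal sketch of the unary-scope node defined above under the max-sum semiring (⊕ = max, ⊗ = +), with a placeholder leaf kernel and toy descriptors; all values and shapes are assumptions. In log-space the weighted kernel w ⊗ K_L becomes an addition, and the node returns the best-matching training element along with its score.

```python
import numpy as np

def leaf_kernel(f_train, f_query):
    """Placeholder leaf kernel: negative squared distance between descriptors."""
    return -np.sum((f_train - f_query) ** 2)

def unary_node(query_elem, train_elems, weights):
    """Max-sum semiring: psi(Z_j) = max_z [ w_{z,c} + K_L(z, e_{q,j}) ]."""
    scores = [w + leaf_kernel(e, query_elem) for e, w in zip(train_elems, weights)]
    best = int(np.argmax(scores))
    return scores[best], best       # value and index of the maximizing training element

rng = np.random.default_rng(0)
train_elems = rng.normal(size=(1000, 32))   # toy training-set element descriptors
weights = np.zeros(1000)                    # uniform weights (the plain CKM variant)
value, argbest = unary_node(rng.normal(size=32), train_elems, weights)
```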
These nodes maximize over the weighted kernels of all possiblecombinations of training set elements.(Zj;Zj0) =Mzj2EMzj02Ewzj;cwzj0;c(zj;zj0)KL(zj;eq;j)KL(zj0;eq;j0) (2)This maximizes over all possible pairs of training set elements, weighting the two leaf kernelsby two corresponding element weights and a cost function. We use a leaf kernel for image ele-ments that incorporates both the Hamming distance between their features and the Euclidean dis-tance between their image positions: KL(ei;j;ei0;j0) = max(01dHam(~f(ei;j);~f(ei0;j0));0) +max(2jj(~ p(ei;j);~ p(ei0;j0)jj;3). This rewards training set elements that look like a query instanceelement and appear in a similar location, with thresholds for efficiency. This can represent, for ex-ample, the photographic bias to center foreground objects or a discriminative cue from seeing skyat the top of the image. We use the pairwise cost function (ei;j;ei0;j0) =1[i=i0]4that rewardscombinations of elements from the same source training image. This captures the intuition that5Under review as a conference paper at ICLR 2017compositions sourced from more images are less coherent and more likely to contain nonsense thanthose using fewer. The image is represented as a sum of these unary and binary max nodes. Thescopes of children of the sum are restricted to be disjoint, so the children f(Z1;Z2);(Z2;Z3)gwould be disallowed, for example. This restriction is what allows the SPF to be tractable, and withmultiple sums the SPF has high-treewidth. By comparison, a Markov random field expressing thesedependencies would be intractable. The root max node of the SPF has Psums as children, each ofwhich has its random set of unary and binary scope max node children that cover full scope Z. Weillustrate a simplified version of the SPF architecture in Figure 1. Though this SPF models limitedimage structure, the definition of CKMs allows for more expressive architectures as with SPNs.++query imageKLKL...KL...e1,1e1,2em,|Em|KLKL...KL...e1,1e1,2em,|Em|KLKL...KL...e1,1e1,2em,|Em|KLKL...KL...e1,1e1,2em,|Em|eq,1eq,2eq,3eq,4++++...w1,1wm,|Em|w1,2++++...w1,1wm,|Em|w1,2++++...w1,1w1,1wm,|Em|w1,2w1,1wm,|Em|++++...++++...+...+{Z1}{Z2,Z3}{Z4}Z={Z1,Z2,Z3,Z4}Figure 1: Simplified illustration of the SPF Sc(xq)architecture with max-sum semiring used inexperiments (using ORB features as elements, jExqj100). Red dots depict elements Exqof queryinstancexq. Blue dots show training set elements ei;j2E, duplicated with each query element forclarity. A boxed KLshows the leaf kernel with lines descending to its two element arguments. Thenodes are labeled with their scopes. Weights and cost functions (arguments omitted) appear nexttonodes. Only a subset of the unary and binary scope nodes are drawn. Only two of the Ptop-levelnodes are fully detailed (the children of the second are drawn faded).In the following sections, we refer to two variants CKM andCKM W. The CKM version usesuniform weights wk;c, similar to the basic k-nearest neighbor algorithm. The CKM Wmethod opti-mizes weights wk;cas described in Section 2.2. Both versions restrict weights for class cto be1(identity) for those training elements not in class c. 
This constraint ensures that method CKM isdiscriminative (as is true with k-NN) and reduces the number of parameters optimized by CKM W.The hyperparameters of ORB feature extraction, leaf kernels, cost function, and optimization werechosen using grid search on a validation set.With our CPU implementation, CKM trains in a single pass of feature extraction and storageat5ms/image, CKM Wtrains in under ten epochs at 90ms/image, and both versions test at80ms/image. The GPU-optimized convnets train at 2ms/image for many epochs and test at1ms/image. Remarkably, CKM on a CPU trains faster than the convnet on a GPU.3.2 S MALL NORBWe use the original train-test separation which measures generalization to new instances of a cate-gory (i.e. tested on toy truck that is different from the toys it was trained on). We show promisingresults in Table 2 comparing CKMs to deep and IBL methods. With improvement over k-NN andSVM, the CKM andCKM Wresults show the benefit of using virtual instances to combat the curseof dimensionality. We note that the CKM variant that does not optimize weights performs nearlyas well as the CKM Wversion that does. Since the test set uses a different set of toys, the use ofuntrained ORB features hurts the performance of the CKM. Convnets have an advantage here be-cause they discriminatively train their lowest level of features and represent richer image structure intheir architecture. To become competitive, future work should improve upon this preliminary CKM6Under review as a conference paper at ICLR 2017Table 2: Accuracy on Small NORBMethod AccuracyConvnet (14 epochs) (Bengio & LeCun, 2007) 94:0%DBM with aug. training (Salakhutdinov & Hinton, 2009) 92:8%CKM W 89:8%Convnet (2 epochs) (Bengio & LeCun, 2007) 89:6%DBM (Salakhutdinov & Hinton, 2009) 89:2%SVM (Gaussian kernel) (Bengio & LeCun, 2007) 88:4%CKM 88:3%k-NN (LeCun et al., 2004) 81:6%Logistic regression (LeCun et al., 2004) 77:5%Table 3: Accuracy on NORB CompositionsMethod Accuracy Train+Test (min)CKM 82:4% 1.5 [CPU]SVM with convnet features 75:0% 1 [GPU+CPU]Convnet 50:6% 9 [GPU]k-NN on image pixels 51:2% 0.2 [CPU]architecture. We demonstrate the advantage of CKMs for representing composition and symmetryin the following experiments.3.3 NORB C OMPOSITIONSA general goal of representation learning is to disentangle the factors of variation of a signal withouthaving to see those factors in all combinations. To evaluate progress towards this, we created imagescontaining three toys each, sourced from the small NORB training set. Small NORB contains tentypes of each toy category (e.g., ten different airplanes), which we divided into two collections. Eachimage is generated by choosing one of the collections uniformly and for each of three categories(person, airplane, animal) randomly sampling a toy from that collection with higher probability(P=56) than from the other collection (i.e., there are two children with disjoint toy collectionsbut they sometimes borrow). The task is to determine which of the two collections generated theimage. This dataset measures whether a method can distinguish different compositions withouthaving seen all possible permutations of those objects through symmetries and noisy intra-classvariation. Analogous tasks include identifying people by their clothing, recognizing social groupsby their members, and classifying cuisines by their ingredients.We compare CKMs to other methods in Table 3. Convnets and their features are computed using theTensorFlow library (Abadi et al., 2015). 
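For completeness, here is a sketch of an appearance-plus-position leaf kernel in the spirit of section 3.1 (the thresholds θ0–θ3 and the 256-bit descriptor length are illustrative assumptions, not the values selected by grid search): a thresholded normalized Hamming similarity between binary descriptors is added to a thresholded position term.

```python
import numpy as np

def leaf_kernel(f1, p1, f2, p2, th0=0.25, th1=4.0, th2=2.0, th3=0.05):
    """K_L = max(th0 - th1 * normalized Hamming distance, 0)
           + max(th2 - ||p1 - p2||, th3)     (illustrative thresholds)."""
    d_ham = np.count_nonzero(f1 != f2) / f1.size        # normalized Hamming distance
    appearance = max(th0 - th1 * d_ham, 0.0)
    position = max(th2 - np.linalg.norm(np.asarray(p1) - np.asarray(p2)), th3)
    return appearance + position

rng = np.random.default_rng(0)
f1 = rng.integers(0, 2, size=256, dtype=np.uint8)       # 256-bit ORB-like descriptor
f2 = f1.copy(); f2[:10] ^= 1                            # nearly identical descriptor
print(leaf_kernel(f1, (10.0, 12.0), f2, (11.0, 12.5)))
```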
Training convnets from few images is very difficult withoutresorting to other datasets; we augment the training set with random crops, which still yields testaccuracy near chance. In such situations it is common to train an SVM with features extracted bya convnet trained on a different, larger dataset. We use 2048-dimensional features extracted fromthe penultimate layer of the pre-trained Inception network (Szegedy et al., 2015) and a linear kernelSVM with squared-hinge loss (Pedregosa et al., 2011). Notably, the CKM is much more accuratethan the deep methods, and it is about as fast as the SVM despite not taking advantage of the GPU.Figure 2: Images from NORB Compositions3.4 NORB S YMMETRIESComposition is a useful tool for modeling the symmetries of objects. When we see an image of anobject in a new pose, parts of the image may look similar to parts of images of the object in poses wehave seen before. In this experiment, we partition the training set of NORB jittered-cluttered into a7Under review as a conference paper at ICLR 2017new dataset with 10% withheld for each of validation and testing. Training and testing on the samegroup of toy instances, this measures the ability to generalize to new angles, lighting conditions,backgrounds, and distortions.We vary the amount of training data to plot learning curves in Figure 3. We observe that CKMs arebetter able to generalize to these distortions than other methods, especially with less data. Impor-tantly, the performance of CKM improves with more data, without requiring costly optimization asdata is added. We note that the benefit of CKM Wusing weight learning becomes apparent with 200training instances. This learning curve suggests that CKMs would be well suited for applications incluttered environments with many 3D transformations (e.g., loop closure).50 200 800 3200 12800Training Instances 0% 25% 50% 75%100%AccuracyCKMwCKMSVM with convnet featuresConvnetk-NNFigure 3: Number of training instances versus accuracy on unseen symmetries in NORB4 C ONCLUSIONThis paper proposed compositional kernel machines, an instance-based method for object recog-nition that addresses some of the weaknesses of deep architectures and other kernel methods. Weshowed how using a sum-product function to represent a discriminant function leads to tractablesummation over the weighted kernels to an exponential set of virtual instances, which can mitigatethe curse of dimensionality and improve sample complexity. We proposed a method to discrimina-tively learn weights on individual instance elements and showed that this improves upon uniformweighting. Finally, we presented results in several scenarios showing that CKMs are a significantimprovement for IBL and show promise compared with deep methods.Future research directions include developing other architectures and learning procedures for CKMs,integrating symmetry transformations into the architecture through kernels and cost functions, andapplying CKMs to structured prediction, regression, and reinforcement learning problems. CKMsexhibit a reversed trade-off of fast learning speed and large model size compared to neural networks.Given that animals can benefit from both trade-offs, these results may inspire computational theoriesof different brain structures, especially the neocortex versus the cerebellum (Ito, 2012).ACKNOWLEDGMENTSThe authors are grateful to John Platt for helpful discussions and feedback. 
This research was partlysupported by ONR grant N00014-16-1-2697, AFRL contract FA8750-13-2-0019, a Google PhDFellowship, an AWS in Education Grant, and an NVIDIA academic hardware grant. The views andconclusions contained in this document are those of the authors and should not be interpreted asnecessarily representing the official policies, either expressed or implied, of ONR, AFRL, or theUnited States Government.8Under review as a conference paper at ICLR 2017REFERENCESMart ́ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S.Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, AndrewHarp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, ManjunathKudlur, Josh Levenberg, Dan Man ́e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah,Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vin-cent Vanhoucke, Vijay Vasudevan, Fernanda Vi ́egas, Oriol Vinyals, Pete Warden, Martin Watten-berg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learningon heterogeneous systems, 2015. URL http://tensorflow.org/ . Software available fromtensorflow.org.Nachman Aronszajn. Theory of reproducing kernels. Transactions of the American MathematicalSociety , 68(3):337–404, 1950.Yoshua Bengio and Yann LeCun. Scaling learning algorithms towards AI. Large-Scale KernelMachines , 34(5), 2007.Oren Boiman, Eli Shechtman, and Michal Irani. In defense of nearest-neighbor based image clas-sification. In Computer Vision and Pattern Recognition (CVPR), IEEE Conference on , pp. 1992–1999. IEEE, 2008.Olivier Chapelle, Vladimir Vapnik, Olivier Bousquet, and Sayan Mukherjee. Choosing multipleparameters for support vector machines. Machine Learning , 46(1-3):131–159, 2002.Koby Crammer and Yoram Singer. On the algorithmic implementation of multiclass kernel-basedvector machines. Journal of Machine Learning Research , 2(Dec):265–292, 2001.Richard O Duda, Peter E Hart, and David G Stork. Pattern Classification . John Wiley & Sons, 2000.Abram L Friesen and Pedro Domingos. The sum-product theorem: A foundation for learningtractable models. In Proceedings of the 33rd International Conference on Machine Learning ,2016.King Sun Fu. Syntactic Methods in Pattern Recognition , volume 112. Elsevier, 1974.Jae-Pil Heo, Youngwoon Lee, Junfeng He, Shih-Fu Chang, and Sung-Eui Yoon. Spherical hashing.InComputer Vision and Pattern Recognition (CVPR), IEEE Conference on , pp. 2957–2964. IEEE,2012.Masao Ito. The Cerebellum: Brain for an Implicit Self . FT press, 2012.Yann LeCun, Fu Jie Huang, and L ́eon Bottou. Learning methods for generic object recognitionwith invariance to pose and lighting. In Computer Vision and Pattern Recognition (CVPR), IEEEConference on , volume 2, pp. 97–104. IEEE, 2004.Mary S Lee and AW Moore. Efficient algorithms for minimizing cross validation error. In Pro-ceedings of the 8th International Conference on Machine Learning , pp. 190. Morgan Kaufmann,1994.Aleksandr Luntz and Viktor Brailovsky. On estimation of characters obtained in statistical procedureof recognition. Technicheskaya Kibernetica , 3(6):6–12, 1969.Marius Muja and David G Lowe. Fast approximate nearest neighbors with automatic algorithm con-figuration. In International Conference on Computer Vision Theory and Application (VISSAPP) ,pp. 
331–340, 2009.
Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
John C Platt and Timothy P Allen. A neural network classifier for the I1000 OCR chip. In Advances in Neural Information Processing Systems 9, pp. 938–944, 1996.
Ethan Rublee, Vincent Rabaud, Kurt Konolige, and Gary Bradski. ORB: An efficient alternative to SIFT or SURF. In 2011 International Conference on Computer Vision, pp. 2564–2571. IEEE, 2011.
Ruslan Salakhutdinov and Geoffrey E Hinton. Deep Boltzmann machines. In Proceedings of the 12th Conference on Artificial Intelligence and Statistics (AISTATS), pp. 448–455. Society for Artificial Intelligence and Statistics, 2009.
Bernhard Schölkopf, Chris Burges, and Vladimir Vapnik. Incorporating invariances in support vector learning machines. In Artificial Neural Networks (ICANN), pp. 47–52. Springer, 1996.
Patrice Simard, Yann LeCun, and John S Denker. Efficient pattern recognition using a new transformation distance. In Advances in Neural Information Processing Systems 5, 1992.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.
Antonio Torralba, Rob Fergus, and Yair Weiss. Small codes and large image databases for recognition. In Computer Vision and Pattern Recognition (CVPR), IEEE Conference on, pp. 2269–2276. IEEE, 2008.
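For the NORB Compositions experiment described in the paper text above, the strongest deep baseline is a linear SVM with squared-hinge loss trained on 2048-dimensional features from a pre-trained Inception network. Below is a minimal sketch of that baseline, assuming the features have already been extracted and saved as NumPy arrays; the file names are hypothetical, the feature-extraction step itself is omitted, and this is an illustration of the setup rather than the authors' code.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Assumed to be precomputed elsewhere: penultimate-layer convnet features and labels.
# X_*: (n, 2048) float arrays, y_*: (n,) integer label arrays. File names are hypothetical.
X_train = np.load('inception_features_train.npy')
y_train = np.load('labels_train.npy')
X_test = np.load('inception_features_test.npy')
y_test = np.load('labels_test.npy')

# Linear-kernel SVM with squared hinge loss, as in the baseline described above.
clf = LinearSVC(loss='squared_hinge', C=1.0)
clf.fit(X_train, y_train)
print('test accuracy: %.3f' % clf.score(X_test, y_test))
```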
SknsKydBx
S1Bm3T_lg
ICLR.cc/2017/conference/-/paper65/official/review
{"title": "", "rating": "5: Marginally below acceptance threshold", "review": "This paper proposes a new learning model, \"Compositional Kernel Machines (CKMs)\", that extends classic kernel machines by constructing compositional kernel functions using sum-product networks. The paper views convnets as well-learned nonlinear decision functions and attributes their success in classification to their compositional nature. This perspective motivates the design of compositional kernel functions, and the sum-product implementation is indeed interesting. I agree that composition is important for convnets, but it is not the whole story of convnets' success. One essential difference between convnets and CKMs is that all the kernels in convnets are learned directly from data, while CKMs still build on top of feature descriptors. This, I believe, limits the representational power of CKMs. A recent paper, \"Deep Convolutional Networks are Hierarchical Kernel Machines\" by Anselmi, F. et al., may be of interest to the authors.\nThe experiments in this paper seem preliminary. It is good to see promising results of CKMs on small NORB, but it is important to show competitive results on recent standard classification benchmarks, such as MNIST, CIFAR-10/100, and even ImageNet, in order to establish a novel learning model. In NORB Compositions, CKMs seem to be better than convnets at classifying images by their dominant objects. I suspect this is because of the use of sparse ORB features. It would be helpful if this paper showed the accuracy of ORB features with matching-kernel SVMs. Some details about this experiment need further clarification, such as the high and low probabilities of sampling from each collection and how many images are generated. In NORB Symmetries, CKMs show better performance than convnets with small data, but the convnets do not seem to have converged yet. Would it be possible to show results with a larger dataset?", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Compositional Kernel Machines
["Robert Gens", "Pedro Domingos"]
Convolutional neural networks (convnets) have achieved impressive results on recent computer vision benchmarks. While they benefit from multiple layers that encode nonlinear decision boundaries and a degree of translation invariance, training convnets is a lengthy procedure fraught with local optima. Alternatively, a kernel method that incorporates the compositionality and symmetry of convnets could learn similar nonlinear concepts yet with easier training and architecture selection. We propose compositional kernel machines (CKMs), which effectively create an exponential number of virtual training instances by composing transformed sub-regions of the original ones. Despite this, CKM discriminant functions can be computed efficiently using ideas from sum-product networks. The ability to compose virtual instances in this way gives CKMs invariance to translations and other symmetries, and combats the curse of dimensionality. Just as support vector machines (SVMs) provided a compelling alternative to multilayer perceptrons when they were introduced, CKMs could become an attractive approach for object recognition and other vision problems. In this paper we define CKMs, explore their properties, and present promising results on NORB datasets. Experiments show that CKMs can outperform SVMs and be competitive with convnets in a number of dimensions, by learning symmetries and compositional concepts from fewer samples without data augmentation.
["Computer vision", "Supervised Learning"]
https://openreview.net/forum?id=S1Bm3T_lg
https://openreview.net/pdf?id=S1Bm3T_lg
https://openreview.net/forum?id=S1Bm3T_lg&noteId=SknsKydBx
Under review as a conference paper at ICLR 2017COMPOSITIONAL KERNEL MACHINESRobert Gens & Pedro DomingosDepartment of Computer Science & EngineeringUniversity of WashingtonSeattle, WA 98195, USAfrcg,pedrodg@cs.washington.eduABSTRACTConvolutional neural networks (convnets) have achieved impressive results on re-cent computer vision benchmarks. While they benefit from multiple layers that en-code nonlinear decision boundaries and a degree of translation invariance, trainingconvnets is a lengthy procedure fraught with local optima. Alternatively, a kernelmethod that incorporates the compositionality and symmetry of convnets couldlearn similar nonlinear concepts yet with easier training and architecture selec-tion. We propose compositional kernel machines (CKMs), which effectively cre-ate an exponential number of virtual training instances by composing transformedsub-regions of the original ones. Despite this, CKM discriminant functions canbe computed efficiently using ideas from sum-product networks. The ability tocompose virtual instances in this way gives CKMs invariance to translations andother symmetries, and combats the curse of dimensionality. Just as support vec-tor machines (SVMs) provided a compelling alternative to multilayer perceptronswhen they were introduced, CKMs could become an attractive approach for objectrecognition and other vision problems. In this paper we define CKMs, exploretheir properties, and present promising results on NORB datasets. Experimentsshow that CKMs can outperform SVMs and be competitive with convnets in anumber of dimensions, by learning symmetries and compositional concepts fromfewer samples without data augmentation.1 I NTRODUCTIONThe depth of state-of-the-art convnets is a double-edged sword: it yields both nonlinearity for so-phisticated discrimination and nonconvexity for frustrating optimization. The established trainingprocedure for ILSVRC classification cycles through the million-image training set more than fiftytimes, requiring substantial stochasticity, data augmentation, and hand-tuned learning rates. On to-day’s consumer hardware, the process takes several days. However, performance depends heavilyon hyperparameters, which include the number and connections of neurons as well as optimizationdetails. Unfortunately, the space of hyperparameters is unbounded, and each configuration of hyper-parameters requires the aforementioned training procedure. It is no surprise that large organizationswith enough computational power to conduct this search dominate this task.Yet mastery of object recognition on a static dataset is not enough to propel robotics and internet-scale applications with ever-growing instances and categories. Each time the training set is modified,the convnet must be retrained (“fine-tuned”) for optimum performance. If the training set growslinearly with time, the total training computation grows quadratically.We propose the Compositional Kernel Machine (CKM), a kernel-based visual classifier that has thesymmetry and compositionality of convnets but with the training benefits of instance-based learning(IBL). CKMs branch from the original instance-based methods with virtual instances , an exponen-tial set of plausible compositions of training instances. 
The first steps in this direction are promisingcompared to IBL and deep methods, and future work will benefit from over fifty years of researchinto nearest neighbor algorithms, kernel methods, and neural networks.In this paper we first define CKMs, explore their formal and computational properties, and comparethem to existing kernel methods. We then propose a key contribution of this work: a sum-productfunction (SPF) that efficiently sums over an exponential number of virtual instances. We then de-1Under review as a conference paper at ICLR 2017scribe how to train the CKM with and without parameter optimization. Finally, we present resultson NORB and variants that show a CKM trained on a CPU can be competitive with convnets trainedfor much longer on a GPU and can outperform them on tests of composition and symmetry, as wellas markedly improving over previous IBL methods.2 C OMPOSITIONAL KERNEL MACHINESThe key issue in using an instance-based learner on large images is the curse of dimensionality. Evenmillions of training images are not enough to construct a meaningful neighborhood for a 256256pixel image. The compositional kernel machine (CKM) addresses this issue by constructing an ex-ponential number of virtual instances . The core hypothesis is that a variation of the visual world canbe understood as a rearrangement of low-dimensional pieces that have been seen before. For exam-ple, an image of a house could be recognized by matching many pieces from other images of housesfrom different viewpoints. The virtual instances represent this set of all possible transformationsand recombinations of the training images. The arrangement of these pieces cannot be arbitrary, soCKMs learn how to compose virtual instances with weights on compositions. A major contributionof this work is the ability to efficiently sum over this set with a sum-product function.The set of virtual instances is related to the nonlinear image manifolds described by Simard et al.(1992) but with key differences. Whereas the tangent distance accounts for transformations appliedto the whole image, virtual instances can depict local transformations that are applied differentlyacross an image. Secondly, the tangent plane approximation of the image manifold is only accuratenear the training images. Virtual instances can easily represent distant transformations. Unlike theexplicit augmentation of virtual support vectors in Sch ̈olkopf et al. (1996), the set of virtual instancesin a CKM is implicit and exponentially larger. Platt & Allen (1996) demonstrated an early versionof virtual instances to expand the set of negative examples for a linear classifier.2.1 D EFINITIONWe define CKMs using notation common to other IBL techniques. The two prototypical instance-based learners are k-nearest neighbors and support vector machines. The foundation for both algo-rithms is a similarity or kernel function K(x;x0)between two instances. Given a training set of mlabeled instances of the form hxi;yiiand queryxq, thek-NN algorithm outputs the most commonlabel of theknearest instances:ykNN(xq) = arg maxcmXi=11c=yi^K(xi;xq)K(xk;xq)where 1[]equals one if its argument is true and zero otherwise, and xkis thekthnearest traininginstance to query xqassuming unique distances. 
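The k-NN rule above simply votes among the k training instances with the largest kernel value to the query. Below is a minimal sketch, assuming a generic kernel function and small in-memory arrays; ties and efficiency concerns are ignored, and the Gaussian kernel at the end is only an illustrative choice, not one used in the paper.

```python
import numpy as np

def knn_kernel_classify(kernel, X_train, y_train, x_query, k=5):
    """Return the most common label among the k training instances
    with the highest kernel similarity to the query."""
    sims = np.array([kernel(x, x_query) for x in X_train])
    top_k = np.argsort(sims)[::-1][:k]                  # indices of the k largest kernel values
    labels, counts = np.unique(y_train[top_k], return_counts=True)
    return labels[np.argmax(counts)]

# Illustrative kernel choice for raw feature vectors.
gaussian = lambda a, b: np.exp(-0.5 * np.sum((a - b) ** 2))
```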
The multiclass support vector machine (Crammer& Singer, 2001) in its dual form can be seen as a weighted nearest neighbor that outputs the classwith the highest weighted sum of kernel values with the query:ySVM(xq) = arg maxcmXi=1i;cK(xi;xq) (1)wherei;cis the weight on training instance xithat contributes to the score of class c.The CKM performs the same classification as these instance-based methods but it sums over an ex-ponentially larger set of virtual instances to mitigate the curse of dimensionality. Virtual instancesare composed of rearranged elements from one or more training instances. Depending on the de-sign of the CKM, elements can be subsets of instance variables (e.g., overlapping pixel patches) orfeatures thereof (e.g., ORB features or a 2D grid of convnet feature vectors). We assume there is adeterministic procedure that processes each training or test instance xiinto a fixed tuple of indexedelementsExi= (ei;1; :::; e i;jExij), where instances may have different numbers of elements. Thequery instance xq(with tuple of elements Exq) is the example that is being classified by the CKM;it is a training instance during training and a test instance during testing. A virtual instance zisrepresented by a tuple of elements from training instances, e.g. Ez= (e10;5; e71;2; :::; e 46;17).Given a query instance xq, the CKM represents a set of virtual instances each with the same numberof elements as Exq. We define a leaf kernel KL(ei;j;ei0;j0)that measures the similarity between anytwo elements. Using kernel composition (Aronszajn, 1950), we define the kernel between the queryinstancexqand a virtual instance zas the product of leaf kernels over their corresponding elements:K(z;xq) =QjExqjjKL(ez;j;eq;j).2Under review as a conference paper at ICLR 2017We combine leaf kernels with weighted sums and products to compactly represent a sum over kernelswith an exponential number of virtual instances. Just as a sum-product network can compactly rep-resent a mixture model that is a weighted sum over an exponential number of mixture components,the same algebraic decomposition can compactly encode a weighted sum over an exponential num-ber of kernels. For example, if the query instance is represented by two elements Exq= (eq;1; eq;2)and the training set contains elements fe1; e2; e3; e4; e5; e6g, then[w1KL(eq;1;e1) +w2KL(eq;1;e2) +w3KL(eq;1;e3)][w4KL(eq;2;e4) +w5KL(eq;2;e5) +w6KL(eq;2;e6)]expresses a weighted sum over nine virtual instances using eleven additions/multiplications in-stead of twenty-six for an expanded flat sum w1KL(eq;1;e1)KL(eq;2;e4) +:::+w9KL(eq;1;e3)KL(eq;2;e6). If the query instance and training set contained 100 and 10000 elements, respectively,then a similar factorization would use O(106)operations compared to a na ̈ıve sum over 10500virtualinstances. Leveraging the Sum-Product Theorem (Friesen & Domingos, 2016), we define CKMs toallow for more expressive architectures with this exponential computational savings.Definition 1. A compositional kernel machine (CKM) is defined recursively.1. A leaf kernel over a query element and a training set element is a CKM.2. A product of CKMs with disjoint scopes is a CKM.3. A weighted sum of CKMs with the same scope is a CKM.The scope of an operator is the set of query elements it takes as inputs; it is analogous to the receptivefield of a unit in a neural network, but with CKMs the query elements are not restricted to beingpixels on the image grid (e.g., they may be defined as a set of extracted image features). 
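The bracketed expression above carries the central efficiency argument: a product of small weighted sums implicitly sums over every combination of its terms. Below is a tiny numeric check of that identity for the two-query-element, six-training-element example; the kernel values and weights are arbitrary random numbers used only to verify the algebra under the sum-product semiring.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
w = rng.random(6)          # weights w1..w6
k = rng.random(6)          # leaf kernel values K_L(e_q1, e_1..3) and K_L(e_q2, e_4..6)

# Factorized form: (w1*k1 + w2*k2 + w3*k3) * (w4*k4 + w5*k5 + w6*k6)
factorized = (w[:3] * k[:3]).sum() * (w[3:] * k[3:]).sum()

# Flat form: explicit sum over the 3 x 3 = 9 virtual instances.
flat = sum(w[i] * k[i] * w[j] * k[j]
           for i, j in itertools.product(range(3), range(3, 6)))

assert np.isclose(factorized, flat)   # identical values; far fewer operations in general
```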
A leafkernel has singleton scope, internal nodes have scope over some subset of the query elements, andthe root node of the CKM has full scope of all query elements Exq. This definition allows forrich CKM architectures with many layers to represent elaborate compositions. The value of eachsum node child is multiplied by a weight wk;cand optionally a constant cost function (ei;j;ei0;j0)that rewards certain compositions of elements. Analogous to a multiclass SVM, the CKM has aseparate set of weights for each class cin the dataset. The CKM classifies a query instance asyCKM(xq) = arg maxcSc(xq), whereSc(xq)is the value of the root node of the CKM evaluatingquery instance xqusing weights for class c.Definition 2 (Friesen & Domingos (2016)) .A product node is decomposable iff the scopes of itschildren are disjoint. An SPF is decomposable iff all of its product nodes are decomposable.Theorem 1 (Sum-Product Theorem, Friesen & Domingos (2016)) .Every decomposable SPF canbe summed over its domain in time linear in its size.Corollary 1. Sc(xq)can sum over the set of virtual instances in time linear in the size of the SPF .Proof. For each query instance element eq;jwe define a discrete variable Zjwith a state for eachtraining element ei0;j0found in a leaf kernel KL(eq;j;ei0;j0)in the CKM. The Cartesian product ofthe domains of the variables Zdefines the set of virtual instances represented by the CKM. Sc(xq)is a SPF over semiring (R;;;0;1), variablesZ, constant functions wand, and univariatefunctionsKL(eq;j;Zj). With the appropriate definition of leaf kernels, any semiring can be used.The definition above provides that the children of every product node have disjoint scopes. Constantfunctions have empty scope so there is no intersection with scopes of other children. With all productnodes decomposable, Sc(xq)is a decomposable SPF and can therefore sum over all states of Z, thevirtual instances, in time linear to the size of the CKM.Special cases of CKMs include multiclass SVMs (flat sum-of-products) and naive Bayes nearestneighbor (Boiman et al., 2008) (flat product-of-sums). A CKM can be seen as a generalization ofan image grammar (Fu, 1974) where terminal symbols corresponding to pieces of training imagesare scored with kernels and non-terminal symbols are sum nodes with a production for each childproduct node.The weights and cost functions of the CKM control the weights on the virtual instances. Eachvirtual instance represented by the CKM defines a tree that connects the root to the leaf kernelsover its unique composition of training set elements. If we were to expand the CKM into a flatsum (cf. Equation 1), the weight on a virtual instance would be the product of the weights and costfunctions along the branches of its corresponding tree. These weights are important as they canprevent implausible virtual instances. For example, if we use image patches as the elements andallow all compositions, the set of virtual instances would largely contain nonsense noise patterns. If3Under review as a conference paper at ICLR 2017the elements were pixels, the virtual instances could even contain arbitrary images from classes notpresent in the training set. There are many aspects of composition that can be encoded by the CKM.For example, we can penalize virtual instances that compose training set elements using differentsymmetry group transformations. We could also penalize compositions that juxtapose elements thatdisagree on the contents of their borders. 
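Definition 1 and the Sum-Product Theorem cited above imply that a decomposable SPF can be evaluated in a single bottom-up pass, with the semiring left as a parameter. The sketch below is an illustration of that evaluation, not the authors' implementation: nodes are represented as nested tuples, and swapping the operator pair switches between the sum-product and max-sum semirings.

```python
import operator

def eval_spf(node, plus, times):
    """Evaluate an SPF given as nested tuples:
       ('leaf', value) | ('prod', [children]) | ('sum', [(weight, child), ...])."""
    kind = node[0]
    if kind == 'leaf':
        return node[1]
    if kind == 'prod':
        out = None
        for child in node[1]:
            v = eval_spf(child, plus, times)
            out = v if out is None else times(out, v)
        return out
    if kind == 'sum':
        out = None
        for weight, child in node[1]:
            v = times(weight, eval_spf(child, plus, times))
            out = v if out is None else plus(out, v)
        return out
    raise ValueError(kind)

# A toy decomposable SPF over two disjoint scopes {Z1} and {Z2}.
spf = ('prod', [('sum', [(0.5, ('leaf', 2.0)), (0.5, ('leaf', 4.0))]),
                ('sum', [(1.0, ('leaf', 3.0)), (2.0, ('leaf', 1.0))])])

print(eval_spf(spf, plus=operator.add, times=operator.mul))   # sum-product semiring
print(eval_spf(spf, plus=max, times=operator.add))            # max-sum semiring
```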
Weights can be learned to establish clusters of elements andreward certain arrangements. In Section 3 we demonstrate one choice of weights and cost functionsin a CKM architecture built from extracted image features.2.2 L EARNINGThe training procedure for a CKM builds an SPF that encodes the virtual instances. There are thentwo options for how to set weights in the model. As with k-NN, the weights in the CKM could be setto uniform. Alternatively, as with SVMs, the weights could be optimized to improve generalizationand reduce model size.For weight learning, we use block-coordinate gradient descent to optimize leave-one-out loss overthe training set. The leave-one-out loss on a training instance xiis the loss on that instance made bythe learner trained on all data except xi. Though it is an almost unbiased estimate of generalizationerror (Luntz & Brailovsky, 1969), it is typically too expensive to compute or optimize with non-IBLmethods (Chapelle et al., 2002). With CKMs, caching the SPFs and efficient data structures makeit feasible to compute exact partial derivatives of the leave-one-out loss over the whole training set.We use a multiclass squared-hinge lossL(xi;yi) = max2641 +Sy0(xi)|{z}Best incorrect classSyi(xi)|{z}True class;03752for the loss on training instance xiwith true label yiand highest-scoring incorrect class y0. Weuse the squared version of the hinge loss as it performs better empirically and prioritizes updatesto element weights that led to larger margin violations. In general, this objective is not convex asit involves the difference of the two discriminant functions which are strictly convex (due to thechoice of semiring and the product of weights on each virtual instance). In the special case of thesum-product semiring and unique weights on virtual instances the objective is convex as is true forL2-SVMs. Convnets also have a non-convex objective, but they require lengthy optimization toperform well. As we show in Section 3, CKMs can achieve high accuracy with uniform weights,which further serves as good initialization for gradient descent.For each epoch, we iterate through the training set, for each training instance xioptimizing the blockof weights on those branches with Exias descendants. We take gradient steps to lower the leave-one-out loss over the rest of the training setPi02([1;m]ni)L(xi0;yi0). We iterate until convergence oran early stopping condition. A component of the gradient of the squared-hinge loss on an instancetakes the form@@wk;cL(xi;yi) =8><>:2(xi;yi)@Sy0(xi)@wk;cif(xi;yi)>0^c=y02(xi;yi)@Syi(xi)@wk;cif(xi;yi)>0^c=yi0 otherwisewhere (xi;yi) = 1 +Sy0(xi)Syi(xi). We compute partial derivatives@Sc(xi)@wk;cwith backprop-agation through the SPF. For efficiency, terms of the gradient can be set to zero and excluded frombackpropagation if the values of corresponding leaf kernels are small enough. This is either exact(e.g., ifis maximization) or an approximation (e.g., if is normal addition).2.3 S CALABILITYCKMs have several scalability advantages over convnets. As mentioned previously, they do notrequire a lengthy training procedure. This makes it much easier to add new instances and categories.Whereas most of the computation to evaluate a single setting of convnet hyperparameters is sunk intraining, CKMs can efficiently race hyperparameters on hold-out data (Lee & Moore, 1994).The evaluation of the CKM depends on the structure of the SPF, the size of the training set, andthe computer architecture. 
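The weight-learning objective above is a multiclass squared hinge loss on the class scores S_c(x_i). Below is a minimal sketch of the loss and its subgradient with respect to the scores, mirroring the case split given in the text; propagating these gradients to individual element weights by backpropagation through the SPF is omitted, and the function is an illustration rather than the authors' code.

```python
import numpy as np

def squared_hinge(scores, y_true):
    """scores: (num_classes,) float array of S_c(x_i); y_true: index of the true class.
    Returns the loss and dL/dscores."""
    wrong = np.delete(np.arange(len(scores)), y_true)
    y_bad = wrong[np.argmax(scores[wrong])]            # best incorrect class y'
    margin = 1.0 + scores[y_bad] - scores[y_true]
    grad = np.zeros(len(scores), dtype=float)
    loss = 0.0
    if margin > 0.0:
        loss = margin ** 2
        grad[y_bad] = 2.0 * margin                     # descent lowers S_{y'}
        grad[y_true] = -2.0 * margin                   # descent raises S_{y_i}
    return loss, grad

# Worked example: margin = 1 + 3.5 - 2.0 = 2.5, so loss = 6.25, grad = [-5, 5, 0].
print(squared_hinge(np.array([2.0, 3.5, 1.0]), y_true=0))
```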
A basic building block of these SPFs is a sum node with a numberof children on the order of magnitude of the training set elements jEj. On a sufficiently parallel4Under review as a conference paper at ICLR 2017Table 1: Dataset propertiesName #Training Exs. - #Testing Exs. Dimensions ClassesSmall NORB 24300-24300 9696 5NORB Compositions 100-1000 256256 2NORB Symmetries f50;100;:::; 12800g-2916 108108 6computer, assuming the size of the training set elements greatly exceeds the dimensionality of theleaf kernel, this sum node will require O(log(jEj))time (the depth of a parallel reduction circuit)andO(jEj)space. Duda et al. (2000) describe a constant time nearest neighbor circuit that relies onprecomputed V oronoi partitions, but this has impractical space requirements in high dimensions. Aswith SVMs, optimization of sparse element weights can greatly reduce model size.On a modest multicore computer, we must resort to using specialized data structures. Hash codescan be used to index raw features or to measure Hamming distance as a proxy to more expensivedistance functions. While they are perhaps the fastest method to accelerate a nearest neighbor search,the most accurate hashing methods involve a training period yet do not necessarily result in highrecall (Torralba et al., 2008; Heo et al., 2012). There are many space-partitioning data structuretrees in the literature, however in practice none are able to offer exact search of nearest neighbors inhigh dimensions in logarithmic time. In our experiments we use hierarchical k-means trees (Muja& Lowe, 2009), which are a good compromise between speed and accuracy.3 E XPERIMENTSWe test CKMs on three image classification scenarios that feature images from either the smallNORB dataset or the NORB jittered-cluttered dataset (LeCun et al., 2004). Both NORB datasetscontain greyscale images of five categories of plastic toys photographed with varied altitudes, az-imuths, and lighting conditions. Table 1 summarizes the datasets. We first describe the SPN archi-tecture and then detail each of the three scenarios.3.1 E XPERIMENTAL ARCHITECTUREIn our experiments the architecture of the SPF Sc(xq)for each query image is based on its uniqueset of extracted ORB features. Like SIFT features, ORB features are rotation-invariant and producea descriptor from intensity differences, but ORB is much faster to compute and thus suitable for realtime applications (Rublee et al., 2011). The elements Exi= (ei;1;:::;e i;jEij)of each image xiareits extracted keypoints, where an element’s feature vector and image position are denoted by ~f(ei;j)and~ p(ei;j)respectively. We use the max-sum semiring ( = max ,= + ) because it is morerobust to noisy virtual instances, yields sparser gradients, is more efficient to compute, and performsbetter empirically compared with the sum-product semiring.The SPFSc(xq)maximizes over variables Z= (Z1;:::;Z jExqj)corresponding to query elementsExqwith states for all possible virtual instances. The SPF contains a unary scope max node forevery variablefZjgthat maximizes over the weighted kernels of all possible training elements E:(Zj) =Lzj2Ewzj;cKL(zj;eq;j). The SPF contains a binary scope max node for all pairsof variablesfZj;Zj0gfor which at least one corresponding query element is within the k-nearestspatial neighbors of the other. 
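Under the max-sum semiring used in the experiments, the unary-scope node for a query element takes the best weighted leaf-kernel match over the stored training elements, and the binary-scope node additionally adds the same-source cost for a pair of matches (Equation 2 above). The brute-force sketch below is illustrative only: the stand-in leaf kernel, the element representation, and the exhaustive loops (in place of the paper's hierarchical k-means trees) are all assumptions made for this example.

```python
import itertools
import numpy as np

def leaf_kernel(q, t):
    # Stand-in similarity for illustration; the paper's K_L combines ORB Hamming
    # distance and image-position distance instead.
    return -float(np.sum((q - t) ** 2))

def unary_node(q_elem, train_elems, weights):
    """psi(Z_j): under max-sum, 'plus' is max and 'times' is +, so weights are added."""
    return max(w + leaf_kernel(q_elem, t) for t, w in zip(train_elems, weights))

def binary_node(q_a, q_b, train_elems, sources, weights, same_source_bonus=1.0):
    """psi(Z_j, Z_j'): best pair of weighted matches plus the same-source cost."""
    best = -np.inf
    for (i, ti), (j, tj) in itertools.product(enumerate(train_elems), repeat=2):
        score = (weights[i] + leaf_kernel(q_a, ti)
                 + weights[j] + leaf_kernel(q_b, tj)
                 + (same_source_bonus if sources[i] == sources[j] else 0.0))
        best = max(best, score)
    return best

# Toy usage: three stored elements from two source images.
train = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
sources = [0, 0, 1]
weights = [0.0, 0.0, 0.0]          # uniform weights; the max-sum multiplicative identity is 0
print(unary_node(np.array([0.1, 0.0]), train, weights))
print(binary_node(np.array([0.1, 0.0]), np.array([0.0, 0.9]), train, sources, weights))
```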
These nodes maximize over the weighted kernels of all possiblecombinations of training set elements.(Zj;Zj0) =Mzj2EMzj02Ewzj;cwzj0;c(zj;zj0)KL(zj;eq;j)KL(zj0;eq;j0) (2)This maximizes over all possible pairs of training set elements, weighting the two leaf kernelsby two corresponding element weights and a cost function. We use a leaf kernel for image ele-ments that incorporates both the Hamming distance between their features and the Euclidean dis-tance between their image positions: KL(ei;j;ei0;j0) = max(01dHam(~f(ei;j);~f(ei0;j0));0) +max(2jj(~ p(ei;j);~ p(ei0;j0)jj;3). This rewards training set elements that look like a query instanceelement and appear in a similar location, with thresholds for efficiency. This can represent, for ex-ample, the photographic bias to center foreground objects or a discriminative cue from seeing skyat the top of the image. We use the pairwise cost function (ei;j;ei0;j0) =1[i=i0]4that rewardscombinations of elements from the same source training image. This captures the intuition that5Under review as a conference paper at ICLR 2017compositions sourced from more images are less coherent and more likely to contain nonsense thanthose using fewer. The image is represented as a sum of these unary and binary max nodes. Thescopes of children of the sum are restricted to be disjoint, so the children f(Z1;Z2);(Z2;Z3)gwould be disallowed, for example. This restriction is what allows the SPF to be tractable, and withmultiple sums the SPF has high-treewidth. By comparison, a Markov random field expressing thesedependencies would be intractable. The root max node of the SPF has Psums as children, each ofwhich has its random set of unary and binary scope max node children that cover full scope Z. Weillustrate a simplified version of the SPF architecture in Figure 1. Though this SPF models limitedimage structure, the definition of CKMs allows for more expressive architectures as with SPNs.++query imageKLKL...KL...e1,1e1,2em,|Em|KLKL...KL...e1,1e1,2em,|Em|KLKL...KL...e1,1e1,2em,|Em|KLKL...KL...e1,1e1,2em,|Em|eq,1eq,2eq,3eq,4++++...w1,1wm,|Em|w1,2++++...w1,1wm,|Em|w1,2++++...w1,1w1,1wm,|Em|w1,2w1,1wm,|Em|++++...++++...+...+{Z1}{Z2,Z3}{Z4}Z={Z1,Z2,Z3,Z4}Figure 1: Simplified illustration of the SPF Sc(xq)architecture with max-sum semiring used inexperiments (using ORB features as elements, jExqj100). Red dots depict elements Exqof queryinstancexq. Blue dots show training set elements ei;j2E, duplicated with each query element forclarity. A boxed KLshows the leaf kernel with lines descending to its two element arguments. Thenodes are labeled with their scopes. Weights and cost functions (arguments omitted) appear nexttonodes. Only a subset of the unary and binary scope nodes are drawn. Only two of the Ptop-levelnodes are fully detailed (the children of the second are drawn faded).In the following sections, we refer to two variants CKM andCKM W. The CKM version usesuniform weights wk;c, similar to the basic k-nearest neighbor algorithm. The CKM Wmethod opti-mizes weights wk;cas described in Section 2.2. Both versions restrict weights for class cto be1(identity) for those training elements not in class c. 
This constraint ensures that method CKM isdiscriminative (as is true with k-NN) and reduces the number of parameters optimized by CKM W.The hyperparameters of ORB feature extraction, leaf kernels, cost function, and optimization werechosen using grid search on a validation set.With our CPU implementation, CKM trains in a single pass of feature extraction and storageat5ms/image, CKM Wtrains in under ten epochs at 90ms/image, and both versions test at80ms/image. The GPU-optimized convnets train at 2ms/image for many epochs and test at1ms/image. Remarkably, CKM on a CPU trains faster than the convnet on a GPU.3.2 S MALL NORBWe use the original train-test separation which measures generalization to new instances of a cate-gory (i.e. tested on toy truck that is different from the toys it was trained on). We show promisingresults in Table 2 comparing CKMs to deep and IBL methods. With improvement over k-NN andSVM, the CKM andCKM Wresults show the benefit of using virtual instances to combat the curseof dimensionality. We note that the CKM variant that does not optimize weights performs nearlyas well as the CKM Wversion that does. Since the test set uses a different set of toys, the use ofuntrained ORB features hurts the performance of the CKM. Convnets have an advantage here be-cause they discriminatively train their lowest level of features and represent richer image structure intheir architecture. To become competitive, future work should improve upon this preliminary CKM6Under review as a conference paper at ICLR 2017Table 2: Accuracy on Small NORBMethod AccuracyConvnet (14 epochs) (Bengio & LeCun, 2007) 94:0%DBM with aug. training (Salakhutdinov & Hinton, 2009) 92:8%CKM W 89:8%Convnet (2 epochs) (Bengio & LeCun, 2007) 89:6%DBM (Salakhutdinov & Hinton, 2009) 89:2%SVM (Gaussian kernel) (Bengio & LeCun, 2007) 88:4%CKM 88:3%k-NN (LeCun et al., 2004) 81:6%Logistic regression (LeCun et al., 2004) 77:5%Table 3: Accuracy on NORB CompositionsMethod Accuracy Train+Test (min)CKM 82:4% 1.5 [CPU]SVM with convnet features 75:0% 1 [GPU+CPU]Convnet 50:6% 9 [GPU]k-NN on image pixels 51:2% 0.2 [CPU]architecture. We demonstrate the advantage of CKMs for representing composition and symmetryin the following experiments.3.3 NORB C OMPOSITIONSA general goal of representation learning is to disentangle the factors of variation of a signal withouthaving to see those factors in all combinations. To evaluate progress towards this, we created imagescontaining three toys each, sourced from the small NORB training set. Small NORB contains tentypes of each toy category (e.g., ten different airplanes), which we divided into two collections. Eachimage is generated by choosing one of the collections uniformly and for each of three categories(person, airplane, animal) randomly sampling a toy from that collection with higher probability(P=56) than from the other collection (i.e., there are two children with disjoint toy collectionsbut they sometimes borrow). The task is to determine which of the two collections generated theimage. This dataset measures whether a method can distinguish different compositions withouthaving seen all possible permutations of those objects through symmetries and noisy intra-classvariation. Analogous tasks include identifying people by their clothing, recognizing social groupsby their members, and classifying cuisines by their ingredients.We compare CKMs to other methods in Table 3. Convnets and their features are computed using theTensorFlow library (Abadi et al., 2015). 
Training convnets from few images is very difficult withoutresorting to other datasets; we augment the training set with random crops, which still yields testaccuracy near chance. In such situations it is common to train an SVM with features extracted bya convnet trained on a different, larger dataset. We use 2048-dimensional features extracted fromthe penultimate layer of the pre-trained Inception network (Szegedy et al., 2015) and a linear kernelSVM with squared-hinge loss (Pedregosa et al., 2011). Notably, the CKM is much more accuratethan the deep methods, and it is about as fast as the SVM despite not taking advantage of the GPU.Figure 2: Images from NORB Compositions3.4 NORB S YMMETRIESComposition is a useful tool for modeling the symmetries of objects. When we see an image of anobject in a new pose, parts of the image may look similar to parts of images of the object in poses wehave seen before. In this experiment, we partition the training set of NORB jittered-cluttered into a7Under review as a conference paper at ICLR 2017new dataset with 10% withheld for each of validation and testing. Training and testing on the samegroup of toy instances, this measures the ability to generalize to new angles, lighting conditions,backgrounds, and distortions.We vary the amount of training data to plot learning curves in Figure 3. We observe that CKMs arebetter able to generalize to these distortions than other methods, especially with less data. Impor-tantly, the performance of CKM improves with more data, without requiring costly optimization asdata is added. We note that the benefit of CKM Wusing weight learning becomes apparent with 200training instances. This learning curve suggests that CKMs would be well suited for applications incluttered environments with many 3D transformations (e.g., loop closure).50 200 800 3200 12800Training Instances 0% 25% 50% 75%100%AccuracyCKMwCKMSVM with convnet featuresConvnetk-NNFigure 3: Number of training instances versus accuracy on unseen symmetries in NORB4 C ONCLUSIONThis paper proposed compositional kernel machines, an instance-based method for object recog-nition that addresses some of the weaknesses of deep architectures and other kernel methods. Weshowed how using a sum-product function to represent a discriminant function leads to tractablesummation over the weighted kernels to an exponential set of virtual instances, which can mitigatethe curse of dimensionality and improve sample complexity. We proposed a method to discrimina-tively learn weights on individual instance elements and showed that this improves upon uniformweighting. Finally, we presented results in several scenarios showing that CKMs are a significantimprovement for IBL and show promise compared with deep methods.Future research directions include developing other architectures and learning procedures for CKMs,integrating symmetry transformations into the architecture through kernels and cost functions, andapplying CKMs to structured prediction, regression, and reinforcement learning problems. CKMsexhibit a reversed trade-off of fast learning speed and large model size compared to neural networks.Given that animals can benefit from both trade-offs, these results may inspire computational theoriesof different brain structures, especially the neocortex versus the cerebellum (Ito, 2012).ACKNOWLEDGMENTSThe authors are grateful to John Platt for helpful discussions and feedback. 
This research was partlysupported by ONR grant N00014-16-1-2697, AFRL contract FA8750-13-2-0019, a Google PhDFellowship, an AWS in Education Grant, and an NVIDIA academic hardware grant. The views andconclusions contained in this document are those of the authors and should not be interpreted asnecessarily representing the official policies, either expressed or implied, of ONR, AFRL, or theUnited States Government.8Under review as a conference paper at ICLR 2017REFERENCESMart ́ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S.Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, AndrewHarp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, ManjunathKudlur, Josh Levenberg, Dan Man ́e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah,Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vin-cent Vanhoucke, Vijay Vasudevan, Fernanda Vi ́egas, Oriol Vinyals, Pete Warden, Martin Watten-berg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learningon heterogeneous systems, 2015. URL http://tensorflow.org/ . Software available fromtensorflow.org.Nachman Aronszajn. Theory of reproducing kernels. Transactions of the American MathematicalSociety , 68(3):337–404, 1950.Yoshua Bengio and Yann LeCun. Scaling learning algorithms towards AI. Large-Scale KernelMachines , 34(5), 2007.Oren Boiman, Eli Shechtman, and Michal Irani. In defense of nearest-neighbor based image clas-sification. In Computer Vision and Pattern Recognition (CVPR), IEEE Conference on , pp. 1992–1999. IEEE, 2008.Olivier Chapelle, Vladimir Vapnik, Olivier Bousquet, and Sayan Mukherjee. Choosing multipleparameters for support vector machines. Machine Learning , 46(1-3):131–159, 2002.Koby Crammer and Yoram Singer. On the algorithmic implementation of multiclass kernel-basedvector machines. Journal of Machine Learning Research , 2(Dec):265–292, 2001.Richard O Duda, Peter E Hart, and David G Stork. Pattern Classification . John Wiley & Sons, 2000.Abram L Friesen and Pedro Domingos. The sum-product theorem: A foundation for learningtractable models. In Proceedings of the 33rd International Conference on Machine Learning ,2016.King Sun Fu. Syntactic Methods in Pattern Recognition , volume 112. Elsevier, 1974.Jae-Pil Heo, Youngwoon Lee, Junfeng He, Shih-Fu Chang, and Sung-Eui Yoon. Spherical hashing.InComputer Vision and Pattern Recognition (CVPR), IEEE Conference on , pp. 2957–2964. IEEE,2012.Masao Ito. The Cerebellum: Brain for an Implicit Self . FT press, 2012.Yann LeCun, Fu Jie Huang, and L ́eon Bottou. Learning methods for generic object recognitionwith invariance to pose and lighting. In Computer Vision and Pattern Recognition (CVPR), IEEEConference on , volume 2, pp. 97–104. IEEE, 2004.Mary S Lee and AW Moore. Efficient algorithms for minimizing cross validation error. In Pro-ceedings of the 8th International Conference on Machine Learning , pp. 190. Morgan Kaufmann,1994.Aleksandr Luntz and Viktor Brailovsky. On estimation of characters obtained in statistical procedureof recognition. Technicheskaya Kibernetica , 3(6):6–12, 1969.Marius Muja and David G Lowe. Fast approximate nearest neighbors with automatic algorithm con-figuration. In International Conference on Computer Vision Theory and Application (VISSAPP) ,pp. 
331–340, 2009.Fabian Pedregosa, Ga ̈el Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, OlivierGrisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincnet Dubourg, Jake Vanderplas,Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and ́Edouard Duch-esnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research , 12:2825–2830, 2011.9Under review as a conference paper at ICLR 2017John C Platt and Timothy P Allen. A neural network classifier for the I1000 OCR chip. In Advancesin Neural Information Processing Systems 9 , pp. 938–944, 1996.Ethan Rublee, Vincent Rabaud, Kurt Konolige, and Gary Bradski. ORB: An efficient alternative toSIFT or SURF. In 2011 International Conference on Computer Vision , pp. 2564–2571. IEEE,2011.Ruslan Salakhutdinov and Geoffrey E Hinton. Deep Boltzmann machines. In Proceedings of the12th Conference on Artificial Intelligence and Statistics (AISTATS) , pp. 448–455. Society forArtificial Intelligence and Statistics, 2009.Bernhard Sch ̈olkopf, Chris Burges, and Vladimir Vapnik. Incorporating invariances in support vec-tor learning machines. In Artificial Neural Networks (ICANN) , pp. 47–52. Springer, 1996.Patrice Simard, Yann LeCun, and John S Denker. Efficient pattern recognition using a new transfor-mation distance. In Advances in Neural Information Processing Systems 5 , 1992.Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Re-thinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567 , 2015.Antonio Torralba, Rob Fergus, and Yair Weiss. Small codes and large image databases for recogni-tion. In Computer Vision and Pattern Recognition (CVPR), IEEE Conference on , pp. 2269–2276.IEEE, 2008.10
H1CUmANre
S1Bm3T_lg
ICLR.cc/2017/conference/-/paper65/official/review
{"title": "Interesting but much more detail is needed", "rating": "5: Marginally below acceptance threshold", "review": "The authors propose a method to efficiently augment an SVM variant with many virtual instances, and show promising preliminary results. The paper was an interesting read, with thoughtful methodology, but has partially unsupported and potentially misleading claims.\n\nPros:\n- Thoughtful methodology with sensible design choices\n- Potentially useful for smaller (n < 10000) datasets with a lot of statistical structure\n- Nice connections with sum-product literature\n\nCons:\n- Claims about scalability are very unclear\n- Generally the paper does not succeed in telling a complete story about the properties and applicability of the proposed method.\n- Experiments are very preliminary \n\nThe scalability claims are particularly unclear. The paper repeatedly mentions lack of scalability as a drawback for convnets, but it appears the proposed CKM is less scalable than a standard SVM, yet SVMs often handle much fewer training instances than deep neural networks. It appears the scalability advantages are mostly for training sets with roughly fewer than 10,000 instances -- and even if the method could scale to >> 10,000 training instances, it's unclear whether the predictive accuracy would be competitive with convnets in that domain. Moreover, the idea of doing 10^6 operations simply for creating virtual instances on 10^4 training points and 100 test points is still somewhat daunting. What if we had 10^6 training instances and 10^5 testing instances? Because scalability (in the number of training instances) is one of the biggest drawbacks of using SVMs (e.g. with Gaussian kernels) on modern datasets, the scalability claims in this paper need to be significantly expanded and clarified. On a related note, the suggestion that convnets grow quadratically in computation with additional training instances in the introduction needs to be augmented with more detail, and is potentially misleading. Convnets typically scale linearly with additional training data. \n\nIn general, the paper suffers greatly from a lack of clarity and issues of presentation. As above, the full story is not presented, with critical details often missing. Moreover, it would strengthen the paper to remove broad claims such as \"Just as support vector machines (SVMs) eclipsed multilayer perceptrons in the 1990s, CKMs could become a compelling alternative to convnets with reduced training time and sample complexity\", suggesting that CKMs could eclipse convolutional neural networks, and instead provide more helpful and precise information. Convnets are multilayer perceptrons used in the 1990s (as well as now) and they are not eclipsed by SVMs -- they have different relative advantages. And based on the information presented, broadly advertising scalability over convnets is misleading. Can CKMs scale to datasets with millions of training and test instances? It seems as if the scalability advantages are limited to smaller datasets, and asymptotic scalability could be much worse in general. And even if CKMs could scale to such datasets would they have as good predictive accuracy as convnets on those applications? Being specific and with full disclosure about the precise strengths and limitations of the work would greatly improve this paper.\n\nCKMs may be more robust to adversarial examples than standard convnets, due to the virtual instances. 
But there are many approaches to make deep nets more robust to adversarial examples. It would be useful to consider and compare to these. The ideas behind CKMs also are not inherently specific to kernel methods. Have you considered looking at using virtual instances in a similar way with deep networks? A full exploration might be its own paper, but the idea is worth at least brief discussion in the text. \n\nA big advantage of SVMs (with Gaussian kernels) over deep neural nets is that one can achieve quite good performance with very little human intervention (design choices). However, CKMs seem to require extensive intervention, in terms of architecture (as with a neural network), and in insuring that the virtual instances are created in a plausible manner for the particular application at hand. It's very unclear in general how one would want to create sensible virtual instances and this topic deserves further consideration. Moreover, unlike SVMs (with for example Gaussian or linear kernels) or standard convolutional networks, which are quite general models, CKMs as applied in this paper seem more like SVMs (or kernel methods) which have been highly tailored to a particular application -- in this case, the NORB dataset. There is certainly nothing wrong with the tailored approach, but it would help to be clear and detailed about where the presented ideas can be applied out of the box, or how one would go about making the relevant design choices for a range of different problems. And indeed, it would be good to avoid the potentially misleading suggestions early in the paper that the proposed method is a general alternative to convnets.\n\nThe experiments give some insights into the advantages of the proposed approach, but are very limited. To get a sense of the properties --the strengths and limitations -- of the proposed method, one needs a greater range of datasets with a much larger range of training and test sizes. The comparisons are also quite limited: why not an SVM with a Gaussian kernel? What about an SVM using convnet features from the dataset at hand (light blue curve in figure 3) -- it should do at least as well as the light blue curve. There are also other works that could be considered which combine some of the advantages of kernel methods with deep networks. Also the claim that the approach helps with the curse of dimensionality is sensible but not particularly explored. It also seems the curse of dimensionality could affect the scalability of creating a useful set of virtual instances. And it's unclear how CKM would work without any ORB features. \n\nEven if the method can (be adapted to) scale to n >> 10000, it's unclear whether it will be more useful than convnets in that domain. Indeed, in the experiments here, convnets essentially match CKMs in performance after 12,000 examples, and would probably perform better than CKMs on larger datasets. We can only speculate because the experiments don't consider larger problems.\n\nThe methodology largely takes inspiration from sum product networks, but its application in the context of a kernel approach is reasonably original, and worthy of exploration. 
It's reasonable to expect the approach to be significant, but its significance is not demonstrated.\n\nThe quality is high in the sense that the methods and insights are thoughtful, but the paper suffers from broad claims and a lack of full and precise detail.\n\nIn short: I like the paper, but it needs more specific details and a full disclosure of where the method is most applicable, along with its precise advantages and limitations. Code would be helpful for reproducibility.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Compositional Kernel Machines
["Robert Gens", "Pedro Domingos"]
Convolutional neural networks (convnets) have achieved impressive results on recent computer vision benchmarks. While they benefit from multiple layers that encode nonlinear decision boundaries and a degree of translation invariance, training convnets is a lengthy procedure fraught with local optima. Alternatively, a kernel method that incorporates the compositionality and symmetry of convnets could learn similar nonlinear concepts yet with easier training and architecture selection. We propose compositional kernel machines (CKMs), which effectively create an exponential number of virtual training instances by composing transformed sub-regions of the original ones. Despite this, CKM discriminant functions can be computed efficiently using ideas from sum-product networks. The ability to compose virtual instances in this way gives CKMs invariance to translations and other symmetries, and combats the curse of dimensionality. Just as support vector machines (SVMs) provided a compelling alternative to multilayer perceptrons when they were introduced, CKMs could become an attractive approach for object recognition and other vision problems. In this paper we define CKMs, explore their properties, and present promising results on NORB datasets. Experiments show that CKMs can outperform SVMs and be competitive with convnets in a number of dimensions, by learning symmetries and compositional concepts from fewer samples without data augmentation.
["Computer vision", "Supervised Learning"]
https://openreview.net/forum?id=S1Bm3T_lg
https://openreview.net/pdf?id=S1Bm3T_lg
https://openreview.net/forum?id=S1Bm3T_lg&noteId=H1CUmANre
Under review as a conference paper at ICLR 2017COMPOSITIONAL KERNEL MACHINESRobert Gens & Pedro DomingosDepartment of Computer Science & EngineeringUniversity of WashingtonSeattle, WA 98195, USAfrcg,pedrodg@cs.washington.eduABSTRACTConvolutional neural networks (convnets) have achieved impressive results on re-cent computer vision benchmarks. While they benefit from multiple layers that en-code nonlinear decision boundaries and a degree of translation invariance, trainingconvnets is a lengthy procedure fraught with local optima. Alternatively, a kernelmethod that incorporates the compositionality and symmetry of convnets couldlearn similar nonlinear concepts yet with easier training and architecture selec-tion. We propose compositional kernel machines (CKMs), which effectively cre-ate an exponential number of virtual training instances by composing transformedsub-regions of the original ones. Despite this, CKM discriminant functions canbe computed efficiently using ideas from sum-product networks. The ability tocompose virtual instances in this way gives CKMs invariance to translations andother symmetries, and combats the curse of dimensionality. Just as support vec-tor machines (SVMs) provided a compelling alternative to multilayer perceptronswhen they were introduced, CKMs could become an attractive approach for objectrecognition and other vision problems. In this paper we define CKMs, exploretheir properties, and present promising results on NORB datasets. Experimentsshow that CKMs can outperform SVMs and be competitive with convnets in anumber of dimensions, by learning symmetries and compositional concepts fromfewer samples without data augmentation.1 I NTRODUCTIONThe depth of state-of-the-art convnets is a double-edged sword: it yields both nonlinearity for so-phisticated discrimination and nonconvexity for frustrating optimization. The established trainingprocedure for ILSVRC classification cycles through the million-image training set more than fiftytimes, requiring substantial stochasticity, data augmentation, and hand-tuned learning rates. On to-day’s consumer hardware, the process takes several days. However, performance depends heavilyon hyperparameters, which include the number and connections of neurons as well as optimizationdetails. Unfortunately, the space of hyperparameters is unbounded, and each configuration of hyper-parameters requires the aforementioned training procedure. It is no surprise that large organizationswith enough computational power to conduct this search dominate this task.Yet mastery of object recognition on a static dataset is not enough to propel robotics and internet-scale applications with ever-growing instances and categories. Each time the training set is modified,the convnet must be retrained (“fine-tuned”) for optimum performance. If the training set growslinearly with time, the total training computation grows quadratically.We propose the Compositional Kernel Machine (CKM), a kernel-based visual classifier that has thesymmetry and compositionality of convnets but with the training benefits of instance-based learning(IBL). CKMs branch from the original instance-based methods with virtual instances , an exponen-tial set of plausible compositions of training instances. 
The first steps in this direction are promisingcompared to IBL and deep methods, and future work will benefit from over fifty years of researchinto nearest neighbor algorithms, kernel methods, and neural networks.In this paper we first define CKMs, explore their formal and computational properties, and comparethem to existing kernel methods. We then propose a key contribution of this work: a sum-productfunction (SPF) that efficiently sums over an exponential number of virtual instances. We then de-1Under review as a conference paper at ICLR 2017scribe how to train the CKM with and without parameter optimization. Finally, we present resultson NORB and variants that show a CKM trained on a CPU can be competitive with convnets trainedfor much longer on a GPU and can outperform them on tests of composition and symmetry, as wellas markedly improving over previous IBL methods.2 C OMPOSITIONAL KERNEL MACHINESThe key issue in using an instance-based learner on large images is the curse of dimensionality. Evenmillions of training images are not enough to construct a meaningful neighborhood for a 256256pixel image. The compositional kernel machine (CKM) addresses this issue by constructing an ex-ponential number of virtual instances . The core hypothesis is that a variation of the visual world canbe understood as a rearrangement of low-dimensional pieces that have been seen before. For exam-ple, an image of a house could be recognized by matching many pieces from other images of housesfrom different viewpoints. The virtual instances represent this set of all possible transformationsand recombinations of the training images. The arrangement of these pieces cannot be arbitrary, soCKMs learn how to compose virtual instances with weights on compositions. A major contributionof this work is the ability to efficiently sum over this set with a sum-product function.The set of virtual instances is related to the nonlinear image manifolds described by Simard et al.(1992) but with key differences. Whereas the tangent distance accounts for transformations appliedto the whole image, virtual instances can depict local transformations that are applied differentlyacross an image. Secondly, the tangent plane approximation of the image manifold is only accuratenear the training images. Virtual instances can easily represent distant transformations. Unlike theexplicit augmentation of virtual support vectors in Sch ̈olkopf et al. (1996), the set of virtual instancesin a CKM is implicit and exponentially larger. Platt & Allen (1996) demonstrated an early versionof virtual instances to expand the set of negative examples for a linear classifier.2.1 D EFINITIONWe define CKMs using notation common to other IBL techniques. The two prototypical instance-based learners are k-nearest neighbors and support vector machines. The foundation for both algo-rithms is a similarity or kernel function K(x;x0)between two instances. Given a training set of mlabeled instances of the form hxi;yiiand queryxq, thek-NN algorithm outputs the most commonlabel of theknearest instances:ykNN(xq) = arg maxcmXi=11c=yi^K(xi;xq)K(xk;xq)where 1[]equals one if its argument is true and zero otherwise, and xkis thekthnearest traininginstance to query xqassuming unique distances. 
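As a reference point for the instance-based learners that CKMs build on, here is a minimal sketch of the kernel-based k-NN rule quoted above. This is illustrative code, not the paper's implementation; the Gaussian kernel and the function names are stand-ins, and any similarity function K(x, x') could be plugged in.

```python
import numpy as np

def gaussian_kernel(x, x_prime, gamma=0.1):
    # A stand-in similarity K(x, x'); any kernel could be substituted.
    return np.exp(-gamma * np.sum((x - x_prime) ** 2))

def knn_predict(X_train, y_train, x_query, k=5, kernel=gaussian_kernel):
    # Score every training instance against the query, then return the most
    # common label among the k most similar (i.e. nearest) instances.
    sims = np.array([kernel(x, x_query) for x in X_train])
    top_k = np.argsort(-sims)[:k]
    labels, counts = np.unique(np.asarray(y_train)[top_k], return_counts=True)
    return labels[np.argmax(counts)]
```

The multiclass SVM decision rule discussed next (Equation 1) replaces the top-k vote with a per-class weighted sum of kernel values over all training instances.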
The multiclass support vector machine (Crammer& Singer, 2001) in its dual form can be seen as a weighted nearest neighbor that outputs the classwith the highest weighted sum of kernel values with the query:ySVM(xq) = arg maxcmXi=1i;cK(xi;xq) (1)wherei;cis the weight on training instance xithat contributes to the score of class c.The CKM performs the same classification as these instance-based methods but it sums over an ex-ponentially larger set of virtual instances to mitigate the curse of dimensionality. Virtual instancesare composed of rearranged elements from one or more training instances. Depending on the de-sign of the CKM, elements can be subsets of instance variables (e.g., overlapping pixel patches) orfeatures thereof (e.g., ORB features or a 2D grid of convnet feature vectors). We assume there is adeterministic procedure that processes each training or test instance xiinto a fixed tuple of indexedelementsExi= (ei;1; :::; e i;jExij), where instances may have different numbers of elements. Thequery instance xq(with tuple of elements Exq) is the example that is being classified by the CKM;it is a training instance during training and a test instance during testing. A virtual instance zisrepresented by a tuple of elements from training instances, e.g. Ez= (e10;5; e71;2; :::; e 46;17).Given a query instance xq, the CKM represents a set of virtual instances each with the same numberof elements as Exq. We define a leaf kernel KL(ei;j;ei0;j0)that measures the similarity between anytwo elements. Using kernel composition (Aronszajn, 1950), we define the kernel between the queryinstancexqand a virtual instance zas the product of leaf kernels over their corresponding elements:K(z;xq) =QjExqjjKL(ez;j;eq;j).2Under review as a conference paper at ICLR 2017We combine leaf kernels with weighted sums and products to compactly represent a sum over kernelswith an exponential number of virtual instances. Just as a sum-product network can compactly rep-resent a mixture model that is a weighted sum over an exponential number of mixture components,the same algebraic decomposition can compactly encode a weighted sum over an exponential num-ber of kernels. For example, if the query instance is represented by two elements Exq= (eq;1; eq;2)and the training set contains elements fe1; e2; e3; e4; e5; e6g, then[w1KL(eq;1;e1) +w2KL(eq;1;e2) +w3KL(eq;1;e3)][w4KL(eq;2;e4) +w5KL(eq;2;e5) +w6KL(eq;2;e6)]expresses a weighted sum over nine virtual instances using eleven additions/multiplications in-stead of twenty-six for an expanded flat sum w1KL(eq;1;e1)KL(eq;2;e4) +:::+w9KL(eq;1;e3)KL(eq;2;e6). If the query instance and training set contained 100 and 10000 elements, respectively,then a similar factorization would use O(106)operations compared to a na ̈ıve sum over 10500virtualinstances. Leveraging the Sum-Product Theorem (Friesen & Domingos, 2016), we define CKMs toallow for more expressive architectures with this exponential computational savings.Definition 1. A compositional kernel machine (CKM) is defined recursively.1. A leaf kernel over a query element and a training set element is a CKM.2. A product of CKMs with disjoint scopes is a CKM.3. A weighted sum of CKMs with the same scope is a CKM.The scope of an operator is the set of query elements it takes as inputs; it is analogous to the receptivefield of a unit in a neural network, but with CKMs the query elements are not restricted to beingpixels on the image grid (e.g., they may be defined as a set of extracted image features). 
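The factorization argument above is easy to check numerically. The following small sketch (illustrative code, not from the paper) compares the factored product-of-weighted-sums against an explicit enumeration of all nine virtual instances for a two-element query; the two quantities agree by distributivity, while the factored form needs far fewer operations as the number of elements grows.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

def leaf_kernel(a, b, gamma=1.0):
    # Similarity between two elements (e.g. local features or patches).
    return np.exp(-gamma * np.sum((a - b) ** 2))

# A query with 2 elements and 6 training-set elements, split into one pool
# (and one weight vector) per query element position.
query = rng.normal(size=(2, 4))
pools = [rng.normal(size=(3, 4)), rng.normal(size=(3, 4))]
weights = [rng.random(3), rng.random(3)]

# Factored form: product over query elements of weighted sums of leaf kernels.
factored = np.prod([
    sum(w * leaf_kernel(e, q) for w, e in zip(ws, pool))
    for q, pool, ws in zip(query, pools, weights)
])

# Flat form: explicit sum over all 3 x 3 = 9 virtual instances.
flat = 0.0
for i, j in product(range(3), range(3)):
    w = weights[0][i] * weights[1][j]
    flat += w * leaf_kernel(pools[0][i], query[0]) * leaf_kernel(pools[1][j], query[1])

assert np.isclose(factored, flat)   # identical values; the factored form scales far better
```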
A leafkernel has singleton scope, internal nodes have scope over some subset of the query elements, andthe root node of the CKM has full scope of all query elements Exq. This definition allows forrich CKM architectures with many layers to represent elaborate compositions. The value of eachsum node child is multiplied by a weight wk;cand optionally a constant cost function (ei;j;ei0;j0)that rewards certain compositions of elements. Analogous to a multiclass SVM, the CKM has aseparate set of weights for each class cin the dataset. The CKM classifies a query instance asyCKM(xq) = arg maxcSc(xq), whereSc(xq)is the value of the root node of the CKM evaluatingquery instance xqusing weights for class c.Definition 2 (Friesen & Domingos (2016)) .A product node is decomposable iff the scopes of itschildren are disjoint. An SPF is decomposable iff all of its product nodes are decomposable.Theorem 1 (Sum-Product Theorem, Friesen & Domingos (2016)) .Every decomposable SPF canbe summed over its domain in time linear in its size.Corollary 1. Sc(xq)can sum over the set of virtual instances in time linear in the size of the SPF .Proof. For each query instance element eq;jwe define a discrete variable Zjwith a state for eachtraining element ei0;j0found in a leaf kernel KL(eq;j;ei0;j0)in the CKM. The Cartesian product ofthe domains of the variables Zdefines the set of virtual instances represented by the CKM. Sc(xq)is a SPF over semiring (R;;;0;1), variablesZ, constant functions wand, and univariatefunctionsKL(eq;j;Zj). With the appropriate definition of leaf kernels, any semiring can be used.The definition above provides that the children of every product node have disjoint scopes. Constantfunctions have empty scope so there is no intersection with scopes of other children. With all productnodes decomposable, Sc(xq)is a decomposable SPF and can therefore sum over all states of Z, thevirtual instances, in time linear to the size of the CKM.Special cases of CKMs include multiclass SVMs (flat sum-of-products) and naive Bayes nearestneighbor (Boiman et al., 2008) (flat product-of-sums). A CKM can be seen as a generalization ofan image grammar (Fu, 1974) where terminal symbols corresponding to pieces of training imagesare scored with kernels and non-terminal symbols are sum nodes with a production for each childproduct node.The weights and cost functions of the CKM control the weights on the virtual instances. Eachvirtual instance represented by the CKM defines a tree that connects the root to the leaf kernelsover its unique composition of training set elements. If we were to expand the CKM into a flatsum (cf. Equation 1), the weight on a virtual instance would be the product of the weights and costfunctions along the branches of its corresponding tree. These weights are important as they canprevent implausible virtual instances. For example, if we use image patches as the elements andallow all compositions, the set of virtual instances would largely contain nonsense noise patterns. If3Under review as a conference paper at ICLR 2017the elements were pixels, the virtual instances could even contain arbitrary images from classes notpresent in the training set. There are many aspects of composition that can be encoded by the CKM.For example, we can penalize virtual instances that compose training set elements using differentsymmetry group transformations. We could also penalize compositions that juxtapose elements thatdisagree on the contents of their borders. 
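To make the linear-time evaluation in Corollary 1 concrete, here is a minimal sketch of a recursive evaluator for a decomposable SPF, parameterized by the semiring so that the same structure either sums over all virtual instances (sum-product) or scores only the best one (max-sum, the choice used later in the experiments). The class and function names are hypothetical, not the paper's implementation, and decomposability is assumed rather than checked; a fuller version would track scopes and verify that product children are disjoint.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Leaf:
    value: float     # precomputed leaf kernel KL(query element, training element)

@dataclass
class Product:
    children: list   # children with disjoint scopes (decomposability assumed)

@dataclass
class Sum:
    children: list   # list of (weight, child) pairs over the same scope

def evaluate(node, add: Callable, mul: Callable, identity: float):
    """Evaluate a decomposable SPF bottom-up; cost is linear in the number of nodes."""
    if isinstance(node, Leaf):
        return node.value
    if isinstance(node, Product):
        out = identity
        for child in node.children:
            out = mul(out, evaluate(child, add, mul, identity))
        return out
    # Sum node: combine weighted children with the semiring's addition.
    vals = [mul(w, evaluate(child, add, mul, identity)) for w, child in node.children]
    out = vals[0]
    for v in vals[1:]:
        out = add(out, v)
    return out

spf = Product([Sum([(0.5, Leaf(0.9)), (0.5, Leaf(0.1))]),
               Sum([(1.0, Leaf(0.8))])])
# Sum-product semiring: weighted sum over all virtual instances.
total = evaluate(spf, add=lambda a, b: a + b, mul=lambda a, b: a * b, identity=1.0)
# Max-sum semiring: score of the single best virtual instance instead.
best = evaluate(spf, add=max, mul=lambda a, b: a + b, identity=0.0)
```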
Weights can be learned to establish clusters of elements andreward certain arrangements. In Section 3 we demonstrate one choice of weights and cost functionsin a CKM architecture built from extracted image features.2.2 L EARNINGThe training procedure for a CKM builds an SPF that encodes the virtual instances. There are thentwo options for how to set weights in the model. As with k-NN, the weights in the CKM could be setto uniform. Alternatively, as with SVMs, the weights could be optimized to improve generalizationand reduce model size.For weight learning, we use block-coordinate gradient descent to optimize leave-one-out loss overthe training set. The leave-one-out loss on a training instance xiis the loss on that instance made bythe learner trained on all data except xi. Though it is an almost unbiased estimate of generalizationerror (Luntz & Brailovsky, 1969), it is typically too expensive to compute or optimize with non-IBLmethods (Chapelle et al., 2002). With CKMs, caching the SPFs and efficient data structures makeit feasible to compute exact partial derivatives of the leave-one-out loss over the whole training set.We use a multiclass squared-hinge lossL(xi;yi) = max2641 +Sy0(xi)|{z}Best incorrect classSyi(xi)|{z}True class;03752for the loss on training instance xiwith true label yiand highest-scoring incorrect class y0. Weuse the squared version of the hinge loss as it performs better empirically and prioritizes updatesto element weights that led to larger margin violations. In general, this objective is not convex asit involves the difference of the two discriminant functions which are strictly convex (due to thechoice of semiring and the product of weights on each virtual instance). In the special case of thesum-product semiring and unique weights on virtual instances the objective is convex as is true forL2-SVMs. Convnets also have a non-convex objective, but they require lengthy optimization toperform well. As we show in Section 3, CKMs can achieve high accuracy with uniform weights,which further serves as good initialization for gradient descent.For each epoch, we iterate through the training set, for each training instance xioptimizing the blockof weights on those branches with Exias descendants. We take gradient steps to lower the leave-one-out loss over the rest of the training setPi02([1;m]ni)L(xi0;yi0). We iterate until convergence oran early stopping condition. A component of the gradient of the squared-hinge loss on an instancetakes the form@@wk;cL(xi;yi) =8><>:2(xi;yi)@Sy0(xi)@wk;cif(xi;yi)>0^c=y02(xi;yi)@Syi(xi)@wk;cif(xi;yi)>0^c=yi0 otherwisewhere (xi;yi) = 1 +Sy0(xi)Syi(xi). We compute partial derivatives@Sc(xi)@wk;cwith backprop-agation through the SPF. For efficiency, terms of the gradient can be set to zero and excluded frombackpropagation if the values of corresponding leaf kernels are small enough. This is either exact(e.g., ifis maximization) or an approximation (e.g., if is normal addition).2.3 S CALABILITYCKMs have several scalability advantages over convnets. As mentioned previously, they do notrequire a lengthy training procedure. This makes it much easier to add new instances and categories.Whereas most of the computation to evaluate a single setting of convnet hyperparameters is sunk intraining, CKMs can efficiently race hyperparameters on hold-out data (Lee & Moore, 1994).The evaluation of the CKM depends on the structure of the SPF, the size of the training set, andthe computer architecture. 
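A sketch of the multiclass squared hinge loss and the gradient cases above, written for a single weight index shared across classes. The per-class partial derivatives dS_c/dw would come from backpropagation through the SPF and are taken as given inputs here; this is illustrative code, not the paper's implementation.

```python
import numpy as np

def squared_hinge_and_grad(scores, dscores_dw, y_true):
    """scores[c] = S_c(x_i); dscores_dw[c] = dS_c/dw for one weight; y_true = true label."""
    wrong = [c for c in range(len(scores)) if c != y_true]
    y_hat = max(wrong, key=lambda c: scores[c])       # highest-scoring incorrect class
    delta = 1.0 + scores[y_hat] - scores[y_true]      # margin violation Delta(x_i, y_i)
    grad = np.zeros(len(scores))                      # dL/dw, one entry per class weight
    if delta <= 0:                                    # margin satisfied: zero loss, zero gradient
        return 0.0, grad
    grad[y_hat] = 2.0 * delta * dscores_dw[y_hat]
    grad[y_true] = -2.0 * delta * dscores_dw[y_true]
    return delta ** 2, grad
```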
A basic building block of these SPFs is a sum node with a numberof children on the order of magnitude of the training set elements jEj. On a sufficiently parallel4Under review as a conference paper at ICLR 2017Table 1: Dataset propertiesName #Training Exs. - #Testing Exs. Dimensions ClassesSmall NORB 24300-24300 9696 5NORB Compositions 100-1000 256256 2NORB Symmetries f50;100;:::; 12800g-2916 108108 6computer, assuming the size of the training set elements greatly exceeds the dimensionality of theleaf kernel, this sum node will require O(log(jEj))time (the depth of a parallel reduction circuit)andO(jEj)space. Duda et al. (2000) describe a constant time nearest neighbor circuit that relies onprecomputed V oronoi partitions, but this has impractical space requirements in high dimensions. Aswith SVMs, optimization of sparse element weights can greatly reduce model size.On a modest multicore computer, we must resort to using specialized data structures. Hash codescan be used to index raw features or to measure Hamming distance as a proxy to more expensivedistance functions. While they are perhaps the fastest method to accelerate a nearest neighbor search,the most accurate hashing methods involve a training period yet do not necessarily result in highrecall (Torralba et al., 2008; Heo et al., 2012). There are many space-partitioning data structuretrees in the literature, however in practice none are able to offer exact search of nearest neighbors inhigh dimensions in logarithmic time. In our experiments we use hierarchical k-means trees (Muja& Lowe, 2009), which are a good compromise between speed and accuracy.3 E XPERIMENTSWe test CKMs on three image classification scenarios that feature images from either the smallNORB dataset or the NORB jittered-cluttered dataset (LeCun et al., 2004). Both NORB datasetscontain greyscale images of five categories of plastic toys photographed with varied altitudes, az-imuths, and lighting conditions. Table 1 summarizes the datasets. We first describe the SPN archi-tecture and then detail each of the three scenarios.3.1 E XPERIMENTAL ARCHITECTUREIn our experiments the architecture of the SPF Sc(xq)for each query image is based on its uniqueset of extracted ORB features. Like SIFT features, ORB features are rotation-invariant and producea descriptor from intensity differences, but ORB is much faster to compute and thus suitable for realtime applications (Rublee et al., 2011). The elements Exi= (ei;1;:::;e i;jEij)of each image xiareits extracted keypoints, where an element’s feature vector and image position are denoted by ~f(ei;j)and~ p(ei;j)respectively. We use the max-sum semiring ( = max ,= + ) because it is morerobust to noisy virtual instances, yields sparser gradients, is more efficient to compute, and performsbetter empirically compared with the sum-product semiring.The SPFSc(xq)maximizes over variables Z= (Z1;:::;Z jExqj)corresponding to query elementsExqwith states for all possible virtual instances. The SPF contains a unary scope max node forevery variablefZjgthat maximizes over the weighted kernels of all possible training elements E:(Zj) =Lzj2Ewzj;cKL(zj;eq;j). The SPF contains a binary scope max node for all pairsof variablesfZj;Zj0gfor which at least one corresponding query element is within the k-nearestspatial neighbors of the other. 
These nodes maximize over the weighted kernels of all possiblecombinations of training set elements.(Zj;Zj0) =Mzj2EMzj02Ewzj;cwzj0;c(zj;zj0)KL(zj;eq;j)KL(zj0;eq;j0) (2)This maximizes over all possible pairs of training set elements, weighting the two leaf kernelsby two corresponding element weights and a cost function. We use a leaf kernel for image ele-ments that incorporates both the Hamming distance between their features and the Euclidean dis-tance between their image positions: KL(ei;j;ei0;j0) = max(01dHam(~f(ei;j);~f(ei0;j0));0) +max(2jj(~ p(ei;j);~ p(ei0;j0)jj;3). This rewards training set elements that look like a query instanceelement and appear in a similar location, with thresholds for efficiency. This can represent, for ex-ample, the photographic bias to center foreground objects or a discriminative cue from seeing skyat the top of the image. We use the pairwise cost function (ei;j;ei0;j0) =1[i=i0]4that rewardscombinations of elements from the same source training image. This captures the intuition that5Under review as a conference paper at ICLR 2017compositions sourced from more images are less coherent and more likely to contain nonsense thanthose using fewer. The image is represented as a sum of these unary and binary max nodes. Thescopes of children of the sum are restricted to be disjoint, so the children f(Z1;Z2);(Z2;Z3)gwould be disallowed, for example. This restriction is what allows the SPF to be tractable, and withmultiple sums the SPF has high-treewidth. By comparison, a Markov random field expressing thesedependencies would be intractable. The root max node of the SPF has Psums as children, each ofwhich has its random set of unary and binary scope max node children that cover full scope Z. Weillustrate a simplified version of the SPF architecture in Figure 1. Though this SPF models limitedimage structure, the definition of CKMs allows for more expressive architectures as with SPNs.++query imageKLKL...KL...e1,1e1,2em,|Em|KLKL...KL...e1,1e1,2em,|Em|KLKL...KL...e1,1e1,2em,|Em|KLKL...KL...e1,1e1,2em,|Em|eq,1eq,2eq,3eq,4++++...w1,1wm,|Em|w1,2++++...w1,1wm,|Em|w1,2++++...w1,1w1,1wm,|Em|w1,2w1,1wm,|Em|++++...++++...+...+{Z1}{Z2,Z3}{Z4}Z={Z1,Z2,Z3,Z4}Figure 1: Simplified illustration of the SPF Sc(xq)architecture with max-sum semiring used inexperiments (using ORB features as elements, jExqj100). Red dots depict elements Exqof queryinstancexq. Blue dots show training set elements ei;j2E, duplicated with each query element forclarity. A boxed KLshows the leaf kernel with lines descending to its two element arguments. Thenodes are labeled with their scopes. Weights and cost functions (arguments omitted) appear nexttonodes. Only a subset of the unary and binary scope nodes are drawn. Only two of the Ptop-levelnodes are fully detailed (the children of the second are drawn faded).In the following sections, we refer to two variants CKM andCKM W. The CKM version usesuniform weights wk;c, similar to the basic k-nearest neighbor algorithm. The CKM Wmethod opti-mizes weights wk;cas described in Section 2.2. Both versions restrict weights for class cto be1(identity) for those training elements not in class c. 
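The leaf kernel and pairwise cost just described can be sketched as follows. This is an illustrative reading of the (OCR-garbled) formula, not the authors' code: a clipped linear function of the Hamming distance between binary ORB descriptors plus a thresholded term in the Euclidean distance between keypoint positions, with the constants standing in for hyperparameters that the paper tunes by grid search. Under the max-sum semiring used in the experiments, the weights, the cost function, and the two leaf kernels of a binary-scope node combine by addition, and the node takes the maximum of this score over all candidate pairs of training elements (Equation 2).

```python
import numpy as np

def leaf_kernel(feat_a, feat_b, pos_a, pos_b, s0=1.0, s1=0.02, s2=1.0, s3=0.0):
    # Feature term: large when the binary ORB descriptors agree, clipped at zero.
    d_ham = int(np.count_nonzero(np.asarray(feat_a) != np.asarray(feat_b)))
    feat_term = max(s0 - s1 * d_ham, 0.0)
    # Position term: rewards matches at similar image locations, with a constant floor.
    d_pos = float(np.linalg.norm(np.asarray(pos_a, float) - np.asarray(pos_b, float)))
    pos_term = max(s2 - d_pos, s3)
    return feat_term + pos_term

def binary_node_score(w_a, w_b, k_a, k_b, same_source_image, bonus=0.5):
    # Max-sum combination for one candidate pair of training elements:
    # two element weights + cost-function bonus (same source image) + two leaf kernels.
    return w_a + w_b + (bonus if same_source_image else 0.0) + k_a + k_b
```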
This constraint ensures that method CKM isdiscriminative (as is true with k-NN) and reduces the number of parameters optimized by CKM W.The hyperparameters of ORB feature extraction, leaf kernels, cost function, and optimization werechosen using grid search on a validation set.With our CPU implementation, CKM trains in a single pass of feature extraction and storageat5ms/image, CKM Wtrains in under ten epochs at 90ms/image, and both versions test at80ms/image. The GPU-optimized convnets train at 2ms/image for many epochs and test at1ms/image. Remarkably, CKM on a CPU trains faster than the convnet on a GPU.3.2 S MALL NORBWe use the original train-test separation which measures generalization to new instances of a cate-gory (i.e. tested on toy truck that is different from the toys it was trained on). We show promisingresults in Table 2 comparing CKMs to deep and IBL methods. With improvement over k-NN andSVM, the CKM andCKM Wresults show the benefit of using virtual instances to combat the curseof dimensionality. We note that the CKM variant that does not optimize weights performs nearlyas well as the CKM Wversion that does. Since the test set uses a different set of toys, the use ofuntrained ORB features hurts the performance of the CKM. Convnets have an advantage here be-cause they discriminatively train their lowest level of features and represent richer image structure intheir architecture. To become competitive, future work should improve upon this preliminary CKM6Under review as a conference paper at ICLR 2017Table 2: Accuracy on Small NORBMethod AccuracyConvnet (14 epochs) (Bengio & LeCun, 2007) 94:0%DBM with aug. training (Salakhutdinov & Hinton, 2009) 92:8%CKM W 89:8%Convnet (2 epochs) (Bengio & LeCun, 2007) 89:6%DBM (Salakhutdinov & Hinton, 2009) 89:2%SVM (Gaussian kernel) (Bengio & LeCun, 2007) 88:4%CKM 88:3%k-NN (LeCun et al., 2004) 81:6%Logistic regression (LeCun et al., 2004) 77:5%Table 3: Accuracy on NORB CompositionsMethod Accuracy Train+Test (min)CKM 82:4% 1.5 [CPU]SVM with convnet features 75:0% 1 [GPU+CPU]Convnet 50:6% 9 [GPU]k-NN on image pixels 51:2% 0.2 [CPU]architecture. We demonstrate the advantage of CKMs for representing composition and symmetryin the following experiments.3.3 NORB C OMPOSITIONSA general goal of representation learning is to disentangle the factors of variation of a signal withouthaving to see those factors in all combinations. To evaluate progress towards this, we created imagescontaining three toys each, sourced from the small NORB training set. Small NORB contains tentypes of each toy category (e.g., ten different airplanes), which we divided into two collections. Eachimage is generated by choosing one of the collections uniformly and for each of three categories(person, airplane, animal) randomly sampling a toy from that collection with higher probability(P=56) than from the other collection (i.e., there are two children with disjoint toy collectionsbut they sometimes borrow). The task is to determine which of the two collections generated theimage. This dataset measures whether a method can distinguish different compositions withouthaving seen all possible permutations of those objects through symmetries and noisy intra-classvariation. Analogous tasks include identifying people by their clothing, recognizing social groupsby their members, and classifying cuisines by their ingredients.We compare CKMs to other methods in Table 3. Convnets and their features are computed using theTensorFlow library (Abadi et al., 2015). 
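For reference, the two-collection sampling procedure for NORB Compositions described above can be sketched as follows (a hypothetical helper, not the authors' generation script): each image picks a source collection uniformly, then for each of the three categories draws a toy from that collection with probability 5/6 and from the other collection otherwise.

```python
import random

def sample_composition(collections, categories=("person", "airplane", "animal"), p_own=5/6):
    """collections: dict mapping collection id (0 or 1) -> {category: [toy ids]}."""
    label = random.randrange(2)                      # which collection generated the image
    toys = []
    for cat in categories:
        source = label if random.random() < p_own else 1 - label
        toys.append(random.choice(collections[source][cat]))
    return toys, label                               # toy identities to render, plus the class
```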
Training convnets from few images is very difficult withoutresorting to other datasets; we augment the training set with random crops, which still yields testaccuracy near chance. In such situations it is common to train an SVM with features extracted bya convnet trained on a different, larger dataset. We use 2048-dimensional features extracted fromthe penultimate layer of the pre-trained Inception network (Szegedy et al., 2015) and a linear kernelSVM with squared-hinge loss (Pedregosa et al., 2011). Notably, the CKM is much more accuratethan the deep methods, and it is about as fast as the SVM despite not taking advantage of the GPU.Figure 2: Images from NORB Compositions3.4 NORB S YMMETRIESComposition is a useful tool for modeling the symmetries of objects. When we see an image of anobject in a new pose, parts of the image may look similar to parts of images of the object in poses wehave seen before. In this experiment, we partition the training set of NORB jittered-cluttered into a7Under review as a conference paper at ICLR 2017new dataset with 10% withheld for each of validation and testing. Training and testing on the samegroup of toy instances, this measures the ability to generalize to new angles, lighting conditions,backgrounds, and distortions.We vary the amount of training data to plot learning curves in Figure 3. We observe that CKMs arebetter able to generalize to these distortions than other methods, especially with less data. Impor-tantly, the performance of CKM improves with more data, without requiring costly optimization asdata is added. We note that the benefit of CKM Wusing weight learning becomes apparent with 200training instances. This learning curve suggests that CKMs would be well suited for applications incluttered environments with many 3D transformations (e.g., loop closure).50 200 800 3200 12800Training Instances 0% 25% 50% 75%100%AccuracyCKMwCKMSVM with convnet featuresConvnetk-NNFigure 3: Number of training instances versus accuracy on unseen symmetries in NORB4 C ONCLUSIONThis paper proposed compositional kernel machines, an instance-based method for object recog-nition that addresses some of the weaknesses of deep architectures and other kernel methods. Weshowed how using a sum-product function to represent a discriminant function leads to tractablesummation over the weighted kernels to an exponential set of virtual instances, which can mitigatethe curse of dimensionality and improve sample complexity. We proposed a method to discrimina-tively learn weights on individual instance elements and showed that this improves upon uniformweighting. Finally, we presented results in several scenarios showing that CKMs are a significantimprovement for IBL and show promise compared with deep methods.Future research directions include developing other architectures and learning procedures for CKMs,integrating symmetry transformations into the architecture through kernels and cost functions, andapplying CKMs to structured prediction, regression, and reinforcement learning problems. CKMsexhibit a reversed trade-off of fast learning speed and large model size compared to neural networks.Given that animals can benefit from both trade-offs, these results may inspire computational theoriesof different brain structures, especially the neocortex versus the cerebellum (Ito, 2012).ACKNOWLEDGMENTSThe authors are grateful to John Platt for helpful discussions and feedback. 
This research was partlysupported by ONR grant N00014-16-1-2697, AFRL contract FA8750-13-2-0019, a Google PhDFellowship, an AWS in Education Grant, and an NVIDIA academic hardware grant. The views andconclusions contained in this document are those of the authors and should not be interpreted asnecessarily representing the official policies, either expressed or implied, of ONR, AFRL, or theUnited States Government.8Under review as a conference paper at ICLR 2017REFERENCESMart ́ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S.Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, AndrewHarp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, ManjunathKudlur, Josh Levenberg, Dan Man ́e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah,Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vin-cent Vanhoucke, Vijay Vasudevan, Fernanda Vi ́egas, Oriol Vinyals, Pete Warden, Martin Watten-berg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learningon heterogeneous systems, 2015. URL http://tensorflow.org/ . Software available fromtensorflow.org.Nachman Aronszajn. Theory of reproducing kernels. Transactions of the American MathematicalSociety , 68(3):337–404, 1950.Yoshua Bengio and Yann LeCun. Scaling learning algorithms towards AI. Large-Scale KernelMachines , 34(5), 2007.Oren Boiman, Eli Shechtman, and Michal Irani. In defense of nearest-neighbor based image clas-sification. In Computer Vision and Pattern Recognition (CVPR), IEEE Conference on , pp. 1992–1999. IEEE, 2008.Olivier Chapelle, Vladimir Vapnik, Olivier Bousquet, and Sayan Mukherjee. Choosing multipleparameters for support vector machines. Machine Learning , 46(1-3):131–159, 2002.Koby Crammer and Yoram Singer. On the algorithmic implementation of multiclass kernel-basedvector machines. Journal of Machine Learning Research , 2(Dec):265–292, 2001.Richard O Duda, Peter E Hart, and David G Stork. Pattern Classification . John Wiley & Sons, 2000.Abram L Friesen and Pedro Domingos. The sum-product theorem: A foundation for learningtractable models. In Proceedings of the 33rd International Conference on Machine Learning ,2016.King Sun Fu. Syntactic Methods in Pattern Recognition , volume 112. Elsevier, 1974.Jae-Pil Heo, Youngwoon Lee, Junfeng He, Shih-Fu Chang, and Sung-Eui Yoon. Spherical hashing.InComputer Vision and Pattern Recognition (CVPR), IEEE Conference on , pp. 2957–2964. IEEE,2012.Masao Ito. The Cerebellum: Brain for an Implicit Self . FT press, 2012.Yann LeCun, Fu Jie Huang, and L ́eon Bottou. Learning methods for generic object recognitionwith invariance to pose and lighting. In Computer Vision and Pattern Recognition (CVPR), IEEEConference on , volume 2, pp. 97–104. IEEE, 2004.Mary S Lee and AW Moore. Efficient algorithms for minimizing cross validation error. In Pro-ceedings of the 8th International Conference on Machine Learning , pp. 190. Morgan Kaufmann,1994.Aleksandr Luntz and Viktor Brailovsky. On estimation of characters obtained in statistical procedureof recognition. Technicheskaya Kibernetica , 3(6):6–12, 1969.Marius Muja and David G Lowe. Fast approximate nearest neighbors with automatic algorithm con-figuration. In International Conference on Computer Vision Theory and Application (VISSAPP) ,pp. 
331–340, 2009.Fabian Pedregosa, Ga ̈el Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, OlivierGrisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincnet Dubourg, Jake Vanderplas,Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and ́Edouard Duch-esnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research , 12:2825–2830, 2011.9Under review as a conference paper at ICLR 2017John C Platt and Timothy P Allen. A neural network classifier for the I1000 OCR chip. In Advancesin Neural Information Processing Systems 9 , pp. 938–944, 1996.Ethan Rublee, Vincent Rabaud, Kurt Konolige, and Gary Bradski. ORB: An efficient alternative toSIFT or SURF. In 2011 International Conference on Computer Vision , pp. 2564–2571. IEEE,2011.Ruslan Salakhutdinov and Geoffrey E Hinton. Deep Boltzmann machines. In Proceedings of the12th Conference on Artificial Intelligence and Statistics (AISTATS) , pp. 448–455. Society forArtificial Intelligence and Statistics, 2009.Bernhard Sch ̈olkopf, Chris Burges, and Vladimir Vapnik. Incorporating invariances in support vec-tor learning machines. In Artificial Neural Networks (ICANN) , pp. 47–52. Springer, 1996.Patrice Simard, Yann LeCun, and John S Denker. Efficient pattern recognition using a new transfor-mation distance. In Advances in Neural Information Processing Systems 5 , 1992.Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Re-thinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567 , 2015.Antonio Torralba, Rob Fergus, and Yair Weiss. Small codes and large image databases for recogni-tion. In Computer Vision and Pattern Recognition (CVPR), IEEE Conference on , pp. 2269–2276.IEEE, 2008.10
rJ467DgEx
S1Bm3T_lg
ICLR.cc/2017/conference/-/paper65/official/review
{"title": "interesting idea, but too preliminary", "rating": "5: Marginally below acceptance threshold", "review": "This paper proposes a new learning framework called \"compositional kernel machines\" (CKM). It combines two ideas: kernel methods and sum-product network (SPN). CKM first defines leaf kernels on elements of the query and training examples, then it defines kernel recursively (similar to sum-product network). This paper has shown that the evaluation CKM can be done efficiently using the same tricks in SPN.\n\nPositive: I think the idea in this paper is interesting. Instance-based learning methods (such as SVM with kernels) have been successful in the past, but have been replaced by deep learning methods (e.g. convnet) in the past few years. This paper investigate an unexplored area of how to combine the ideas from kernel methods and deep networks (SPN in this case). \n\nNegative: Although the idea of this paper is interesting, this paper is clearly very preliminary. In its current form, I simply do not see any advantage of the proposed framework over convnet. I will elaborate below.\n\n1) One of the most important claims of this paper is that CKM is faster to learn than convnet. I am not clear why that is the case. Both CKM and convnet use gradient descent during learning, why would CKM be faster?\n\nAlso during inference, the running time of convnet only depends on its network structure. But for CKM, in addition to the network structure, it also depends on the size of training set. From this perspective, it does not seem CKM is very scalable when the training size is big. That is probably why this paper has to use all kinds of specialized data structures and tricks (even on a fairly simple dataset like NORB)\n\n2) I am having a hard time understanding what the leaf kernel is capturing. For example, if the \"elements\" correspond to raw pixel intensities, a leaf kernel essentially compares the intensity value of a pixel in the query image with that in a training image. But in this case, wouldn't you end up comparing a lot of background pixels across these two images (which does not help with recognition)?\n\nI think it probably helps to explain Sec 3.1 a bit better. In its current form, this part is very dense and hard to understand.\n\n3) It is also not entirely clear to me how you would design the architecture of the sum-product function. The example is Sec 3.1 seems to be fairly arbitrary.\n\n4) The experiment section is probably the weakest part. NORB is a very small and toy-ish dataset by today's standard. Even on this small dataset, the proposed method is only slighly better than SVM (it is not clear whether \"SVM\" in Table 2 is linear SVM or kernel SVM. If it is linear SVM, I suspect the performance of \"SVM\" will be even higher when you use kernel SVM), and far worse than convnet. The proposed method only shows improvement over convnet on synthetic datasets (NORB compositions, NORM symmetries)\n\nOverall, I think this paper has some interesting ideas. But in its current form, it is a bit too preliminary and more work is needed to show its advantage. Having said that, I acknowledge that in the machine learning history, many important ideas seem pre-mature when they were first proposed, and it took time for these ideas to develop. \n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Compositional Kernel Machines
["Robert Gens", "Pedro Domingos"]
Convolutional neural networks (convnets) have achieved impressive results on recent computer vision benchmarks. While they benefit from multiple layers that encode nonlinear decision boundaries and a degree of translation invariance, training convnets is a lengthy procedure fraught with local optima. Alternatively, a kernel method that incorporates the compositionality and symmetry of convnets could learn similar nonlinear concepts yet with easier training and architecture selection. We propose compositional kernel machines (CKMs), which effectively create an exponential number of virtual training instances by composing transformed sub-regions of the original ones. Despite this, CKM discriminant functions can be computed efficiently using ideas from sum-product networks. The ability to compose virtual instances in this way gives CKMs invariance to translations and other symmetries, and combats the curse of dimensionality. Just as support vector machines (SVMs) provided a compelling alternative to multilayer perceptrons when they were introduced, CKMs could become an attractive approach for object recognition and other vision problems. In this paper we define CKMs, explore their properties, and present promising results on NORB datasets. Experiments show that CKMs can outperform SVMs and be competitive with convnets in a number of dimensions, by learning symmetries and compositional concepts from fewer samples without data augmentation.
["Computer vision", "Supervised Learning"]
https://openreview.net/forum?id=S1Bm3T_lg
https://openreview.net/pdf?id=S1Bm3T_lg
https://openreview.net/forum?id=S1Bm3T_lg&noteId=rJ467DgEx
Under review as a conference paper at ICLR 2017COMPOSITIONAL KERNEL MACHINESRobert Gens & Pedro DomingosDepartment of Computer Science & EngineeringUniversity of WashingtonSeattle, WA 98195, USAfrcg,pedrodg@cs.washington.eduABSTRACTConvolutional neural networks (convnets) have achieved impressive results on re-cent computer vision benchmarks. While they benefit from multiple layers that en-code nonlinear decision boundaries and a degree of translation invariance, trainingconvnets is a lengthy procedure fraught with local optima. Alternatively, a kernelmethod that incorporates the compositionality and symmetry of convnets couldlearn similar nonlinear concepts yet with easier training and architecture selec-tion. We propose compositional kernel machines (CKMs), which effectively cre-ate an exponential number of virtual training instances by composing transformedsub-regions of the original ones. Despite this, CKM discriminant functions canbe computed efficiently using ideas from sum-product networks. The ability tocompose virtual instances in this way gives CKMs invariance to translations andother symmetries, and combats the curse of dimensionality. Just as support vec-tor machines (SVMs) provided a compelling alternative to multilayer perceptronswhen they were introduced, CKMs could become an attractive approach for objectrecognition and other vision problems. In this paper we define CKMs, exploretheir properties, and present promising results on NORB datasets. Experimentsshow that CKMs can outperform SVMs and be competitive with convnets in anumber of dimensions, by learning symmetries and compositional concepts fromfewer samples without data augmentation.1 I NTRODUCTIONThe depth of state-of-the-art convnets is a double-edged sword: it yields both nonlinearity for so-phisticated discrimination and nonconvexity for frustrating optimization. The established trainingprocedure for ILSVRC classification cycles through the million-image training set more than fiftytimes, requiring substantial stochasticity, data augmentation, and hand-tuned learning rates. On to-day’s consumer hardware, the process takes several days. However, performance depends heavilyon hyperparameters, which include the number and connections of neurons as well as optimizationdetails. Unfortunately, the space of hyperparameters is unbounded, and each configuration of hyper-parameters requires the aforementioned training procedure. It is no surprise that large organizationswith enough computational power to conduct this search dominate this task.Yet mastery of object recognition on a static dataset is not enough to propel robotics and internet-scale applications with ever-growing instances and categories. Each time the training set is modified,the convnet must be retrained (“fine-tuned”) for optimum performance. If the training set growslinearly with time, the total training computation grows quadratically.We propose the Compositional Kernel Machine (CKM), a kernel-based visual classifier that has thesymmetry and compositionality of convnets but with the training benefits of instance-based learning(IBL). CKMs branch from the original instance-based methods with virtual instances , an exponen-tial set of plausible compositions of training instances. 
The first steps in this direction are promisingcompared to IBL and deep methods, and future work will benefit from over fifty years of researchinto nearest neighbor algorithms, kernel methods, and neural networks.In this paper we first define CKMs, explore their formal and computational properties, and comparethem to existing kernel methods. We then propose a key contribution of this work: a sum-productfunction (SPF) that efficiently sums over an exponential number of virtual instances. We then de-1Under review as a conference paper at ICLR 2017scribe how to train the CKM with and without parameter optimization. Finally, we present resultson NORB and variants that show a CKM trained on a CPU can be competitive with convnets trainedfor much longer on a GPU and can outperform them on tests of composition and symmetry, as wellas markedly improving over previous IBL methods.2 C OMPOSITIONAL KERNEL MACHINESThe key issue in using an instance-based learner on large images is the curse of dimensionality. Evenmillions of training images are not enough to construct a meaningful neighborhood for a 256256pixel image. The compositional kernel machine (CKM) addresses this issue by constructing an ex-ponential number of virtual instances . The core hypothesis is that a variation of the visual world canbe understood as a rearrangement of low-dimensional pieces that have been seen before. For exam-ple, an image of a house could be recognized by matching many pieces from other images of housesfrom different viewpoints. The virtual instances represent this set of all possible transformationsand recombinations of the training images. The arrangement of these pieces cannot be arbitrary, soCKMs learn how to compose virtual instances with weights on compositions. A major contributionof this work is the ability to efficiently sum over this set with a sum-product function.The set of virtual instances is related to the nonlinear image manifolds described by Simard et al.(1992) but with key differences. Whereas the tangent distance accounts for transformations appliedto the whole image, virtual instances can depict local transformations that are applied differentlyacross an image. Secondly, the tangent plane approximation of the image manifold is only accuratenear the training images. Virtual instances can easily represent distant transformations. Unlike theexplicit augmentation of virtual support vectors in Sch ̈olkopf et al. (1996), the set of virtual instancesin a CKM is implicit and exponentially larger. Platt & Allen (1996) demonstrated an early versionof virtual instances to expand the set of negative examples for a linear classifier.2.1 D EFINITIONWe define CKMs using notation common to other IBL techniques. The two prototypical instance-based learners are k-nearest neighbors and support vector machines. The foundation for both algo-rithms is a similarity or kernel function K(x;x0)between two instances. Given a training set of mlabeled instances of the form hxi;yiiand queryxq, thek-NN algorithm outputs the most commonlabel of theknearest instances:ykNN(xq) = arg maxcmXi=11c=yi^K(xi;xq)K(xk;xq)where 1[]equals one if its argument is true and zero otherwise, and xkis thekthnearest traininginstance to query xqassuming unique distances. 
The multiclass support vector machine (Crammer& Singer, 2001) in its dual form can be seen as a weighted nearest neighbor that outputs the classwith the highest weighted sum of kernel values with the query:ySVM(xq) = arg maxcmXi=1i;cK(xi;xq) (1)wherei;cis the weight on training instance xithat contributes to the score of class c.The CKM performs the same classification as these instance-based methods but it sums over an ex-ponentially larger set of virtual instances to mitigate the curse of dimensionality. Virtual instancesare composed of rearranged elements from one or more training instances. Depending on the de-sign of the CKM, elements can be subsets of instance variables (e.g., overlapping pixel patches) orfeatures thereof (e.g., ORB features or a 2D grid of convnet feature vectors). We assume there is adeterministic procedure that processes each training or test instance xiinto a fixed tuple of indexedelementsExi= (ei;1; :::; e i;jExij), where instances may have different numbers of elements. Thequery instance xq(with tuple of elements Exq) is the example that is being classified by the CKM;it is a training instance during training and a test instance during testing. A virtual instance zisrepresented by a tuple of elements from training instances, e.g. Ez= (e10;5; e71;2; :::; e 46;17).Given a query instance xq, the CKM represents a set of virtual instances each with the same numberof elements as Exq. We define a leaf kernel KL(ei;j;ei0;j0)that measures the similarity between anytwo elements. Using kernel composition (Aronszajn, 1950), we define the kernel between the queryinstancexqand a virtual instance zas the product of leaf kernels over their corresponding elements:K(z;xq) =QjExqjjKL(ez;j;eq;j).2Under review as a conference paper at ICLR 2017We combine leaf kernels with weighted sums and products to compactly represent a sum over kernelswith an exponential number of virtual instances. Just as a sum-product network can compactly rep-resent a mixture model that is a weighted sum over an exponential number of mixture components,the same algebraic decomposition can compactly encode a weighted sum over an exponential num-ber of kernels. For example, if the query instance is represented by two elements Exq= (eq;1; eq;2)and the training set contains elements fe1; e2; e3; e4; e5; e6g, then[w1KL(eq;1;e1) +w2KL(eq;1;e2) +w3KL(eq;1;e3)][w4KL(eq;2;e4) +w5KL(eq;2;e5) +w6KL(eq;2;e6)]expresses a weighted sum over nine virtual instances using eleven additions/multiplications in-stead of twenty-six for an expanded flat sum w1KL(eq;1;e1)KL(eq;2;e4) +:::+w9KL(eq;1;e3)KL(eq;2;e6). If the query instance and training set contained 100 and 10000 elements, respectively,then a similar factorization would use O(106)operations compared to a na ̈ıve sum over 10500virtualinstances. Leveraging the Sum-Product Theorem (Friesen & Domingos, 2016), we define CKMs toallow for more expressive architectures with this exponential computational savings.Definition 1. A compositional kernel machine (CKM) is defined recursively.1. A leaf kernel over a query element and a training set element is a CKM.2. A product of CKMs with disjoint scopes is a CKM.3. A weighted sum of CKMs with the same scope is a CKM.The scope of an operator is the set of query elements it takes as inputs; it is analogous to the receptivefield of a unit in a neural network, but with CKMs the query elements are not restricted to beingpixels on the image grid (e.g., they may be defined as a set of extracted image features). 
A leafkernel has singleton scope, internal nodes have scope over some subset of the query elements, andthe root node of the CKM has full scope of all query elements Exq. This definition allows forrich CKM architectures with many layers to represent elaborate compositions. The value of eachsum node child is multiplied by a weight wk;cand optionally a constant cost function (ei;j;ei0;j0)that rewards certain compositions of elements. Analogous to a multiclass SVM, the CKM has aseparate set of weights for each class cin the dataset. The CKM classifies a query instance asyCKM(xq) = arg maxcSc(xq), whereSc(xq)is the value of the root node of the CKM evaluatingquery instance xqusing weights for class c.Definition 2 (Friesen & Domingos (2016)) .A product node is decomposable iff the scopes of itschildren are disjoint. An SPF is decomposable iff all of its product nodes are decomposable.Theorem 1 (Sum-Product Theorem, Friesen & Domingos (2016)) .Every decomposable SPF canbe summed over its domain in time linear in its size.Corollary 1. Sc(xq)can sum over the set of virtual instances in time linear in the size of the SPF .Proof. For each query instance element eq;jwe define a discrete variable Zjwith a state for eachtraining element ei0;j0found in a leaf kernel KL(eq;j;ei0;j0)in the CKM. The Cartesian product ofthe domains of the variables Zdefines the set of virtual instances represented by the CKM. Sc(xq)is a SPF over semiring (R;;;0;1), variablesZ, constant functions wand, and univariatefunctionsKL(eq;j;Zj). With the appropriate definition of leaf kernels, any semiring can be used.The definition above provides that the children of every product node have disjoint scopes. Constantfunctions have empty scope so there is no intersection with scopes of other children. With all productnodes decomposable, Sc(xq)is a decomposable SPF and can therefore sum over all states of Z, thevirtual instances, in time linear to the size of the CKM.Special cases of CKMs include multiclass SVMs (flat sum-of-products) and naive Bayes nearestneighbor (Boiman et al., 2008) (flat product-of-sums). A CKM can be seen as a generalization ofan image grammar (Fu, 1974) where terminal symbols corresponding to pieces of training imagesare scored with kernels and non-terminal symbols are sum nodes with a production for each childproduct node.The weights and cost functions of the CKM control the weights on the virtual instances. Eachvirtual instance represented by the CKM defines a tree that connects the root to the leaf kernelsover its unique composition of training set elements. If we were to expand the CKM into a flatsum (cf. Equation 1), the weight on a virtual instance would be the product of the weights and costfunctions along the branches of its corresponding tree. These weights are important as they canprevent implausible virtual instances. For example, if we use image patches as the elements andallow all compositions, the set of virtual instances would largely contain nonsense noise patterns. If3Under review as a conference paper at ICLR 2017the elements were pixels, the virtual instances could even contain arbitrary images from classes notpresent in the training set. There are many aspects of composition that can be encoded by the CKM.For example, we can penalize virtual instances that compose training set elements using differentsymmetry group transformations. We could also penalize compositions that juxtapose elements thatdisagree on the contents of their borders. 
Weights can be learned to establish clusters of elements andreward certain arrangements. In Section 3 we demonstrate one choice of weights and cost functionsin a CKM architecture built from extracted image features.2.2 L EARNINGThe training procedure for a CKM builds an SPF that encodes the virtual instances. There are thentwo options for how to set weights in the model. As with k-NN, the weights in the CKM could be setto uniform. Alternatively, as with SVMs, the weights could be optimized to improve generalizationand reduce model size.For weight learning, we use block-coordinate gradient descent to optimize leave-one-out loss overthe training set. The leave-one-out loss on a training instance xiis the loss on that instance made bythe learner trained on all data except xi. Though it is an almost unbiased estimate of generalizationerror (Luntz & Brailovsky, 1969), it is typically too expensive to compute or optimize with non-IBLmethods (Chapelle et al., 2002). With CKMs, caching the SPFs and efficient data structures makeit feasible to compute exact partial derivatives of the leave-one-out loss over the whole training set.We use a multiclass squared-hinge lossL(xi;yi) = max2641 +Sy0(xi)|{z}Best incorrect classSyi(xi)|{z}True class;03752for the loss on training instance xiwith true label yiand highest-scoring incorrect class y0. Weuse the squared version of the hinge loss as it performs better empirically and prioritizes updatesto element weights that led to larger margin violations. In general, this objective is not convex asit involves the difference of the two discriminant functions which are strictly convex (due to thechoice of semiring and the product of weights on each virtual instance). In the special case of thesum-product semiring and unique weights on virtual instances the objective is convex as is true forL2-SVMs. Convnets also have a non-convex objective, but they require lengthy optimization toperform well. As we show in Section 3, CKMs can achieve high accuracy with uniform weights,which further serves as good initialization for gradient descent.For each epoch, we iterate through the training set, for each training instance xioptimizing the blockof weights on those branches with Exias descendants. We take gradient steps to lower the leave-one-out loss over the rest of the training setPi02([1;m]ni)L(xi0;yi0). We iterate until convergence oran early stopping condition. A component of the gradient of the squared-hinge loss on an instancetakes the form@@wk;cL(xi;yi) =8><>:2(xi;yi)@Sy0(xi)@wk;cif(xi;yi)>0^c=y02(xi;yi)@Syi(xi)@wk;cif(xi;yi)>0^c=yi0 otherwisewhere (xi;yi) = 1 +Sy0(xi)Syi(xi). We compute partial derivatives@Sc(xi)@wk;cwith backprop-agation through the SPF. For efficiency, terms of the gradient can be set to zero and excluded frombackpropagation if the values of corresponding leaf kernels are small enough. This is either exact(e.g., ifis maximization) or an approximation (e.g., if is normal addition).2.3 S CALABILITYCKMs have several scalability advantages over convnets. As mentioned previously, they do notrequire a lengthy training procedure. This makes it much easier to add new instances and categories.Whereas most of the computation to evaluate a single setting of convnet hyperparameters is sunk intraining, CKMs can efficiently race hyperparameters on hold-out data (Lee & Moore, 1994).The evaluation of the CKM depends on the structure of the SPF, the size of the training set, andthe computer architecture. 
A basic building block of these SPFs is a sum node with a numberof children on the order of magnitude of the training set elements jEj. On a sufficiently parallel4Under review as a conference paper at ICLR 2017Table 1: Dataset propertiesName #Training Exs. - #Testing Exs. Dimensions ClassesSmall NORB 24300-24300 9696 5NORB Compositions 100-1000 256256 2NORB Symmetries f50;100;:::; 12800g-2916 108108 6computer, assuming the size of the training set elements greatly exceeds the dimensionality of theleaf kernel, this sum node will require O(log(jEj))time (the depth of a parallel reduction circuit)andO(jEj)space. Duda et al. (2000) describe a constant time nearest neighbor circuit that relies onprecomputed V oronoi partitions, but this has impractical space requirements in high dimensions. Aswith SVMs, optimization of sparse element weights can greatly reduce model size.On a modest multicore computer, we must resort to using specialized data structures. Hash codescan be used to index raw features or to measure Hamming distance as a proxy to more expensivedistance functions. While they are perhaps the fastest method to accelerate a nearest neighbor search,the most accurate hashing methods involve a training period yet do not necessarily result in highrecall (Torralba et al., 2008; Heo et al., 2012). There are many space-partitioning data structuretrees in the literature, however in practice none are able to offer exact search of nearest neighbors inhigh dimensions in logarithmic time. In our experiments we use hierarchical k-means trees (Muja& Lowe, 2009), which are a good compromise between speed and accuracy.3 E XPERIMENTSWe test CKMs on three image classification scenarios that feature images from either the smallNORB dataset or the NORB jittered-cluttered dataset (LeCun et al., 2004). Both NORB datasetscontain greyscale images of five categories of plastic toys photographed with varied altitudes, az-imuths, and lighting conditions. Table 1 summarizes the datasets. We first describe the SPN archi-tecture and then detail each of the three scenarios.3.1 E XPERIMENTAL ARCHITECTUREIn our experiments the architecture of the SPF Sc(xq)for each query image is based on its uniqueset of extracted ORB features. Like SIFT features, ORB features are rotation-invariant and producea descriptor from intensity differences, but ORB is much faster to compute and thus suitable for realtime applications (Rublee et al., 2011). The elements Exi= (ei;1;:::;e i;jEij)of each image xiareits extracted keypoints, where an element’s feature vector and image position are denoted by ~f(ei;j)and~ p(ei;j)respectively. We use the max-sum semiring ( = max ,= + ) because it is morerobust to noisy virtual instances, yields sparser gradients, is more efficient to compute, and performsbetter empirically compared with the sum-product semiring.The SPFSc(xq)maximizes over variables Z= (Z1;:::;Z jExqj)corresponding to query elementsExqwith states for all possible virtual instances. The SPF contains a unary scope max node forevery variablefZjgthat maximizes over the weighted kernels of all possible training elements E:(Zj) =Lzj2Ewzj;cKL(zj;eq;j). The SPF contains a binary scope max node for all pairsof variablesfZj;Zj0gfor which at least one corresponding query element is within the k-nearestspatial neighbors of the other. 
These nodes maximize over the weighted kernels of all possible combinations of training set elements; the value of the node with scope {Z_j, Z_{j'}} is

\bigoplus_{z_j \in E} \bigoplus_{z_{j'} \in E} w_{z_j,c} \otimes w_{z_{j'},c} \otimes \lambda(z_j, z_{j'}) \otimes K_L(z_j, e_{q,j}) \otimes K_L(z_{j'}, e_{q,j'})    (2)

This maximizes over all possible pairs of training set elements, weighting the two leaf kernels by two corresponding element weights and a cost function. We use a leaf kernel for image elements that incorporates both the Hamming distance between their features and the Euclidean distance between their image positions:

K_L(e_{i,j}, e_{i',j'}) = \max\big(\theta_0 - \theta_1\, d_{\mathrm{Ham}}(\vec{f}(e_{i,j}), \vec{f}(e_{i',j'})),\ 0\big) + \max\big(\theta_2 - \lVert \vec{p}(e_{i,j}) - \vec{p}(e_{i',j'}) \rVert,\ \theta_3\big)

This rewards training set elements that look like a query instance element and appear in a similar location, with thresholds for efficiency. This can represent, for example, the photographic bias to center foreground objects or a discriminative cue from seeing sky at the top of the image. We use the pairwise cost function \lambda(e_{i,j}, e_{i',j'}) = 1[i = i']\,\theta_4 that rewards combinations of elements from the same source training image. This captures the intuition that compositions sourced from more images are less coherent and more likely to contain nonsense than those using fewer. The image is represented as a sum of these unary and binary max nodes. The scopes of children of the sum are restricted to be disjoint, so the children {(Z_1, Z_2), (Z_2, Z_3)} would be disallowed, for example. This restriction is what allows the SPF to be tractable, and with multiple sums the SPF has high treewidth. By comparison, a Markov random field expressing these dependencies would be intractable. The root max node of the SPF has P sums as children, each of which has its random set of unary and binary scope max node children that cover full scope Z. We illustrate a simplified version of the SPF architecture in Figure 1. Though this SPF models limited image structure, the definition of CKMs allows for more expressive architectures as with SPNs.

Figure 1: Simplified illustration of the SPF S_c(x_q) architecture with max-sum semiring used in experiments (using ORB features as elements, |E_{x_q}| ≈ 100). Red dots depict elements E_{x_q} of query instance x_q. Blue dots show training set elements e_{i,j} ∈ E, duplicated with each query element for clarity. A boxed K_L shows the leaf kernel with lines descending to its two element arguments. The nodes are labeled with their scopes. Weights and cost functions (arguments omitted) appear next to their nodes. Only a subset of the unary and binary scope nodes are drawn. Only two of the P top-level nodes are fully detailed (the children of the second are drawn faded).

In the following sections, we refer to two variants CKM and CKM_W. The CKM version uses uniform weights w_{k,c}, similar to the basic k-nearest neighbor algorithm. The CKM_W method optimizes weights w_{k,c} as described in Section 2.2. Both versions restrict weights for class c to be 1 (identity) for those training elements not in class c.
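The following sketch spells out one possible reading of the leaf kernel and pairwise cost above (the θ values are made-up placeholders; in the paper they are hyperparameters chosen by grid search, and the element record layout here is our own invention). It also evaluates a binary-scope node under the max-sum semiring with a plain double loop over training elements; a practical implementation would first shortlist candidate elements, e.g. by Hamming distance, rather than scanning all of E.

```python
import numpy as np

# Placeholder thresholds: in the paper these are hyperparameters set by grid search.
TH0, TH1, TH2, TH3, TH4 = 1.0, 1.0 / 256.0, 1.0, 0.0, 0.5

def leaf_kernel(f_q, p_q, f_e, p_e):
    """One reading of K_L: a thresholded feature term (Hamming distance between
    binary descriptors) plus a thresholded spatial term (Euclidean distance
    between keypoint positions)."""
    feat = max(TH0 - TH1 * np.count_nonzero(f_q != f_e), 0.0)
    spatial = max(TH2 - float(np.linalg.norm(p_q - p_e)), TH3)
    return feat + spatial

def binary_node_value(q1, q2, elements, log_w):
    """Max-sum value of a node with scope {Z_j, Z_j'}: maximize over all pairs of
    training elements, adding the two element weights, the two leaf kernels, and
    a cost that rewards pairs sourced from the same training image.  Elements are
    dicts with keys 'f' (bit descriptor), 'p' (position), 'img' (source image id)."""
    best = -np.inf
    for a, ea in enumerate(elements):
        for b, eb in enumerate(elements):
            cost = TH4 if ea["img"] == eb["img"] else 0.0
            best = max(best,
                       log_w[a] + log_w[b] + cost
                       + leaf_kernel(q1["f"], q1["p"], ea["f"], ea["p"])
                       + leaf_kernel(q2["f"], q2["p"], eb["f"], eb["p"]))
    return best
```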
This constraint ensures that method CKM isdiscriminative (as is true with k-NN) and reduces the number of parameters optimized by CKM W.The hyperparameters of ORB feature extraction, leaf kernels, cost function, and optimization werechosen using grid search on a validation set.With our CPU implementation, CKM trains in a single pass of feature extraction and storageat5ms/image, CKM Wtrains in under ten epochs at 90ms/image, and both versions test at80ms/image. The GPU-optimized convnets train at 2ms/image for many epochs and test at1ms/image. Remarkably, CKM on a CPU trains faster than the convnet on a GPU.3.2 S MALL NORBWe use the original train-test separation which measures generalization to new instances of a cate-gory (i.e. tested on toy truck that is different from the toys it was trained on). We show promisingresults in Table 2 comparing CKMs to deep and IBL methods. With improvement over k-NN andSVM, the CKM andCKM Wresults show the benefit of using virtual instances to combat the curseof dimensionality. We note that the CKM variant that does not optimize weights performs nearlyas well as the CKM Wversion that does. Since the test set uses a different set of toys, the use ofuntrained ORB features hurts the performance of the CKM. Convnets have an advantage here be-cause they discriminatively train their lowest level of features and represent richer image structure intheir architecture. To become competitive, future work should improve upon this preliminary CKM6Under review as a conference paper at ICLR 2017Table 2: Accuracy on Small NORBMethod AccuracyConvnet (14 epochs) (Bengio & LeCun, 2007) 94:0%DBM with aug. training (Salakhutdinov & Hinton, 2009) 92:8%CKM W 89:8%Convnet (2 epochs) (Bengio & LeCun, 2007) 89:6%DBM (Salakhutdinov & Hinton, 2009) 89:2%SVM (Gaussian kernel) (Bengio & LeCun, 2007) 88:4%CKM 88:3%k-NN (LeCun et al., 2004) 81:6%Logistic regression (LeCun et al., 2004) 77:5%Table 3: Accuracy on NORB CompositionsMethod Accuracy Train+Test (min)CKM 82:4% 1.5 [CPU]SVM with convnet features 75:0% 1 [GPU+CPU]Convnet 50:6% 9 [GPU]k-NN on image pixels 51:2% 0.2 [CPU]architecture. We demonstrate the advantage of CKMs for representing composition and symmetryin the following experiments.3.3 NORB C OMPOSITIONSA general goal of representation learning is to disentangle the factors of variation of a signal withouthaving to see those factors in all combinations. To evaluate progress towards this, we created imagescontaining three toys each, sourced from the small NORB training set. Small NORB contains tentypes of each toy category (e.g., ten different airplanes), which we divided into two collections. Eachimage is generated by choosing one of the collections uniformly and for each of three categories(person, airplane, animal) randomly sampling a toy from that collection with higher probability(P=56) than from the other collection (i.e., there are two children with disjoint toy collectionsbut they sometimes borrow). The task is to determine which of the two collections generated theimage. This dataset measures whether a method can distinguish different compositions withouthaving seen all possible permutations of those objects through symmetries and noisy intra-classvariation. Analogous tasks include identifying people by their clothing, recognizing social groupsby their members, and classifying cuisines by their ingredients.We compare CKMs to other methods in Table 3. Convnets and their features are computed using theTensorFlow library (Abadi et al., 2015). 
Training convnets from few images is very difficult withoutresorting to other datasets; we augment the training set with random crops, which still yields testaccuracy near chance. In such situations it is common to train an SVM with features extracted bya convnet trained on a different, larger dataset. We use 2048-dimensional features extracted fromthe penultimate layer of the pre-trained Inception network (Szegedy et al., 2015) and a linear kernelSVM with squared-hinge loss (Pedregosa et al., 2011). Notably, the CKM is much more accuratethan the deep methods, and it is about as fast as the SVM despite not taking advantage of the GPU.Figure 2: Images from NORB Compositions3.4 NORB S YMMETRIESComposition is a useful tool for modeling the symmetries of objects. When we see an image of anobject in a new pose, parts of the image may look similar to parts of images of the object in poses wehave seen before. In this experiment, we partition the training set of NORB jittered-cluttered into a7Under review as a conference paper at ICLR 2017new dataset with 10% withheld for each of validation and testing. Training and testing on the samegroup of toy instances, this measures the ability to generalize to new angles, lighting conditions,backgrounds, and distortions.We vary the amount of training data to plot learning curves in Figure 3. We observe that CKMs arebetter able to generalize to these distortions than other methods, especially with less data. Impor-tantly, the performance of CKM improves with more data, without requiring costly optimization asdata is added. We note that the benefit of CKM Wusing weight learning becomes apparent with 200training instances. This learning curve suggests that CKMs would be well suited for applications incluttered environments with many 3D transformations (e.g., loop closure).50 200 800 3200 12800Training Instances 0% 25% 50% 75%100%AccuracyCKMwCKMSVM with convnet featuresConvnetk-NNFigure 3: Number of training instances versus accuracy on unseen symmetries in NORB4 C ONCLUSIONThis paper proposed compositional kernel machines, an instance-based method for object recog-nition that addresses some of the weaknesses of deep architectures and other kernel methods. Weshowed how using a sum-product function to represent a discriminant function leads to tractablesummation over the weighted kernels to an exponential set of virtual instances, which can mitigatethe curse of dimensionality and improve sample complexity. We proposed a method to discrimina-tively learn weights on individual instance elements and showed that this improves upon uniformweighting. Finally, we presented results in several scenarios showing that CKMs are a significantimprovement for IBL and show promise compared with deep methods.Future research directions include developing other architectures and learning procedures for CKMs,integrating symmetry transformations into the architecture through kernels and cost functions, andapplying CKMs to structured prediction, regression, and reinforcement learning problems. CKMsexhibit a reversed trade-off of fast learning speed and large model size compared to neural networks.Given that animals can benefit from both trade-offs, these results may inspire computational theoriesof different brain structures, especially the neocortex versus the cerebellum (Ito, 2012).ACKNOWLEDGMENTSThe authors are grateful to John Platt for helpful discussions and feedback. 
This research was partlysupported by ONR grant N00014-16-1-2697, AFRL contract FA8750-13-2-0019, a Google PhDFellowship, an AWS in Education Grant, and an NVIDIA academic hardware grant. The views andconclusions contained in this document are those of the authors and should not be interpreted asnecessarily representing the official policies, either expressed or implied, of ONR, AFRL, or theUnited States Government.8Under review as a conference paper at ICLR 2017REFERENCESMart ́ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S.Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, AndrewHarp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, ManjunathKudlur, Josh Levenberg, Dan Man ́e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah,Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vin-cent Vanhoucke, Vijay Vasudevan, Fernanda Vi ́egas, Oriol Vinyals, Pete Warden, Martin Watten-berg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learningon heterogeneous systems, 2015. URL http://tensorflow.org/ . Software available fromtensorflow.org.Nachman Aronszajn. Theory of reproducing kernels. Transactions of the American MathematicalSociety , 68(3):337–404, 1950.Yoshua Bengio and Yann LeCun. Scaling learning algorithms towards AI. Large-Scale KernelMachines , 34(5), 2007.Oren Boiman, Eli Shechtman, and Michal Irani. In defense of nearest-neighbor based image clas-sification. In Computer Vision and Pattern Recognition (CVPR), IEEE Conference on , pp. 1992–1999. IEEE, 2008.Olivier Chapelle, Vladimir Vapnik, Olivier Bousquet, and Sayan Mukherjee. Choosing multipleparameters for support vector machines. Machine Learning , 46(1-3):131–159, 2002.Koby Crammer and Yoram Singer. On the algorithmic implementation of multiclass kernel-basedvector machines. Journal of Machine Learning Research , 2(Dec):265–292, 2001.Richard O Duda, Peter E Hart, and David G Stork. Pattern Classification . John Wiley & Sons, 2000.Abram L Friesen and Pedro Domingos. The sum-product theorem: A foundation for learningtractable models. In Proceedings of the 33rd International Conference on Machine Learning ,2016.King Sun Fu. Syntactic Methods in Pattern Recognition , volume 112. Elsevier, 1974.Jae-Pil Heo, Youngwoon Lee, Junfeng He, Shih-Fu Chang, and Sung-Eui Yoon. Spherical hashing.InComputer Vision and Pattern Recognition (CVPR), IEEE Conference on , pp. 2957–2964. IEEE,2012.Masao Ito. The Cerebellum: Brain for an Implicit Self . FT press, 2012.Yann LeCun, Fu Jie Huang, and L ́eon Bottou. Learning methods for generic object recognitionwith invariance to pose and lighting. In Computer Vision and Pattern Recognition (CVPR), IEEEConference on , volume 2, pp. 97–104. IEEE, 2004.Mary S Lee and AW Moore. Efficient algorithms for minimizing cross validation error. In Pro-ceedings of the 8th International Conference on Machine Learning , pp. 190. Morgan Kaufmann,1994.Aleksandr Luntz and Viktor Brailovsky. On estimation of characters obtained in statistical procedureof recognition. Technicheskaya Kibernetica , 3(6):6–12, 1969.Marius Muja and David G Lowe. Fast approximate nearest neighbors with automatic algorithm con-figuration. In International Conference on Computer Vision Theory and Application (VISSAPP) ,pp. 
331–340, 2009.Fabian Pedregosa, Ga ̈el Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, OlivierGrisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincnet Dubourg, Jake Vanderplas,Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and ́Edouard Duch-esnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research , 12:2825–2830, 2011.9Under review as a conference paper at ICLR 2017John C Platt and Timothy P Allen. A neural network classifier for the I1000 OCR chip. In Advancesin Neural Information Processing Systems 9 , pp. 938–944, 1996.Ethan Rublee, Vincent Rabaud, Kurt Konolige, and Gary Bradski. ORB: An efficient alternative toSIFT or SURF. In 2011 International Conference on Computer Vision , pp. 2564–2571. IEEE,2011.Ruslan Salakhutdinov and Geoffrey E Hinton. Deep Boltzmann machines. In Proceedings of the12th Conference on Artificial Intelligence and Statistics (AISTATS) , pp. 448–455. Society forArtificial Intelligence and Statistics, 2009.Bernhard Sch ̈olkopf, Chris Burges, and Vladimir Vapnik. Incorporating invariances in support vec-tor learning machines. In Artificial Neural Networks (ICANN) , pp. 47–52. Springer, 1996.Patrice Simard, Yann LeCun, and John S Denker. Efficient pattern recognition using a new transfor-mation distance. In Advances in Neural Information Processing Systems 5 , 1992.Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Re-thinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567 , 2015.Antonio Torralba, Rob Fergus, and Yair Weiss. Small codes and large image databases for recogni-tion. In Computer Vision and Pattern Recognition (CVPR), IEEE Conference on , pp. 2269–2276.IEEE, 2008.10
rkNaLgzVe
BJAFbaolg
ICLR.cc/2017/conference/-/paper589/official/review
{"title": "interesting idea", "rating": "8: Top 50% of accepted papers, clear accept", "review": "This paper trains a generative model which transforms noise into model samples by a gradual denoising process. It is similar to a generative model based on diffusion. Unlike the diffusion approach:\n- It uses only a small number of denoising steps, and is thus far more computationally efficient.\n- Rather than consisting of a reverse trajectory, the conditional chain for the approximate posterior jumps to q(z(0) | x), and then runs in the same direction as the generative model. This allows the inference chain to behave like a perturbation around the generative model, that pulls it towards the data. (This also seems somewhat related to ladder networks.)\n- There is no tractable variational bound on the log likelihood.\n\nI liked the idea, and found the visual sample quality given a short chain impressive. The inpainting results were particularly nice, since one shot inpainting is not possible under most generative modeling frameworks. It would be much more convincing to have a log likelihood comparison that doesn't depend on Parzen likelihoods.\n\nDetailed comments follow:\n\nSec. 2:\n\"theta(0) the\" -> \"theta(0) be the\"\n\"theta(t) the\" -> \"theta(t) be the\"\n\"what we will be using\" -> \"which we will be doing\"\nI like that you infer q(z^0|x), and then run inference in the same order as the generative chain. This reminds me slightly of ladder networks.\n\"q*. Having learned\" -> \"q*. [paragraph break] Having learned\"\nSec 3.3:\n\"learn to inverse\" -> \"learn to reverse\"\nSec. 4:\n\"For each experiments\" -> \"For each experiment\"\nHow sensitive are your results to infusion rate?\nSec. 5: \"appears to provide more accurate models\" I don't think you showed this -- there's no direct comparison to the Sohl-Dickstein paper.\nFig 4. -- neat!\n", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Learning to Generate Samples from Noise through Infusion Training
["Florian Bordes", "Sina Honari", "Pascal Vincent"]
In this work, we investigate a novel training procedure to learn a generative model as the transition operator of a Markov chain, such that, when applied repeatedly on an unstructured random noise sample, it will denoise it into a sample that matches the target distribution from the training set. The novel training procedure to learn this progressive denoising operation involves sampling from a slightly different chain than the model chain used for generation in the absence of a denoising target. In the training chain we infuse information from the training target example that we would like the chains to reach with a high probability. The thus learned transition operator is able to produce quality and varied samples in a small number of steps. Experiments show competitive results compared to the samples generated with a basic Generative Adversarial Net.
["Deep learning", "Unsupervised Learning"]
https://openreview.net/forum?id=BJAFbaolg
https://openreview.net/pdf?id=BJAFbaolg
https://openreview.net/forum?id=BJAFbaolg&noteId=rkNaLgzVe
Published as a conference paper at ICLR 2017LEARNING TO GENERATE SAMPLES FROM NOISETHROUGH INFUSION TRAININGFlorian Bordes, Sina Honari, Pascal VincentMontreal Institute for Learning Algorithms (MILA)D ́epartement d’Informatique et de Recherche Op ́erationnelleUniversit ́e de Montr ́ealMontr ́eal, Qu ́ebec, Canadaffirstname.lastname@umontreal.ca gABSTRACTIn this work, we investigate a novel training procedure to learn a generative modelas the transition operator of a Markov chain, such that, when applied repeatedly onan unstructured random noise sample, it will denoise it into a sample that matchesthe target distribution from the training set. The novel training procedure to learnthis progressive denoising operation involves sampling from a slightly differentchain than the model chain used for generation in the absence of a denoising tar-get. In the training chain we infuse information from the training target examplethat we would like the chains to reach with a high probability. The thus learnedtransition operator is able to produce quality and varied samples in a small numberof steps. Experiments show competitive results compared to the samples gener-ated with a basic Generative Adversarial Net.1 I NTRODUCTION AND MOTIVATIONTo go beyond the relatively simpler tasks of classification and regression, advancing our ability tolearn good generative models of high-dimensional data appears essential. There are many scenarioswhere one needs to efficiently produce good high-dimensional outputs where output dimensionshave unknown intricate statistical dependencies: from generating realistic images, segmentations,text, speech, keypoint or joint positions, etc..., possibly as an answer to the same, other, or multipleinput modalities. These are typically cases where there is not just one right answer but a variety ofequally valid ones following a non-trivial and unknown distribution. A fundamental ingredient forsuch scenarios is thus the ability to learn a good generative model from data, one from which wecan subsequently efficiently generate varied samples of high quality.Many approaches for learning to generate high dimensional samples have been and are still activelybeing investigated. These approaches can be roughly classified under the following broad categories:Ordered visible dimension sampling (van den Oord et al., 2016; Larochelle & Murray,2011). In this type of auto-regressive approach, output dimensions (or groups of condition-ally independent dimensions) are given an arbitrary fixed ordering, and each is sampledconditionally on the previous sampled ones. This strategy is often implemented using arecurrent network (LSTM or GRU). Desirable properties of this type of strategy are thatthe exact log likelihood can usually be computed tractably, and sampling is exact. Unde-sirable properties follow from the forced ordering, whose arbitrariness feels unsatisfactoryespecially for domains that do not have a natural ordering (e.g. images), and imposes forhigh-dimensional output a long sequential generation that can be slow.Undirected graphical models with multiple layers of latent variables. These make infer-ence, and thus learning, particularly hard and tend to be costly to sample from (Salakhutdi-nov & Hinton, 2009).Directed graphical models trained as variational autoencoders (V AE) (Kingma & Welling,2014; Rezende et al., 2014)Associate Fellow, Canadian Institute For Advanced Research (CIFAR)1Published as a conference paper at ICLR 2017Adversarially-trained generative networks. 
(GAN)(Goodfellow et al., 2014)Stochastic neural networks, i.e. networks with stochastic neurons, trained by an adaptedform of stochastic backpropagationGenerative uses of denoising autoencoders (Vincent et al., 2010) and their generalizationas Generative Stochastic Networks (Alain et al., 2016)Inverting a non-equilibrium thermodynamic slow diffusion process (Sohl-Dickstein et al.,2015)Continuous transformation of a distribution by invertible functions (Dinh et al. (2014), alsoused for variational inference in Rezende & Mohamed (2015))Several of these approaches are based on maximizing an explicit or implicit model log-likelihood ora lower bound of its log-likelihood, but some successful ones are not e.g. GANs. The approach wepropose here is based on the notion of “denoising” and thus takes its root in denoising autoencodersand the GSN type of approaches. It is also highly related to the non-equilibrium thermodynamicsinverse diffusion approach of Sohl-Dickstein et al. (2015). One key aspect that distinguishes thesetypes of methods from others listed above is that sample generation is achieved thanks to a learnedstochastic mapping from input space to input space, rather than from a latent-space to input-space.Specifically, in the present work, we propose to learn to generate high quality samples through aprocess of progressive ,stochastic, denoising , starting from a simple initial “noise” sample generatedin input space from a simple factorial distribution i.e. one that does not take into account anydependency or structure between dimensions. This, in effect, amounts to learning the transitionoperator of a Markov chain operating on input space. Starting from such an initial “noise” input,and repeatedly applying the operator for a small fixed number Tof steps, we aim to obtain a highquality resulting sample, effectively modeling the training data distribution. Our training procedureuses a novel “target-infusion” technique, designed to slightly bias model sampling to move towardsa specific data point during training, and thus provide inputs to denoise which are likely under themodel’s sample generation paths. By contrast with Sohl-Dickstein et al. (2015) which consists ininverting a slow and fixed diffusion process, our infusion chains make a few large jumps and followthe model distribution as the learning progresses.The rest of this paper is structured as follows: Section 2 formally defines the model and trainingprocedure. Section 3 discusses and contrasts our approach with the most related methods fromthe literature. Section 4 presents experiments that validate the approach. Section 5 concludes andproposes future work directions.2 P ROPOSED APPROACH2.1 S ETUPWe are given a finite data set Dcontainingnpoints in Rd, supposed drawn i.i.d from an unknowndistribution q. The data set Dis supposed split into training, validation and test subsets Dtrain,Dvalid,Dtest. We will denote qtrain theempirical distribution associated to the training set, and usexto denote observed samples from the data set. We are interested in learning the parameters of agenerative model pconceived as a Markov Chain from which we can efficiently sample. Note thatwe are interested in learning an operator that will display fast “burn-in” from the initial factorial“noise” distribution, but beyond the initial Tsteps we are not concerned about potential slow mixingor being stuck. 
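Before the formal description of the sampling procedure in Section 2.2 below, a minimal sketch of the kind of chain we have in mind may help: a factorial Gaussian "noise" sample repeatedly passed through a learned Gaussian transition operator for T steps. This is our own illustrative code, not the reference implementation; `transition_net` is a stand-in for the neural network that outputs per-dimension means and variances, and the dummy operator at the end exists only to make the snippet runnable.

```python
import numpy as np

def sample_chain(transition_net, mu0, var0, T, rng):
    """Model sampling chain: z(0) ~ factorial Gaussian 'noise' prior, then T steps
    z(t) ~ N(mu(z(t-1)), sigma^2(z(t-1))) given by the learned transition operator.
    `transition_net(z, t)` is a stand-in returning (mu, var) arrays of z's shape."""
    z = mu0 + np.sqrt(var0) * rng.standard_normal(mu0.shape)  # unstructured start
    for t in range(1, T + 1):
        mu, var = transition_net(z, t)
        z = mu + np.sqrt(var) * rng.standard_normal(mu.shape)
    return z  # z(T) is read out as the generated sample

rng = np.random.default_rng(0)
dummy_net = lambda z, t: (0.9 * z, np.full_like(z, 0.01))     # placeholder operator
sample = sample_chain(dummy_net, np.zeros(784), np.ones(784), T=15, rng=rng)
```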
We will first describe the sampling procedure used to sample from a trained model,before explaining our training procedure.2.2 G ENERATIVE MODEL SAMPLING PROCEDUREThe generative model pisdefined as the following sampling procedure:Using a simple factorial distribution p(0)(z(0)), draw an initial sample z(0)p(0), wherez(0)2Rd. Sincep(0)is factorial, the dcomponents of z(0)are independent: p0cannotmodel any dependency structure. z(0)can be pictured as essentially unstructured randomnoise.Repeatedly apply Ttimes a stochastic transition operator p(t)(z(t)jz(t1)), yielding a more“denoised” sample z(t)p(t)(z(t)jz(t1)), where all z(t)2Rd.2Published as a conference paper at ICLR 2017Figure 1: The model sampling chain . Each row shows a sample from p(z(0);:::;z(T))for a modelthat has been trained on MNIST digits. We see how the learned Markov transition operator progres-sively denoises an initial unstructured noise sample. We can also see that there remains ambiguity inthe early steps as to what digit this could become. This ambiguity gets resolved only in later steps.Even after a few initial steps, stochasticity could have made a chain move to a different final digitshape.Output z(T)as the final generated sample. Our generative model distri-bution is thus p(z(T)), the marginal associated to joint p(z(0);:::;z(T)) =p(0)(z(0))QTt=1p(t)(z(t)jz(t1)).In summary, samples from model pare generated, starting with an initial sample from a simpledistributionp(0), by taking the Tthsample along Markov chain z(0)!z(1)!z(2)!:::!z(T)whose transition operator is p(t)(z(t)jz(t1)). We will call this chain the model sampling chain .Figure 1 illustrates this sampling procedure using a model (i.e. transition operator) that was trainedon MNIST. Note that we impose no formal requirement that the chain converges to a stationarydistribution, as we simply read-out z(T)as the samples from our model p. The chain also needs notbe time-homogeneous, as highlighted by notation p(t)for the transitions.The set of parameters of modelpcomprise the parameters of p(0)and the parameters of tran-sition operator p(t)(z(t)jz(t1)). For tractability, learnability, and efficient sampling, these dis-tributions will be chosen factorial, i.e. p(0)(z(0)) =Qdi=1p(0)i(z(0)i)andp(t)(z(t)jz(t1)) =Qdi=1p(t)i(z(t)ijz(t1)). Note that the conditional distribution of an individual component i,p(t)i(z(t)ijz(t1))may however be multimodal, e.g. a mixture in which case p(t)(z(t)jz(t1))wouldbe a product of independent mixtures (conditioned on z(t1)), one per dimension. In our exper-iments, we will take the p(t)(z(t)jz(t1))to be simple diagonal Gaussian yielding a Deep LatentGaussian Model (DLGM) as in Rezende et al. (2014).2.3 I NFUSION TRAINING PROCEDUREWe want to train the parameters of model psuch that samples from Dtrain are likely of being gener-ated under the model sampling chain . Let(0)be the parameters of p(0)and let(t)be the parametersofp(t)(z(t)jz(t1)). Note that parameters (t)fort>0can straightforwardly be shared across timesteps, which we will be doing in practice. Having committed to using (conditionally) factorial dis-tributions for our p(0)(z(0))andp(t)(z(t)jz(t1)), that are both easy to learn and cheap to samplefrom, let us first consider the following greedy stagewise procedure. 
We can easily learn p(0)i(z(0))to model the marginal distribution of each component xiof the input, by training it by gradientdescent on a maximum likelihood objective, i.e.(0)= arg maxExqtrainhlogp(0)(x;)i(1)This gives us a first, very crude unstructured (factorial) model of q.3Published as a conference paper at ICLR 2017Having learned this p(0), we might be tempted to then greedily learn the next stage p(1)ofthe chain in a similar fashion, after drawing samples z(0)p(0)in an attempt to learn to“denoise” the sampled z(0)intox. Yet the corresponding following training objective (1)=arg maxExqtrain;z(0)p(0)logp(1)(xjz(0);)makes no sense: xandz(0)are sampled inde-pendently of each other so z(0)contains no information about x, hencep(1)(xjz(0)) =p(1)(x). Somaximizing this second objective becomes essentially the same as what we did when learning p(0).We would learn nothing more. It is essential, if we hope to learn a useful conditional distributionp(1)(xjz(0))that it be trained on particular z(0)containing some information about x. In otherwords, we should not take our training inputs to be samples from p(0)but from a slightly differentdistribution, biased towards containing some information about x. Let us call it q(0)(z(0)jx). Anatural choice for it, if it were possible, would be to take q(0)(z(0)jx) =p(z(0)jz(T)=x)but thisis an intractable inference, as all intermediate z(t)between z(0)andz(T)are effectively latent statesthat we would need to marginalize over. Using a workaround such as a variational or MCMC ap-proach would be a usual fallback. Instead, let us focus on our initial intent of guiding a progressivestochastic denoising, and think if we can come up with a different way to construct q(0)(z(0)jx)andsimilarly for the next steps q(t)i(~z(t)ij~z(t1);x).Eventually, we expect a sequence of samples from Markov chain pto move from initial “noise”towards a specific example xfrom the training set rather than another one, primarily if a samplealong the chain “resembles” xto some degree. This means that the transition operator should learnto pick up a minor resemblance with an xin order to transition to something likely to be evenmore similar to x. In other words, we expect samples along a chain leading to xto both havehigh probability under the transition operator of the chain p(t)(z(t)jz(t1)),andto have some formof at least partial “resemblance” with xlikely to increase as we progress along the chain. Onehighly inefficient way to emulate such a chain of samples would be, for teach step t, to samplemany candidate samples from the transition operator (a conditionally factorial distribution) until wegenerate one that has some minimal “resemblance” to x(e.g. for a discrete space, this resemblancemeasure could be based on their Hamming distance). A qualitatively similar result can be obtainedat a negligible cost by sampling from a factorial distribution that is very close to the one given by thetransition operator, but very slightly biased towards producing something closer to x. Specifically,we can “infuse” a little of xinto our sample by choosing for each input dimension, whether wesample it from the distribution given for that dimension by the transition operator, or whether, witha small probability, we take the value of that dimension from x. Samples from this biased chain, inwhich we slightly “infuse” x, will provide us with the inputs of our input-target training pairs forthe transition operator. 
The target part of the training pairs is simply x.2.3.1 T HE INFUSION CHAINFormally we define an infusion chain ez(0)!ez(1)!:::!ez(T1)whose distributionq(ez(0);:::;ez(T1)jx)will be “close” to the sampling chain z(0)!z(1)!z(2)!:::!z(T1)of modelpin the sense that q(t)(~z(t)j~z(t1);x)will be close to p(t)(z(t)jz(t1)), but will at ev-ery step be slightly biased towards generating samples closer to target x, i.e. xgets progres-sively “infused” into the chain. This is achieved by defining q(0)i(ez(0)ijx)as a mixture betweenp(0)i(with a large mixture weight) and xi, a concentrated unimodal distribution around xi, suchas a Gaussian with small variance (with a small mixture weight)1. Formally q(0)i(~z(0)ijx) =(1(t))p(0)i(~z(0)i) +(t)xi(~z(0)i), where 1(t)and(t)are the mixture weights2. Inother words, when sampling a value for ~z(0)ifromq(0)ithere will be a small probability (0)to pick value close to xi(as sampled from xi) rather than sampling the value from p(0)i. Wecall(t)theinfusion rate . We define the transition operator of the infusion chain similarly as:q(t)i(~z(t)ij~z(t1);x) = (1(t))p(t)i(~z(t)ij~ z(t1)) +(t)xi(~z(t)i).1Note thatxidoes not denote a Dirac-Delta but a Gaussian with small sigma.2In all experiments, we use an increasing schedule (t)=(t1)+!with(0)and!constant. This allowsto build our chain such that in the first steps, we give little information about the target and in the last steps wegive more informations about the target. This forces the network to have less confidence (greater incertitude)at the beginning of the chain and more confidence on the convergence point at the end of the chain.4Published as a conference paper at ICLR 2017Figure 2: Training infusion chains, infused with target x= . This figure shows the evolutionof chainq(z(0);:::;z(30)jx)as training on MNIST progresses. Top row is after network randomweight initialization. Second row is after 1 training epochs, third after 2 training epochs, and so on.Each of these images were at a time provided as the input part of the ( input ,target ) training pairs forthe network. The network was trained to denoise all of them into target 3. We see that as trainingprogresses, the model has learned to pick up the cues provided by target infusion, to move towardsthat target. Note also that a single denoising step, even with target infusion, is not sufficient for thenetwork to produce a sharp well identified digit.2.3.2 D ENOISING -BASED INFUSION TRAINING PROCEDUREFor all x2Dtrain:Sample from the infusion chain ~ z= (~z(0);:::; ~z(T1))q(~z(0);:::; ~z(T1)jx).precisely so: ~z0q(0)(~z(0)jx):::~z(t)q(t)(~z(t)j~z(t1);x):::Perform a gradient step so that plearns to “denoise” every ~z(t)intox.(t) (t)+(t)@logp(t)(xj~z(t1);(t))@(t)where(t)is a scalar learning rate.3As illustrated in Figure 2, the distribution of samples from the infusion chain evolves as trainingprogresses, since this chain remains close to the model sampling chain.2.4 S TOCHASTIC LOG LIKELIHOOD ESTIMATIONThe exact log-likelihood of the generative model implied by our model pis intractable. 
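As a concrete illustration of Sections 2.3.1 and 2.3.2 above, the sketch below builds one infusion chain for a target x with an increasing infusion rate α(t) = α(0) + ωt: at every step each dimension is drawn either from the model transition operator or, with probability α(t), from a narrow Gaussian around the corresponding dimension of x. The actual parameter update, a gradient step on log p(t)(x | z̃(t−1)), is omitted here and would be carried out with an automatic-differentiation framework; the names and signatures are ours, not the reference implementation's.

```python
import numpy as np

def infusion_chain(x, transition_net, mu0, var0, T, alpha0, omega, sigma_x, rng):
    """Sample one infusion chain z~(0), ..., z~(T-1) for target x with increasing
    infusion rate alpha(t) = alpha0 + omega * t.  In denoising-based infusion
    training, each chain sample z~(t-1) is paired with target x and a gradient
    step is taken on log p(t)(x | z~(t-1)) (the update itself is omitted here)."""
    chain, mu, var, alpha = [], mu0, var0, alpha0
    for t in range(T):
        from_target = rng.random(x.shape) < alpha           # per-dimension infusion
        model_draw = mu + np.sqrt(var) * rng.standard_normal(x.shape)
        target_draw = x + sigma_x * rng.standard_normal(x.shape)
        z = np.where(from_target, target_draw, model_draw)
        chain.append(z)
        mu, var = transition_net(z, t + 1)                  # next step's p(. | z~(t))
        alpha = alpha + omega
    return chain
```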
The log-probability of an example xcan however be expressed using proposal distribution qas:logp(x) = log Eq(ezjx)p(~ z;x)q(ezjx)(2)Using Jensen’s inequality we can thus derive the following lower bound:logp(x)Eq(ezjx)[logp(~ z;x)logq(ezjx)] (3)where logp(~ z;x) = logp(0)(~z(0)) +PT1t=1logp(t)(~z(t)j~z(t1))+ logp(T)(xj~z(T1))andlogq(~ zjx) = logq(0)(~z(0)jx) +PT1t=1logq(t)(~z(t)j~z(t1);x).3Since we will be sharing parameters between the p(t), in order for the expected larger error gradients onthe earlier transitions not to dominate the parameter updates over the later transitions we used an increasingschedule(t)=0tTfort2f1;:::;Tg.5Published as a conference paper at ICLR 2017A stochastic estimation can easily be obtained by replacing the expectation by an average using afew samples from q(ezjx). We can thus compute a lower bound estimate of the average log likelihoodover training, validation and test data.Similarly in addition to the lower-bound based on Eq.3 we can use the same few samples fromq(ezjx)to get an importance-sampling estimate of the likelihood based on Eq. 24.2.4.1 L OWER -BOUND -BASED INFUSION TRAINING PROCEDURESince we have derived a lower bound on the likelihood, we can alternatively choose to optimize thisstochastic lower-bound directly during training. This alternative lower-bound based infusion train-ing procedure differs only slightly from the denoising-based infusion training procedure by using~z(t)as a training target at step t(performing a gradient step to increase logp(t)(~z(t)j~z(t1);(t)))whereas denoising training always uses xas its target (performing a gradient step to increaselogp(t)(xj~z(t1);(t))). Note that the same reparametrization trick as used in Variational Auto-encoders (Kingma & Welling, 2014) can be used here to backpropagate through the chain’s Gaussiansampling.3 R ELATIONSHIP TO PREVIOUSLY PROPOSED APPROACHES3.1 M ARKOV CHAIN MONTE CARLO FOR ENERGY -BASED MODELSGenerating samples as a repeated application of a Markov transition operator that operates on inputspace is at the heart of Markov Chain Monte Carlo (MCMC) methods. They allow sampling from anenergy-model, where one can efficiently compute the energy or unnormalized negated log probabil-ity (or density) at any point. The transition operator is then derived from an explicit energy functionsuch that the Markov chain prescribed by a specific MCMC method is guaranteed to converge tothe distribution defined by that energy function, as the equilibrium distribution of the chain. MCMCtechniques have thus been used to obtain samples from the energy model, in the process of learningto adjust its parameters.By contrast here we do not learn an explicit energy function, but rather learn directly a parameterizedtransition operator, and define an implicit model distribution based on the result of running theMarkov chain.3.2 V ARIATIONAL AUTO -ENCODERSVariational auto-encoders (V AE) (Kingma & Welling, 2014; Rezende et al., 2014) also start froman unstructured (independent) noise sample and non-linearly transform this into a distribution thatmatches the training data. One difference with our approach is that the V AE typically maps from alower-dimensional space to the observation space. By contrast we learn a stochastic transition oper-ator from input space to input space that we repeat for Tsteps. Another key difference, is that theV AE learns a complex heavily parameterized approximate posterior proposal qwhereas our infusionbasedqcan be understood as a simple heuristic proposal distribution based on p. 
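Tying back to Section 2.4, the estimators of Eq. (2) and Eq. (3) reduce to a few lines once the per-sample quantities log p(z̃, x) and log q(z̃|x) are available. The sketch below is our own helper, not the paper's code; it computes the stochastic lower bound and the importance-sampling estimate with a numerically stable log-sum-exp.

```python
import numpy as np

def loglik_estimates(log_p_joint, log_q):
    """Given k chain samples z~ ~ q(.|x) with their log p(z~, x) and log q(z~|x),
    return (lower_bound, importance_sampling) estimates of log p(x)."""
    ell = np.asarray(log_p_joint) - np.asarray(log_q)  # per-sample log-weights
    lower_bound = ell.mean()                           # Eq. (3), Jensen bound
    m = ell.max()                                      # stable log-sum-exp for Eq. (2)
    importance = m + np.log(np.mean(np.exp(ell - m)))
    return lower_bound, importance
```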
Importantly thespecific heuristic we use to infuse xintoqmakes sense precisely because our operator is a map frominput space to input space, and couldn’t be readily applied otherwise. The generative network inRezende et al. (2014) is a Deep Latent Gaussian Model (DLGM) just as ours. But their approximateposteriorqis taken to be factorial, including across all layers of the DLGM, whereas our infusionbasedqinvolves an ordered sampling of the layers, as we sample from q(t)(~z(t)j~z(t1);x).More recent proposals involve sophisticated approaches to sample from better approximate poste-riors, as the work of Salimans et al. (2015) in which Hamiltonian Monte Carlo is combined withvariational inference, which looks very promising, though computationally expensive, and Rezende& Mohamed (2015) that generalizes the use of normalizing flows to obtain a better approximateposterior.4Specifically, the two estimates (lower-bound and IS) start by collecting ksamples from q(ezjx)and com-puting for each the corresponding `= logp(~ z;x)logq(ezjx). The lower-bound estimate is then obtainedby averaging the resulting `1;:::` k, whereas the IS estimate is obtained by taking the logof the averagede`1;:::;e`k(in a numerical stable manner as logsumexp( `1;:::;` k)logk).6Published as a conference paper at ICLR 20173.3 S AMPLING FROM AUTOENCODERS AND GENERATIVE STOCHASTIC NETWORKSEarlier works that propose to directly learn a transition operator resulted from research to turn au-toencoder variants that have a stochastic component, in particular denoising autoencoders (Vincentet al., 2010), into generative models that one can sample from. This development is natural, sincea stochastic auto-encoder isa stochastic transition operator form input space to input space. Gen-erative Stochastic Networks (GSN) (Alain et al., 2016) generalized insights from earlier stochasticautoencoder sampling heuristics (Rifai et al., 2012) into a more formal and general framework.These previous works on generative uses of autoencoders and GSNs attempt to learn a chain whoseequilibrium distribution will fit the training data. Because autoencoders and the chain are typicallystarted from or very close to training data points, they are concerned with the chain mixing quicklybetween modes. By contrast our model chain is always restarted from unstructured noise, and isnot required to reach or even have an equilibrium distribution. Our concern is only what happensduring theT“burn-in” initial steps, and to make sure that it transforms the initial factorial noisedistribution into something that best fits the training data distribution. There are no mixing concernsbeyond those Tinitial steps.A related aspect and limitation of previous denoising autoencoder and GSN approaches is that thesewere mainly “local” around training samples: the stochastic operator explored space starting fromand primarily centered around training examples, and learned based on inputs in these parts of spaceonly. Spurious modes in the generated samples might result from large unexplored parts of spacethat one might encounter while running a long chain.3.4 R EVERSING A DIFFUSION PROCESS IN NON -EQUILIBRIUM THERMODYNAMICSThe approach of Sohl-Dickstein et al. (2015) is probably the closest to the approach we develop here.Both share a similar model sampling chain that starts from unstructured factorial noise. Neitherare concerned about an equilibrium distribution . They are however quite different in several keyaspects: Sohl-Dickstein et al. 
(2015) proceed to invert an explicit diffusion process that starts froma training set example and very slowly destroys its structure to become this random noise, they thenlearn to reverse this process i.e. an inverse diffusion . To maintain the theoretical argument thattheexact reverse process has the same distributional form (e.g. p(x(t1)jx(t))andp(x(t)jx(t1))both factorial Gaussians), the diffusion has to be infinitesimal by construction, hence the proposedapproaches uses chains with thousands of tiny steps. Instead, our aim is to learn an operator that canyield a high quality sample efficiently using only a small number Tof larger steps. Also our infusiontraining does not posit a fixed a priori diffusion process that we would learn to reverse. And whilethe distribution of diffusion chain samples of Sohl-Dickstein et al. (2015) is fixed and remains thesame all along the training, the distribution of our infusion chain samples closely follow the modelchain as our model learns. Our proposed infusion sampling technique thus adapts to the changinggenerative model distribution as the learning progresses.Drawing on both Sohl-Dickstein et al. (2015) and the walkback procedure introduced for GSN inAlain et al. (2016), a variational variant of the walkback algorithm was investigated by Goyal et al.(2017) at the same time as our work. It can be understood as a different approach to learning aMarkov transition operator, in which a “heating” diffusion operator is seen as a variational approxi-mate posterior to the forward “cooling” sampling operator with the exact same form and parameters,except for a different temperature.4 E XPERIMENTSWe trained models on several datasets with real-valued examples. We used as prior distributionp(0)a factorial Gaussian whose parameters were set to be the mean and variance for each pixelthrough the training set. Similarly, our models for the transition operators are factorial Gaussians.Their mean and elementwise variance is produced as the output of a neural network that receivesthe previous z(t1)as its input, i.e. p(t)(z(t)ijz(t1)) =N(i(z(t1));2i(z(t1)))whereand2are computed as output vectors of a neural network. We trained such a model using our infusiontraining procedure on MNIST (LeCun & Cortes, 1998), Toronto Face Database (Susskind et al.,2010), CIFAR-10 (Krizhevsky & Hinton, 2009), and CelebA (Liu et al., 2015). For all datasets, theonly preprocessing we did was to scale the integer pixel values down to range [0,1]. The network7Published as a conference paper at ICLR 2017Table 3: Inception score (with standard error) of 50 000 samples generated by models trained onCIFAR-10. We use the models in Salimans et al. (2016) as baseline. ’SP’ corresponds to the bestmodel described by Salimans et al. (2016) trained in a semi-supervised fashion. ’-L’ correspondsto the same model after removing the label in the training process (unsupervised way), ’-MBF’corresponds to a supervised training without minibatch features.Model Real data SP -L -MBF Infusion trainingInception score 11.24.12 8.09.07 4.36.06 3.87.03 4.62.06trained on MNIST and TFD is a MLP composed of two fully connected layers with 1200 unitsusing batch-normalization (Ioffe & Szegedy, 2015)5. The network trained on CIFAR-10 is basedon the same generator as the GANs of Salimans et al. (2016), i.e. one fully connected layer followedby three transposed convolutions. CelebA was trained with the previous network where we addedanother transposed convolution. 
We use rectifier linear units (Glorot et al., 2011) on each layerinside the networks. Each of those networks have two distinct final layers with a number of unitscorresponding to the image size. They use sigmoid outputs, one that predict the mean and the secondthat predict a variance scaled by a scalar (In our case we chose = 0:1) and we add an epsilon= 1e4to avoid an excessively small variance. For each experiment, we trained the networkon 15 steps of denoising with an increasing infusion rate of 1% ( != 0:01;(0)= 0), except onCIFAR-10 where we use an increasing infusion rate of 2% ( != 0:02;(0)= 0) on 20 steps.4.1 N UMERICAL RESULTSSince we can’t compute the exact log-likelihood, the evaluation of our model is not straightforward.However we use the lower bound estimator derived in Section 2.4 to evaluate our model during train-ing and prevent overfitting (see Figure 3). Since most previous published results on non-likelihoodbased models (such as GANs) used a Parzen-window-based estimator (Breuleux et al., 2011), we useit as our first comparison tool, even if it can be misleading (Lucas Theis & Bethge, 2016). Resultsare shown in Table 1, we use 10 000 generated samples and = 0:17. To get a better estimate ofthe log-likelihood, we then computed both the stochastic lower bound and the importance samplingestimate (IS) given in Section 2.4. For the IS estimate in our MNIST-trained model, we used 20000 intermediates samples. In Table 2 we compare our model with the recent Annealed ImportanceSampling results (Wu et al., 2016). Note that following their procedure we add an uniform noiseof 1/256 to the (scaled) test point before evaluation to avoid overevaluating models that might haveoverfitted on the 8 bit quantization of pixel values. Another comparison tool that we used is theInception score as in Salimans et al. (2016) which was developed for natural images and is thusmost relevant for CIFAR-10. Since Salimans et al. (2016) used a GAN trained in a semi-supervisedway with some tricks, the comparison with our unsupervised trained model isn’t straightforward.However, we can see in Table 3 that our model outperforms the traditional GAN trained withoutlabeled data.4.2 S AMPLE GENERATIONAnother common qualitative way to evaluate generative models is to look at the quality of the sam-ples generated by the model. In Figure 4 we show various samples on each of the datasets we used.In order to get sharper images, we use at sampling time more denoising steps than in the trainingtime (In the MNIST case we use 30 denoising steps for sampling with a model trained on 15 denois-ing steps). To make sure that our network didn’t learn to copy the training set, we show in the lastcolumn the nearest training-set neighbor to the samples in the next-to last column. We can see thatour training method allow to generate very sharp and accurate samples on various dataset.5We don’t share batch norm parameters across the network, i.e for each time step we have different param-eters and independent batch statistics.8Published as a conference paper at ICLR 2017Figure 3: Training curves: lower bounds on aver-age log-likelihood on MNIST as infusion trainingprogresses. We also show the lower bounds esti-mated with the Parzen estimation method.Model TestDBM (Bengio et al., 2013) 1382SCAE (Bengio et al., 2013) 1211:6GSN (Bengio et al., 2014) 2141:1Diffusion (Sohl-Dicksteinet al., 2015)2201:9GANs (Goodfellow et al.) 2252GMMN + AE (Li et al.) 
B1IXim1Ex
BJAFbaolg
ICLR.cc/2017/conference/-/paper589/official/review
{"title": "Interesting idea with lacking theoretical motivation and limited empirical evaluation", "rating": "6: Marginally above acceptance threshold", "review": "Summary:\nThis paper introduces a heuristic approach for training a deep directed generative model, where similar to the transition operator of a Markov chain each layer samples from the same conditional distribution. Similar to optimizing a variational lower bound, the approach is to approximate the gradient by replacing the posterior over latents with an alternative distribution. However, the approximating distribution is not updated to improve the lower bound but heuristically constructed in each step. A further difference to variational optimization is that the conditional distributions are optimized greedily rather than following the gradient of the joint log-likelihood.\n\nReview:\nThe proposed approach is interesting and to me seems worth exploring more. Given that there are approaches for training the same class of models which are 1) theoretically more sound, 2) of similar computational complexity, and 3) work well in practice (e.g. Rezende & Mohamed, 2015), I am nevertheless not sure of its potential to generate impact. My bigger concern, however, is that the empirical evaluation is still quite limited.\n\nI appreciate the authors included proper estimates of the log-likelihood. This will enable and encourage future comparisons with this method on continuous MNIST. However, the authors should point out that the numbers taken from Wu et al. (2016) are not representative of the performance of a VAE. (From the paper: \u201cTherefore, the log-likelihood values we report should not be compared directly against networks which have a more flexible observation model.\u201d \u201cSuch observation models can easily achieve much higher log-likelihood scores, [\u2026].\u201d)\n\nComparisons with inpainting results using other methods would have been nice. How practical is the proposed approach compared to other approaches? Similar to the diffusion approach by Sohl-Dickstein et al. (2015), the proposed approach seems to be both efficient and effective for inpainting. Not making this a bigger point and performing the proper evaluations seems like a missed opportunity.\n\nMinor:\n\u2013\u00a0I am missing citations for \u201cordered visible dimension sampling\u201d\n\u2013\u00a0Typos and frequent incorrect use of \\citet and \\citep", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning to Generate Samples from Noise through Infusion Training
["Florian Bordes", "Sina Honari", "Pascal Vincent"]
In this work, we investigate a novel training procedure to learn a generative model as the transition operator of a Markov chain, such that, when applied repeatedly on an unstructured random noise sample, it will denoise it into a sample that matches the target distribution from the training set. The novel training procedure to learn this progressive denoising operation involves sampling from a slightly different chain than the model chain used for generation in the absence of a denoising target. In the training chain we infuse information from the training target example that we would like the chains to reach with a high probability. The thus learned transition operator is able to produce quality and varied samples in a small number of steps. Experiments show competitive results compared to the samples generated with a basic Generative Adversarial Net.
["Deep learning", "Unsupervised Learning"]
https://openreview.net/forum?id=BJAFbaolg
https://openreview.net/pdf?id=BJAFbaolg
https://openreview.net/forum?id=BJAFbaolg&noteId=B1IXim1Ex
Published as a conference paper at ICLR 2017LEARNING TO GENERATE SAMPLES FROM NOISETHROUGH INFUSION TRAININGFlorian Bordes, Sina Honari, Pascal VincentMontreal Institute for Learning Algorithms (MILA)D ́epartement d’Informatique et de Recherche Op ́erationnelleUniversit ́e de Montr ́ealMontr ́eal, Qu ́ebec, Canadaffirstname.lastname@umontreal.ca gABSTRACTIn this work, we investigate a novel training procedure to learn a generative modelas the transition operator of a Markov chain, such that, when applied repeatedly onan unstructured random noise sample, it will denoise it into a sample that matchesthe target distribution from the training set. The novel training procedure to learnthis progressive denoising operation involves sampling from a slightly differentchain than the model chain used for generation in the absence of a denoising tar-get. In the training chain we infuse information from the training target examplethat we would like the chains to reach with a high probability. The thus learnedtransition operator is able to produce quality and varied samples in a small numberof steps. Experiments show competitive results compared to the samples gener-ated with a basic Generative Adversarial Net.1 I NTRODUCTION AND MOTIVATIONTo go beyond the relatively simpler tasks of classification and regression, advancing our ability tolearn good generative models of high-dimensional data appears essential. There are many scenarioswhere one needs to efficiently produce good high-dimensional outputs where output dimensionshave unknown intricate statistical dependencies: from generating realistic images, segmentations,text, speech, keypoint or joint positions, etc..., possibly as an answer to the same, other, or multipleinput modalities. These are typically cases where there is not just one right answer but a variety ofequally valid ones following a non-trivial and unknown distribution. A fundamental ingredient forsuch scenarios is thus the ability to learn a good generative model from data, one from which wecan subsequently efficiently generate varied samples of high quality.Many approaches for learning to generate high dimensional samples have been and are still activelybeing investigated. These approaches can be roughly classified under the following broad categories:Ordered visible dimension sampling (van den Oord et al., 2016; Larochelle & Murray,2011). In this type of auto-regressive approach, output dimensions (or groups of condition-ally independent dimensions) are given an arbitrary fixed ordering, and each is sampledconditionally on the previous sampled ones. This strategy is often implemented using arecurrent network (LSTM or GRU). Desirable properties of this type of strategy are thatthe exact log likelihood can usually be computed tractably, and sampling is exact. Unde-sirable properties follow from the forced ordering, whose arbitrariness feels unsatisfactoryespecially for domains that do not have a natural ordering (e.g. images), and imposes forhigh-dimensional output a long sequential generation that can be slow.Undirected graphical models with multiple layers of latent variables. These make infer-ence, and thus learning, particularly hard and tend to be costly to sample from (Salakhutdi-nov & Hinton, 2009).Directed graphical models trained as variational autoencoders (V AE) (Kingma & Welling,2014; Rezende et al., 2014)Associate Fellow, Canadian Institute For Advanced Research (CIFAR)1Published as a conference paper at ICLR 2017Adversarially-trained generative networks. 
(GAN)(Goodfellow et al., 2014)Stochastic neural networks, i.e. networks with stochastic neurons, trained by an adaptedform of stochastic backpropagationGenerative uses of denoising autoencoders (Vincent et al., 2010) and their generalizationas Generative Stochastic Networks (Alain et al., 2016)Inverting a non-equilibrium thermodynamic slow diffusion process (Sohl-Dickstein et al.,2015)Continuous transformation of a distribution by invertible functions (Dinh et al. (2014), alsoused for variational inference in Rezende & Mohamed (2015))Several of these approaches are based on maximizing an explicit or implicit model log-likelihood ora lower bound of its log-likelihood, but some successful ones are not e.g. GANs. The approach wepropose here is based on the notion of “denoising” and thus takes its root in denoising autoencodersand the GSN type of approaches. It is also highly related to the non-equilibrium thermodynamicsinverse diffusion approach of Sohl-Dickstein et al. (2015). One key aspect that distinguishes thesetypes of methods from others listed above is that sample generation is achieved thanks to a learnedstochastic mapping from input space to input space, rather than from a latent-space to input-space.Specifically, in the present work, we propose to learn to generate high quality samples through aprocess of progressive ,stochastic, denoising , starting from a simple initial “noise” sample generatedin input space from a simple factorial distribution i.e. one that does not take into account anydependency or structure between dimensions. This, in effect, amounts to learning the transitionoperator of a Markov chain operating on input space. Starting from such an initial “noise” input,and repeatedly applying the operator for a small fixed number Tof steps, we aim to obtain a highquality resulting sample, effectively modeling the training data distribution. Our training procedureuses a novel “target-infusion” technique, designed to slightly bias model sampling to move towardsa specific data point during training, and thus provide inputs to denoise which are likely under themodel’s sample generation paths. By contrast with Sohl-Dickstein et al. (2015) which consists ininverting a slow and fixed diffusion process, our infusion chains make a few large jumps and followthe model distribution as the learning progresses.The rest of this paper is structured as follows: Section 2 formally defines the model and trainingprocedure. Section 3 discusses and contrasts our approach with the most related methods fromthe literature. Section 4 presents experiments that validate the approach. Section 5 concludes andproposes future work directions.2 P ROPOSED APPROACH2.1 S ETUPWe are given a finite data set Dcontainingnpoints in Rd, supposed drawn i.i.d from an unknowndistribution q. The data set Dis supposed split into training, validation and test subsets Dtrain,Dvalid,Dtest. We will denote qtrain theempirical distribution associated to the training set, and usexto denote observed samples from the data set. We are interested in learning the parameters of agenerative model pconceived as a Markov Chain from which we can efficiently sample. Note thatwe are interested in learning an operator that will display fast “burn-in” from the initial factorial“noise” distribution, but beyond the initial Tsteps we are not concerned about potential slow mixingor being stuck. 
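Since everything that follows is built on the factorial prior p(0), a minimal sketch of how such a prior can be fit from Dtrain may help ground the notation. The experiments of Section 4 simply use per-pixel means and variances over the training set; the small variance floor eps below is our own assumption rather than a detail taken from the paper.

```python
import numpy as np

def fit_factorial_gaussian_prior(x_train, eps=1e-4):
    """Fit the factorial Gaussian prior p^(0): independent per-dimension means
    and variances estimated over the training set (per pixel in the paper's
    experiments). The eps floor keeps variances of constant pixels positive."""
    x_train = np.asarray(x_train, dtype=np.float64)   # shape (n, d), values scaled to [0, 1]
    return x_train.mean(axis=0), x_train.var(axis=0) + eps
```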
We will first describe the sampling procedure used to sample from a trained model,before explaining our training procedure.2.2 G ENERATIVE MODEL SAMPLING PROCEDUREThe generative model pisdefined as the following sampling procedure:Using a simple factorial distribution p(0)(z(0)), draw an initial sample z(0)p(0), wherez(0)2Rd. Sincep(0)is factorial, the dcomponents of z(0)are independent: p0cannotmodel any dependency structure. z(0)can be pictured as essentially unstructured randomnoise.Repeatedly apply Ttimes a stochastic transition operator p(t)(z(t)jz(t1)), yielding a more“denoised” sample z(t)p(t)(z(t)jz(t1)), where all z(t)2Rd.2Published as a conference paper at ICLR 2017Figure 1: The model sampling chain . Each row shows a sample from p(z(0);:::;z(T))for a modelthat has been trained on MNIST digits. We see how the learned Markov transition operator progres-sively denoises an initial unstructured noise sample. We can also see that there remains ambiguity inthe early steps as to what digit this could become. This ambiguity gets resolved only in later steps.Even after a few initial steps, stochasticity could have made a chain move to a different final digitshape.Output z(T)as the final generated sample. Our generative model distri-bution is thus p(z(T)), the marginal associated to joint p(z(0);:::;z(T)) =p(0)(z(0))QTt=1p(t)(z(t)jz(t1)).In summary, samples from model pare generated, starting with an initial sample from a simpledistributionp(0), by taking the Tthsample along Markov chain z(0)!z(1)!z(2)!:::!z(T)whose transition operator is p(t)(z(t)jz(t1)). We will call this chain the model sampling chain .Figure 1 illustrates this sampling procedure using a model (i.e. transition operator) that was trainedon MNIST. Note that we impose no formal requirement that the chain converges to a stationarydistribution, as we simply read-out z(T)as the samples from our model p. The chain also needs notbe time-homogeneous, as highlighted by notation p(t)for the transitions.The set of parameters of modelpcomprise the parameters of p(0)and the parameters of tran-sition operator p(t)(z(t)jz(t1)). For tractability, learnability, and efficient sampling, these dis-tributions will be chosen factorial, i.e. p(0)(z(0)) =Qdi=1p(0)i(z(0)i)andp(t)(z(t)jz(t1)) =Qdi=1p(t)i(z(t)ijz(t1)). Note that the conditional distribution of an individual component i,p(t)i(z(t)ijz(t1))may however be multimodal, e.g. a mixture in which case p(t)(z(t)jz(t1))wouldbe a product of independent mixtures (conditioned on z(t1)), one per dimension. In our exper-iments, we will take the p(t)(z(t)jz(t1))to be simple diagonal Gaussian yielding a Deep LatentGaussian Model (DLGM) as in Rezende et al. (2014).2.3 I NFUSION TRAINING PROCEDUREWe want to train the parameters of model psuch that samples from Dtrain are likely of being gener-ated under the model sampling chain . Let(0)be the parameters of p(0)and let(t)be the parametersofp(t)(z(t)jz(t1)). Note that parameters (t)fort>0can straightforwardly be shared across timesteps, which we will be doing in practice. Having committed to using (conditionally) factorial dis-tributions for our p(0)(z(0))andp(t)(z(t)jz(t1)), that are both easy to learn and cheap to samplefrom, let us first consider the following greedy stagewise procedure. 
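Before doing so, it may help to make the sampling procedure of Section 2.2 concrete. The sketch below is only an illustration of the chain, not the paper's implementation: transition_net is a hypothetical stand-in for the learned network and is assumed to return the per-dimension mean and variance of the factorial Gaussian transition operator.

```python
import numpy as np

def sample_chain(transition_net, mu0, var0, T, rng=None):
    """Model sampling chain of Section 2.2: draw z^(0) from the factorial prior
    p^(0), then apply the factorial Gaussian transition operator T times."""
    rng = np.random.default_rng() if rng is None else rng
    z = mu0 + np.sqrt(var0) * rng.standard_normal(mu0.shape)      # unstructured noise z^(0)
    chain = [z]
    for _ in range(T):
        mu_t, var_t = transition_net(z)                           # parameters of p^(t)(. | z^(t-1))
        z = mu_t + np.sqrt(var_t) * rng.standard_normal(mu_t.shape)
        chain.append(z)
    return z, chain                                               # z^(T) is read out as the generated sample
```

Note that for the figures in the paper the means of the final Gaussians are visualized rather than actual samples; the sketch above returns true samples.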
We can easily learn p(0)i(z(0))to model the marginal distribution of each component xiof the input, by training it by gradientdescent on a maximum likelihood objective, i.e.(0)= arg maxExqtrainhlogp(0)(x;)i(1)This gives us a first, very crude unstructured (factorial) model of q.3Published as a conference paper at ICLR 2017Having learned this p(0), we might be tempted to then greedily learn the next stage p(1)ofthe chain in a similar fashion, after drawing samples z(0)p(0)in an attempt to learn to“denoise” the sampled z(0)intox. Yet the corresponding following training objective (1)=arg maxExqtrain;z(0)p(0)logp(1)(xjz(0);)makes no sense: xandz(0)are sampled inde-pendently of each other so z(0)contains no information about x, hencep(1)(xjz(0)) =p(1)(x). Somaximizing this second objective becomes essentially the same as what we did when learning p(0).We would learn nothing more. It is essential, if we hope to learn a useful conditional distributionp(1)(xjz(0))that it be trained on particular z(0)containing some information about x. In otherwords, we should not take our training inputs to be samples from p(0)but from a slightly differentdistribution, biased towards containing some information about x. Let us call it q(0)(z(0)jx). Anatural choice for it, if it were possible, would be to take q(0)(z(0)jx) =p(z(0)jz(T)=x)but thisis an intractable inference, as all intermediate z(t)between z(0)andz(T)are effectively latent statesthat we would need to marginalize over. Using a workaround such as a variational or MCMC ap-proach would be a usual fallback. Instead, let us focus on our initial intent of guiding a progressivestochastic denoising, and think if we can come up with a different way to construct q(0)(z(0)jx)andsimilarly for the next steps q(t)i(~z(t)ij~z(t1);x).Eventually, we expect a sequence of samples from Markov chain pto move from initial “noise”towards a specific example xfrom the training set rather than another one, primarily if a samplealong the chain “resembles” xto some degree. This means that the transition operator should learnto pick up a minor resemblance with an xin order to transition to something likely to be evenmore similar to x. In other words, we expect samples along a chain leading to xto both havehigh probability under the transition operator of the chain p(t)(z(t)jz(t1)),andto have some formof at least partial “resemblance” with xlikely to increase as we progress along the chain. Onehighly inefficient way to emulate such a chain of samples would be, for teach step t, to samplemany candidate samples from the transition operator (a conditionally factorial distribution) until wegenerate one that has some minimal “resemblance” to x(e.g. for a discrete space, this resemblancemeasure could be based on their Hamming distance). A qualitatively similar result can be obtainedat a negligible cost by sampling from a factorial distribution that is very close to the one given by thetransition operator, but very slightly biased towards producing something closer to x. Specifically,we can “infuse” a little of xinto our sample by choosing for each input dimension, whether wesample it from the distribution given for that dimension by the transition operator, or whether, witha small probability, we take the value of that dimension from x. Samples from this biased chain, inwhich we slightly “infuse” x, will provide us with the inputs of our input-target training pairs forthe transition operator. 
The target part of the training pairs is simply x.2.3.1 T HE INFUSION CHAINFormally we define an infusion chain ez(0)!ez(1)!:::!ez(T1)whose distributionq(ez(0);:::;ez(T1)jx)will be “close” to the sampling chain z(0)!z(1)!z(2)!:::!z(T1)of modelpin the sense that q(t)(~z(t)j~z(t1);x)will be close to p(t)(z(t)jz(t1)), but will at ev-ery step be slightly biased towards generating samples closer to target x, i.e. xgets progres-sively “infused” into the chain. This is achieved by defining q(0)i(ez(0)ijx)as a mixture betweenp(0)i(with a large mixture weight) and xi, a concentrated unimodal distribution around xi, suchas a Gaussian with small variance (with a small mixture weight)1. Formally q(0)i(~z(0)ijx) =(1(t))p(0)i(~z(0)i) +(t)xi(~z(0)i), where 1(t)and(t)are the mixture weights2. Inother words, when sampling a value for ~z(0)ifromq(0)ithere will be a small probability (0)to pick value close to xi(as sampled from xi) rather than sampling the value from p(0)i. Wecall(t)theinfusion rate . We define the transition operator of the infusion chain similarly as:q(t)i(~z(t)ij~z(t1);x) = (1(t))p(t)i(~z(t)ij~ z(t1)) +(t)xi(~z(t)i).1Note thatxidoes not denote a Dirac-Delta but a Gaussian with small sigma.2In all experiments, we use an increasing schedule (t)=(t1)+!with(0)and!constant. This allowsto build our chain such that in the first steps, we give little information about the target and in the last steps wegive more informations about the target. This forces the network to have less confidence (greater incertitude)at the beginning of the chain and more confidence on the convergence point at the end of the chain.4Published as a conference paper at ICLR 2017Figure 2: Training infusion chains, infused with target x= . This figure shows the evolutionof chainq(z(0);:::;z(30)jx)as training on MNIST progresses. Top row is after network randomweight initialization. Second row is after 1 training epochs, third after 2 training epochs, and so on.Each of these images were at a time provided as the input part of the ( input ,target ) training pairs forthe network. The network was trained to denoise all of them into target 3. We see that as trainingprogresses, the model has learned to pick up the cues provided by target infusion, to move towardsthat target. Note also that a single denoising step, even with target infusion, is not sufficient for thenetwork to produce a sharp well identified digit.2.3.2 D ENOISING -BASED INFUSION TRAINING PROCEDUREFor all x2Dtrain:Sample from the infusion chain ~ z= (~z(0);:::; ~z(T1))q(~z(0);:::; ~z(T1)jx).precisely so: ~z0q(0)(~z(0)jx):::~z(t)q(t)(~z(t)j~z(t1);x):::Perform a gradient step so that plearns to “denoise” every ~z(t)intox.(t) (t)+(t)@logp(t)(xj~z(t1);(t))@(t)where(t)is a scalar learning rate.3As illustrated in Figure 2, the distribution of samples from the infusion chain evolves as trainingprogresses, since this chain remains close to the model sampling chain.2.4 S TOCHASTIC LOG LIKELIHOOD ESTIMATIONThe exact log-likelihood of the generative model implied by our model pis intractable. 
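Before turning to how this intractable quantity can nonetheless be estimated, here is a compact sketch of the infusion chain of Section 2.3.1 together with the denoising-based update of Section 2.3.2. We write it with PyTorch purely for the automatic differentiation (the paper's implementation used Theano); transition_net is again a hypothetical module returning per-dimension Gaussian parameters, sigma_x is an assumed small standard deviation for the infused target values, and a single summed loss replaces the paper's per-step learning-rate scaling, which is a simplification on our part.

```python
import torch

def infusion_chain(x, transition_net, mu0, var0, alphas, sigma_x=0.05):
    """Sample one infusion chain z~(0), ..., z~(T-1) for target x (Sec. 2.3.1).
    With probability alphas[t], a dimension of z~(t) is drawn from a narrow
    Gaussian around the corresponding dimension of x; otherwise it is drawn
    from the model's own factorial Gaussian transition."""
    chain, (mu, var) = [], (mu0, var0)                   # step 0 mixes with the prior p^(0)
    for alpha in alphas:
        model_sample = mu + var.sqrt() * torch.randn_like(x)
        target_sample = x + sigma_x * torch.randn_like(x)
        infuse = (torch.rand_like(x) < alpha).float()    # per-dimension infusion decision
        z = (1.0 - infuse) * model_sample + infuse * target_sample
        chain.append(z)
        mu, var = transition_net(z)                      # parameters of p^(t+1)(. | z~(t))
    return chain

def infusion_training_step(x, transition_net, mu0, var0, alphas, optimizer):
    """One denoising-based infusion training update (Sec. 2.3.2): every z~(t-1)
    in the chain is an input that the operator must learn to denoise into x."""
    with torch.no_grad():                                # the chain itself is just training data
        chain = infusion_chain(x, transition_net, mu0, var0, alphas)
    loss = 0.0
    for z_prev in chain:
        mu, var = transition_net(z_prev)
        log_p = torch.distributions.Normal(mu, var.sqrt()).log_prob(x).sum()
        loss = loss - log_p                              # maximize log p^(t)(x | z~(t-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```

At generation time the same transition_net is simply iterated from prior noise, with no infusion, as in the sampling sketch given earlier.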
The log-probability of an example xcan however be expressed using proposal distribution qas:logp(x) = log Eq(ezjx)p(~ z;x)q(ezjx)(2)Using Jensen’s inequality we can thus derive the following lower bound:logp(x)Eq(ezjx)[logp(~ z;x)logq(ezjx)] (3)where logp(~ z;x) = logp(0)(~z(0)) +PT1t=1logp(t)(~z(t)j~z(t1))+ logp(T)(xj~z(T1))andlogq(~ zjx) = logq(0)(~z(0)jx) +PT1t=1logq(t)(~z(t)j~z(t1);x).3Since we will be sharing parameters between the p(t), in order for the expected larger error gradients onthe earlier transitions not to dominate the parameter updates over the later transitions we used an increasingschedule(t)=0tTfort2f1;:::;Tg.5Published as a conference paper at ICLR 2017A stochastic estimation can easily be obtained by replacing the expectation by an average using afew samples from q(ezjx). We can thus compute a lower bound estimate of the average log likelihoodover training, validation and test data.Similarly in addition to the lower-bound based on Eq.3 we can use the same few samples fromq(ezjx)to get an importance-sampling estimate of the likelihood based on Eq. 24.2.4.1 L OWER -BOUND -BASED INFUSION TRAINING PROCEDURESince we have derived a lower bound on the likelihood, we can alternatively choose to optimize thisstochastic lower-bound directly during training. This alternative lower-bound based infusion train-ing procedure differs only slightly from the denoising-based infusion training procedure by using~z(t)as a training target at step t(performing a gradient step to increase logp(t)(~z(t)j~z(t1);(t)))whereas denoising training always uses xas its target (performing a gradient step to increaselogp(t)(xj~z(t1);(t))). Note that the same reparametrization trick as used in Variational Auto-encoders (Kingma & Welling, 2014) can be used here to backpropagate through the chain’s Gaussiansampling.3 R ELATIONSHIP TO PREVIOUSLY PROPOSED APPROACHES3.1 M ARKOV CHAIN MONTE CARLO FOR ENERGY -BASED MODELSGenerating samples as a repeated application of a Markov transition operator that operates on inputspace is at the heart of Markov Chain Monte Carlo (MCMC) methods. They allow sampling from anenergy-model, where one can efficiently compute the energy or unnormalized negated log probabil-ity (or density) at any point. The transition operator is then derived from an explicit energy functionsuch that the Markov chain prescribed by a specific MCMC method is guaranteed to converge tothe distribution defined by that energy function, as the equilibrium distribution of the chain. MCMCtechniques have thus been used to obtain samples from the energy model, in the process of learningto adjust its parameters.By contrast here we do not learn an explicit energy function, but rather learn directly a parameterizedtransition operator, and define an implicit model distribution based on the result of running theMarkov chain.3.2 V ARIATIONAL AUTO -ENCODERSVariational auto-encoders (V AE) (Kingma & Welling, 2014; Rezende et al., 2014) also start froman unstructured (independent) noise sample and non-linearly transform this into a distribution thatmatches the training data. One difference with our approach is that the V AE typically maps from alower-dimensional space to the observation space. By contrast we learn a stochastic transition oper-ator from input space to input space that we repeat for Tsteps. Another key difference, is that theV AE learns a complex heavily parameterized approximate posterior proposal qwhereas our infusionbasedqcan be understood as a simple heuristic proposal distribution based on p. 
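Returning for a moment to the estimators of Section 2.4, both reduce to simple reductions over the k per-sample log-ratios l = log p(z~, x) - log q(z~ | x). A small sketch, assuming these log-ratios have already been computed for k independent infusion chains, is:

```python
import numpy as np

def log_likelihood_estimates(log_ratios):
    """Combine k values l_i = log p(z~, x) - log q(z~ | x) into the two
    estimates of Section 2.4: the variational lower bound (their average) and
    the importance-sampling estimate, logsumexp(l_1, ..., l_k) - log k."""
    l = np.asarray(log_ratios, dtype=np.float64)
    lower_bound = l.mean()
    m = l.max()
    importance_sampling = m + np.log(np.exp(l - m).mean())   # numerically stable form
    return lower_bound, importance_sampling
```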
Importantly thespecific heuristic we use to infuse xintoqmakes sense precisely because our operator is a map frominput space to input space, and couldn’t be readily applied otherwise. The generative network inRezende et al. (2014) is a Deep Latent Gaussian Model (DLGM) just as ours. But their approximateposteriorqis taken to be factorial, including across all layers of the DLGM, whereas our infusionbasedqinvolves an ordered sampling of the layers, as we sample from q(t)(~z(t)j~z(t1);x).More recent proposals involve sophisticated approaches to sample from better approximate poste-riors, as the work of Salimans et al. (2015) in which Hamiltonian Monte Carlo is combined withvariational inference, which looks very promising, though computationally expensive, and Rezende& Mohamed (2015) that generalizes the use of normalizing flows to obtain a better approximateposterior.4Specifically, the two estimates (lower-bound and IS) start by collecting ksamples from q(ezjx)and com-puting for each the corresponding `= logp(~ z;x)logq(ezjx). The lower-bound estimate is then obtainedby averaging the resulting `1;:::` k, whereas the IS estimate is obtained by taking the logof the averagede`1;:::;e`k(in a numerical stable manner as logsumexp( `1;:::;` k)logk).6Published as a conference paper at ICLR 20173.3 S AMPLING FROM AUTOENCODERS AND GENERATIVE STOCHASTIC NETWORKSEarlier works that propose to directly learn a transition operator resulted from research to turn au-toencoder variants that have a stochastic component, in particular denoising autoencoders (Vincentet al., 2010), into generative models that one can sample from. This development is natural, sincea stochastic auto-encoder isa stochastic transition operator form input space to input space. Gen-erative Stochastic Networks (GSN) (Alain et al., 2016) generalized insights from earlier stochasticautoencoder sampling heuristics (Rifai et al., 2012) into a more formal and general framework.These previous works on generative uses of autoencoders and GSNs attempt to learn a chain whoseequilibrium distribution will fit the training data. Because autoencoders and the chain are typicallystarted from or very close to training data points, they are concerned with the chain mixing quicklybetween modes. By contrast our model chain is always restarted from unstructured noise, and isnot required to reach or even have an equilibrium distribution. Our concern is only what happensduring theT“burn-in” initial steps, and to make sure that it transforms the initial factorial noisedistribution into something that best fits the training data distribution. There are no mixing concernsbeyond those Tinitial steps.A related aspect and limitation of previous denoising autoencoder and GSN approaches is that thesewere mainly “local” around training samples: the stochastic operator explored space starting fromand primarily centered around training examples, and learned based on inputs in these parts of spaceonly. Spurious modes in the generated samples might result from large unexplored parts of spacethat one might encounter while running a long chain.3.4 R EVERSING A DIFFUSION PROCESS IN NON -EQUILIBRIUM THERMODYNAMICSThe approach of Sohl-Dickstein et al. (2015) is probably the closest to the approach we develop here.Both share a similar model sampling chain that starts from unstructured factorial noise. Neitherare concerned about an equilibrium distribution . They are however quite different in several keyaspects: Sohl-Dickstein et al. 
(2015) proceed to invert an explicit diffusion process that starts froma training set example and very slowly destroys its structure to become this random noise, they thenlearn to reverse this process i.e. an inverse diffusion . To maintain the theoretical argument thattheexact reverse process has the same distributional form (e.g. p(x(t1)jx(t))andp(x(t)jx(t1))both factorial Gaussians), the diffusion has to be infinitesimal by construction, hence the proposedapproaches uses chains with thousands of tiny steps. Instead, our aim is to learn an operator that canyield a high quality sample efficiently using only a small number Tof larger steps. Also our infusiontraining does not posit a fixed a priori diffusion process that we would learn to reverse. And whilethe distribution of diffusion chain samples of Sohl-Dickstein et al. (2015) is fixed and remains thesame all along the training, the distribution of our infusion chain samples closely follow the modelchain as our model learns. Our proposed infusion sampling technique thus adapts to the changinggenerative model distribution as the learning progresses.Drawing on both Sohl-Dickstein et al. (2015) and the walkback procedure introduced for GSN inAlain et al. (2016), a variational variant of the walkback algorithm was investigated by Goyal et al.(2017) at the same time as our work. It can be understood as a different approach to learning aMarkov transition operator, in which a “heating” diffusion operator is seen as a variational approxi-mate posterior to the forward “cooling” sampling operator with the exact same form and parameters,except for a different temperature.4 E XPERIMENTSWe trained models on several datasets with real-valued examples. We used as prior distributionp(0)a factorial Gaussian whose parameters were set to be the mean and variance for each pixelthrough the training set. Similarly, our models for the transition operators are factorial Gaussians.Their mean and elementwise variance is produced as the output of a neural network that receivesthe previous z(t1)as its input, i.e. p(t)(z(t)ijz(t1)) =N(i(z(t1));2i(z(t1)))whereand2are computed as output vectors of a neural network. We trained such a model using our infusiontraining procedure on MNIST (LeCun & Cortes, 1998), Toronto Face Database (Susskind et al.,2010), CIFAR-10 (Krizhevsky & Hinton, 2009), and CelebA (Liu et al., 2015). For all datasets, theonly preprocessing we did was to scale the integer pixel values down to range [0,1]. The network7Published as a conference paper at ICLR 2017Table 3: Inception score (with standard error) of 50 000 samples generated by models trained onCIFAR-10. We use the models in Salimans et al. (2016) as baseline. ’SP’ corresponds to the bestmodel described by Salimans et al. (2016) trained in a semi-supervised fashion. ’-L’ correspondsto the same model after removing the label in the training process (unsupervised way), ’-MBF’corresponds to a supervised training without minibatch features.Model Real data SP -L -MBF Infusion trainingInception score 11.24.12 8.09.07 4.36.06 3.87.03 4.62.06trained on MNIST and TFD is a MLP composed of two fully connected layers with 1200 unitsusing batch-normalization (Ioffe & Szegedy, 2015)5. The network trained on CIFAR-10 is basedon the same generator as the GANs of Salimans et al. (2016), i.e. one fully connected layer followedby three transposed convolutions. CelebA was trained with the previous network where we addedanother transposed convolution. 
We use rectifier linear units (Glorot et al., 2011) on each layerinside the networks. Each of those networks have two distinct final layers with a number of unitscorresponding to the image size. They use sigmoid outputs, one that predict the mean and the secondthat predict a variance scaled by a scalar (In our case we chose = 0:1) and we add an epsilon= 1e4to avoid an excessively small variance. For each experiment, we trained the networkon 15 steps of denoising with an increasing infusion rate of 1% ( != 0:01;(0)= 0), except onCIFAR-10 where we use an increasing infusion rate of 2% ( != 0:02;(0)= 0) on 20 steps.4.1 N UMERICAL RESULTSSince we can’t compute the exact log-likelihood, the evaluation of our model is not straightforward.However we use the lower bound estimator derived in Section 2.4 to evaluate our model during train-ing and prevent overfitting (see Figure 3). Since most previous published results on non-likelihoodbased models (such as GANs) used a Parzen-window-based estimator (Breuleux et al., 2011), we useit as our first comparison tool, even if it can be misleading (Lucas Theis & Bethge, 2016). Resultsare shown in Table 1, we use 10 000 generated samples and = 0:17. To get a better estimate ofthe log-likelihood, we then computed both the stochastic lower bound and the importance samplingestimate (IS) given in Section 2.4. For the IS estimate in our MNIST-trained model, we used 20000 intermediates samples. In Table 2 we compare our model with the recent Annealed ImportanceSampling results (Wu et al., 2016). Note that following their procedure we add an uniform noiseof 1/256 to the (scaled) test point before evaluation to avoid overevaluating models that might haveoverfitted on the 8 bit quantization of pixel values. Another comparison tool that we used is theInception score as in Salimans et al. (2016) which was developed for natural images and is thusmost relevant for CIFAR-10. Since Salimans et al. (2016) used a GAN trained in a semi-supervisedway with some tricks, the comparison with our unsupervised trained model isn’t straightforward.However, we can see in Table 3 that our model outperforms the traditional GAN trained withoutlabeled data.4.2 S AMPLE GENERATIONAnother common qualitative way to evaluate generative models is to look at the quality of the sam-ples generated by the model. In Figure 4 we show various samples on each of the datasets we used.In order to get sharper images, we use at sampling time more denoising steps than in the trainingtime (In the MNIST case we use 30 denoising steps for sampling with a model trained on 15 denois-ing steps). To make sure that our network didn’t learn to copy the training set, we show in the lastcolumn the nearest training-set neighbor to the samples in the next-to last column. We can see thatour training method allow to generate very sharp and accurate samples on various dataset.5We don’t share batch norm parameters across the network, i.e for each time step we have different param-eters and independent batch statistics.8Published as a conference paper at ICLR 2017Figure 3: Training curves: lower bounds on aver-age log-likelihood on MNIST as infusion trainingprogresses. We also show the lower bounds esti-mated with the Parzen estimation method.Model TestDBM (Bengio et al., 2013) 1382SCAE (Bengio et al., 2013) 1211:6GSN (Bengio et al., 2014) 2141:1Diffusion (Sohl-Dicksteinet al., 2015)2201:9GANs (Goodfellow et al.) 2252GMMN + AE (Li et al.) 
2822Infusion training (Our) 3121:7Table 1: Parzen-window-based estimator oflower bound on average test log-likelihoodon MNIST (in nats).Table 2: Log-likelihood (in nats) estimated by AIS on MNIST test and training sets as reported inWu et al. (2016) and the log likelihood estimates of our model obtained by infusion training (lastthree lines). Our initial model uses a Gaussian output with diagonal covariance, and we appliedboth our lower bound and importance sampling (IS) log-likelihood estimates to it. Since Wu et al.(2016) used only an isotropic output observation model, in order to be comparable to them, we alsoevaluated our model after replacing the output by an isotropic Gaussian output (same fixed variancefor all pixels). Average and standard deviation over 10 repetitions of the evaluation are provided.Note that AIS might provide a higher evaluation of likelihood than our current IS estimate, but thisis left for future work.Model Test log-likelihood (1000ex) Train log-likelihood (100ex)V AE-50 (AIS) 991:4356:477 1272:5866:759GAN-50 (AIS) 627:2978:813 620:49831:012GMMN-50 (AIS) 593:4728:591 571:80330:864V AE-10 (AIS) 705:3757:411 780:19619:147GAN-10 (AIS) 328:7725:538 318:94822:544GMMN-10 (AIS) 346:6795:860 345:17619:893Infusion training + isotropic(IS estimate)413:2970:460 450:6951:617Infusion training (ISestimate)1836:270:551 1837:5601:074Infusion training (lowerbound)1350:5980:079 1230:3050:5329Published as a conference paper at ICLR 2017(a) MNIST (b) Toronto Face Dataset(c) CIFAR-10 (d) CelebAFigure 4: Mean predictions by our models on 4 different datasets. The rightmost column shows thenearest training example to the samples in the next-to last column.10Published as a conference paper at ICLR 2017Figure 5: Inpainting on CelebA dataset. In each row, from left to right: an image form the testset; the same image with bottom half randomly sampled from our factorial prior. Then several endsamples from our sampling chain in which the top part is clamped. The generated samples showthat our model is able to generate a varied distribution of coherent face completions.4.3 I NPAINTINGAnother method to evaluate a generative model is inpainting . It consists of providing only a partialimage from the test set and letting the model generate the missing part. In one experiment, weprovide only the top half of CelebA test set images and clamp that top half throughout the samplingchain. We restart sampling from our model several times, to see the variety in the distribution of thebottom part it generates. Figure 5 shows that the model is able to generate a varied set of bottomhalves, all consistent with the same top half, displaying different type of smiles and expression. Wealso see that the generated bottom halves transfer some information about the provided top half ofthe images (such as pose and more or less coherent hair cut).5 C ONCLUSION AND FUTURE WORKWe presented a new training procedure that allows a neural network to learn a transition operatorof a Markov chain. Compared to the previously proposed method of Sohl-Dickstein et al. (2015)based on inverting a slow diffusion process, we showed empirically that infusion training requiresfar fewer denoising steps, and appears to provide more accurate models. Currently, many success-ful generative models, judged on sample quality, are based on GAN architectures. However theserequire to use two different networks, a generator and a discriminator, whose balance is reputed del-icate to adjust, which can be source of instability during training. 
Our method avoids this problemby using only a single network and a simpler training objective.Denoising-based infusion training optimizes a heuristic surrogate loss for which we cannot (yet)provide theoretical guarantees, but we empirically verified that it results in increasing log-likelihoodestimates. On the other hand the lower-bound-based infusion training procedure does maximize anexplicit variational lower-bound on the log-likelihood. While we have run most of our experimentswith the former, we obtained similar results on the few problems we tried with lower-bound-basedinfusion training.Future work shall further investigate the relationship and quantify the compromises achieved withrespect to other Markov Chain methods including Sohl-Dickstein et al. (2015); Salimans et al. (2015)11Published as a conference paper at ICLR 2017and also to powerful inference methods such as Rezende & Mohamed (2015). As future work, wealso plan to investigate the use of more sophisticated neural net generators, similar to DCGAN’s(Radford et al., 2016) and to extend the approach to a conditional generator applicable to structuredoutput problems.ACKNOWLEDGMENTSWe would like to thank the developers of Theano (Theano Development Team, 2016) for making thislibrary available to build on, Compute Canada and Nvidia for their computation resources, NSERCand Ubisoft for their financial support, and three ICLR anonymous reviewers for helping us improveour paper.REFERENCESGuillaume Alain, Yoshua Bengio, Li Yao, Jason Yosinski, Eric Thibodeau-Laufer, Saizheng Zhang,and Pascal Vincent. GSNs: generative stochastic networks. Information and Inference , 2016. doi:10.1093/imaiai/iaw003.Yoshua Bengio, Gr ́egoire Mesnil, Yann Dauphin, and Salah Rifai. Better mixing via deep represen-tations. In Proceedings of the 30th International Conference on Machine Learning (ICML 2013) ,2013.Yoshua Bengio, Eric Laufer, Guillaume Alain, and Jason Yosinski. Deep generative stochasticnetworks trainable by backprop. In Proceedings of the 31st International Conference on MachineLearning (ICML 2014) , pp. 226–234, 2014.Olivier Breuleux, Yoshua Bengio, and Pascal Vincent. Quickly generating representative samplesfrom an rbm-derived process. Neural Computation , 23(8):2058–2073, 2011.Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: Non-linear independent components esti-mation. arXiv preprint arXiv:1410.8516 , 2014.Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. InAistats , volume 15, pp. 275, 2011.Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling,C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Pro-cessing Systems 27 , pp. 2672–2680. Curran Associates, Inc., 2014.Anirudh Goyal, Nan Rosemary Ke, Alex Lamb, and Yoshua Bengio. The variational walkbackalgorithm. Technical report, Universit ́e de Montr ́eal, 2017. URL https://openreview.net/forum?id=rkpdnIqlx . On openreview.net.Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training byreducing internal covariate shift. Proceedings of The 32nd International Conference on MachineLearning , pp. 448–456, 2015.Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In Proceedings of the 2ndInternational Conference on Learning Representations (ICLR 2014) , 2014.Alex. Krizhevsky and Geoffrey E Hinton. 
Learning multiple layers of features from tiny images.Master’s thesis, Department of Computer Science, University of Toronto , 2009.Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In AISTATS ,volume 1, pp. 2, 2011.Yann LeCun and Corinna Cortes. The mnist database of handwritten digits, 1998.Yujia Li, Kevin Swersky, and Richard Zemel. Generative moment matching networks. In Interna-tional Conference on Machine Learning (ICML 2015) , pp. 1718–1727, 2015.12Published as a conference paper at ICLR 2017Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild.InProceedings of International Conference on Computer Vision (ICCV 2015) , December 2015.A ̈aron van den Oord Lucas Theis and Matthias Bethge. A note on the evaluation of generativemodels. In Proceedings of the 4th International Conference on Learning Representations (ICLR2016) , 2016.Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deepconvolutional generative adversarial networks. International Conference on Learning Represen-tations , 2016.Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In Proceedingsof the 32nd International Conference on Machine Learning (ICML 2015) , pp. 1530–1538, 2015.Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation andapproximate inference in deep generative models. In Proceedings of the 31th International Con-ference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014 , pp. 1278–1286,2014. URL http://jmlr.org/proceedings/papers/v32/rezende14.html .Salah Rifai, Yoshua Bengio, Yann Dauphin, and Pascal Vincent. A generative process for sam-pling contractive auto-encoders. In Proceedings of the 29th International Conference on MachineLearning (ICML 2012) , 2012.Ruslan Salakhutdinov and Geoffrey E Hinton. Deep boltzmann machines. In AISTATS , volume 1,pp. 3, 2009.Tim Salimans, Diederik Kingma, and Max Welling. Markov chain monte carlo and variationalinference: Bridging the gap. In Proceedings of The 32nd International Conference on MachineLearning , pp. 1218–1226, 2015.Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.Improved techniques for training gans. CoRR , abs/1606.03498, 2016.Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep Unsuper-vised Learning using Nonequilibrium Thermodynamics. In Proceedings of the 32nd InternationalConference on Machine Learning , volume 37 of JMLR Proceedings , pp. 2256–2265. JMLR.org,2015.Josh M Susskind, Adam K Anderson, and Geoffrey E Hinton. The toronto face database. Depart-ment of Computer Science, University of Toronto, Toronto, ON, Canada, Tech. Rep , 3, 2010.Theano Development Team. Theano: A Python framework for fast computation of mathematicalexpressions. arXiv e-prints , abs/1605.02688, may 2016.A ̈aron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks.InProceedings of the 33nd International Conference on Machine Learning (ICML 2016) , pp.1747–1756, 2016.Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol.Stacked denoising autoencoders: Learning useful representations in a deep network with a localdenoising criterion. Journal of Machine Learning Research , 11(Dec):3371–3408, 2010.Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov, and Roger B. Grosse. On the quantitative analysisof decoder-based generative models. 
CoRR , abs/1611.04273, 2016.13Published as a conference paper at ICLR 2017A D ETAILS ON THE EXPERIMENTSA.1 MNIST EXPERIMENTSWe show the impact of the infusion rate (t)=(t1)+!for different numbers of training stepson the lower bound estimate of log-likelihood on the Validation set of MNIST in Figure 6. We alsoshow the quality of generated samples and the lower bound evaluated on the test set in Table 4. Eachexperiment in Table 4 uses the corresponding models of Figure 6 that obtained the best lower boundvalue on the validation set. We use the same network architecture as described in Section 4, i.e twofully connected layers with Relu activations composed of 1200 units followed by two distinct fullyconnected layers composed of 784 units, one that predicts the means, the other one that predictsthe variances. Each mean and variance is associated with one pixel. All of the the parameters ofthe model are shared across different steps except for the batch norm parameters. During training,we use the batch statistics of the current mini-batch in order to evaluate our model on the train andvalidation sets. At test time (Table 4), we first compute the batch statistics over the entire train setfor each step and then use the computed statistics to evaluate our model on the test test.We did some experiments to evaluate the impact of or!in(t)=(t1)+!. Figure 6 showsthat as the number of steps increases, the optimal value for infusion rate decreases. Therefore, if wewant to use many steps, we should have a small infusion rate. These conclusions are valid for bothincreasing and constant infusion rate. For example, the optimal for a constant infusion rate, inFigure 6e with 10 steps is 0.08 and in Figure 6f with 15 steps is 0.06. If the number of steps is notenough or the infusion rate is too small, the network will not be able to learn the target distributionas shown in the first rows of all subsection in Table 4.In order to show the impact of having a constant versus an increasing infusion rate, we show in Fig-ure 7 the samples created by infused and sampling chains. We observe that having a small infusionrate over many steps ensures a slow blending of the model distribution into the target distribution.In Table 4, we can see high lower bound values on the test set with few steps even if the modelcan’t generate samples that are qualitatively satisfying. These results indicate that we can’t rely onthe lower bound as the only evaluation metric and this metric alone does not necessarily indicatethe suitability of our model to generated good samples. However, it is still a useful tool to preventoverfitting (the networks in Figure 6e and 6f overfit when the infusion rate becomes too high).Concerning the samples quality, we observe that having a small infusion rate over an adequatenumber of steps leads to better samples.A.2 I NFUSION AND MODEL SAMPLING CHAINS ON NATURAL IMAGES DATASETSIn order to show the behavior of our model trained by Infusion on more complex datasets, weshow in Figure 8 chains on CIFAR-10 dataset and in Figure 9 chains on CelebA dataset. In eachFigure, the first sub-figure shows the chains infused by some test examples and the second sub-figure shows the model sampling chains. In the experiment on CIFAR-10, we use an increasingschedule(t)=(t1)+ 0:02with(0)= 0and 20 infusion steps (this corresponds to the trainingparameters). 
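For reference, the infusion-rate schedules used in these appendix experiments are plain arithmetic progressions. Under the reading that the chain uses rates alpha(0), ..., alpha(T-1), a small helper covering both the increasing and the constant case could be:

```python
def infusion_rate_schedule(T, omega, alpha0=0.0):
    """Rates alpha^(0), ..., alpha^(T-1) for the schedule alpha^(t) = alpha^(t-1) + omega.
    Setting omega = 0 recovers the constant schedules of Figures 6e and 6f."""
    return [alpha0 + t * omega for t in range(T)]

# infusion_rate_schedule(T=20, omega=0.02) matches the CIFAR-10 setting above,
# and infusion_rate_schedule(T=15, omega=0.01) the CelebA setting mentioned next.
```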
In the experiment on CelebA, we use an increasing schedule (t)=(t1)+ 0:01with(0)= 0and 15 infusion steps.14Published as a conference paper at ICLR 2017(a) Networks trained with 1 infusion step. Each in-fusion rate in the figure corresponds to (0). Sincewe have only one step, we have != 0.(b) Networks trained with 5 infusion steps. Eachinfusion rate corresponds to !. We set(0)= 0.(c) Networks trained with 10 infusion steps. Eachinfusion rate corresponds to !. We set(0)= 0.(d) Networks trained with 15 infusion steps. Eachinfusion rate corresponds to !. We set(0)= 0.(e) Networks trained with 10 infusion steps. In thisexperiment we use the same infusion rate for eachtime step such that 8t(t)=(0). Each infusionrate in the figure corresponds to different values for(0).(f) Networks trained with 15 infusion steps. In thisexperiment we use the same infusion rate for eachtime step such that 8t(t)=(0). Each infu-sion rate in the figure corresponds to different values(0).Figure 6: Training curves on MNIST showing the log likelihood lower bound (nats) for differentinfusion rate schedules and different number of steps. We use an increasing schedule (t)=(t1)+!. In each sub-figure for a fixed number of steps, we show the lower bound for different infusionrates.15Published as a conference paper at ICLR 2017Table 4: Infusion rate impact on the lower bound log-likelihood (test set) and the samples generatedby a network trained with different number of steps. Each sub-table corresponds to a fixed numberof steps. Each row corresponds to a different infusion rate, where we show its lower bound and alsoits corresponding generated samples from the trained model. Note that for images, we show themean of the Gaussian distributions instead of the true samples. As the number of steps increases, theoptimal infusion rate decreases. 
Higher number of steps contributes to better qualitative samples, asthe best samples can be seen with 15 steps using = 0:01.(a) infusion rate impact on the lower bound log-likelihood (test set) and the samples generated by a networktrained with 1 step.infusion rate Lower bound (test) Means of the model0.0 824.340.05 885.350.1 967.250.15 1063.270.2 1115.150.25 1158.810:3 1209:390.4 1209.160.5 1132.050.6 1008.600.7 854.400.9 -161.37(b) infusion rate impact on the lower bound log-likelihood (test set) and the samples generated by a networktrained with 5 stepsinfusion rate Lower bound (test)0.0 823.810.01 910.190.03 1142.430.05 1303.190.08 1406.380:1 1448:660.15 1397.410.2 1262.57(c) infusion rate impact on the lower bound log-likelihood (test set) and the samples generated by a networktrained with 10 stepsinfusion rate Lower bound (test)0.0 824.420.01 1254.070:02 1389:120.03 1366:680.04 1223.470.05 1057.430.05 846.730.07 658.66(d) infusion rate impact on the lower bound log-likelihood (test set) and the samples generated by a networktrained with 15 stepsinfusion rate Lower bound (test)0.0 824.500:01 1351:030.02 1066.600.03 609.100.04 876.930.05 -479.690.06 -941.7816Published as a conference paper at ICLR 2017(a) Chains infused with MNIST test set samplesby a constant rate ( (0)= 0:05; != 0) in 15steps.(b) Model sampling chains on MNIST using a net-work trained with a constant infusion rate ( (0)=0:05; != 0) in 15 steps.(c) Chains infused with MNIST test set samplesby an increasing rate ( (0)= 0:0; != 0:01) in15 steps.(d) Model sampling chains on MNIST using anetwork trained with an increasing infusion rate((0)= 0:0; != 0:01) in 15 steps.Figure 7: Comparing samples of constant infusion rate versus an increasing infusion rate on infusedand generated chains. The models are trained on MNIST in 15 steps. Note that having an increasinginfusion rate with a small value for !allows a slow convergence to the target distribution. In contrasthaving a constant infusion rate leads to a fast convergence to a specific point. Increasing infusionrate leads to more visually appealing samples. We observe that having an increasing infusion rateover many steps ensures a slow blending of the model distribution into the target distribution.17Published as a conference paper at ICLR 2017(a) Infusion chains on CIFAR-10. Last column corresponds to the target used to infuse the chain.(b) Model sampling chains on CIFAR-10Figure 8: Infusion chains (Sub-Figure 8a) and model sampling chains (Sub-Figure 8b) on CIFAR-10.18Published as a conference paper at ICLR 2017(a) Infusion chains on CelebA. Last column corresponds to the target used to infuse the chain.(b) Model sampling chains on CelebAFigure 9: Infusion chains (Sub-Figure 9a) and model sampling chains (Sub-Figure 9b) on CelebA.19
H1PYkpbEx
BJAFbaolg
ICLR.cc/2017/conference/-/paper589/official/review
{"title": "Clearly written paper pursuing an interesting idea. Some shortcomings with respect to the evaluation and comparison to prior work", "rating": "7: Good paper, accept", "review": "The paper presents a method for training a generative model via an iterative denoising procedure. The denoising process is initialized with a random sample from a crude approximation to the data distribution and produces a high quality sample via multiple denoising steps. Training is performed by setting-up a Markov chain that slowly blends propositions from the current denoising model with a real example from the data distribution; using this chain the current denoising model is updated towards reproducing the changed, \"better\", samples from the blending process.\n\nThis is a clearly written paper that considers an interesting approach for training generative models. I was intrigued by the simplicity of the presented approach and really enjoyed reading the paper.\nThe proposed method is novel although it has clear ties to other recent work aiming to use denoising models for sampling from distributions such as the work by Sohl-Dickstein and the recent work on using DAEs as generative models.\nI think this general direction of research is important. The proposed procedure takes inspiration from the perspective of generating samples by minimizing an energy function via transitions along a Markov chain and, if successful, it can potentially sidestep many problems of current procedures for training directed generative models such as:\n- convergence and mode coverage problems as in generative adversarial networks\n- problems with modeling multi-modal distributions which can arise when a too restrictive approximate inference model is paired with a powerful generative model\n\nThat being said, another method that seems promising for addressing these issues that also has some superficially similarity to the presented work is the idea of combining Hamiltonian Monte Carlo inference with variational inference as in [1]. As such I am not entirely convinced that the method presented here will be able to perform better than the mentioned paper; although it might be simpler to train. Similarly, although I agree that using a MCMC chain to generate samples via a MC-EM like procedure is likely very costly I am not convinced such a procedure won't at least also work reasonably well for the simple MNIST example. In general a more direct comparison between different inference methods using an MCMC chain like procedure would be nice to have but I understand that this is perhaps out of the scope of this paper. One thing that I would have expected, however, is a direct comparison to the procedure from Sohl-Dickstein in terms of sampling steps and generation quality as it is so directly related.\n\nOther major points (good and bad):\n- Although in general the method is explained well some training details are missing. Most importantly it is never mentioned how alpha or omega are set (I am assuming omega is 0.01 as that is the increase mentioned in the experimental setup). It is also unclear how alpha affects the capabilities of the generator. While it intuitively seems reasonable to use a small alpha over many steps to ensure slow blending of the two distributions it is not clear how necessary this is or at what point the procedure would break (I assume alpha = 1 won't work as the generator then would have to magically denoise a sample from the relatively uninformative draw from p0 ?). 
The authors do mention in one of the figure captions that the denoising model does not produce good samples in only 1-2 steps but that might also be an artifact of training the model with small alpha (at least I see no a priori reason for this). More experiments should be carried out here.\n- No infusion chains or generating chains are shown for any of the more complicated data distributions, this is unfortunate as I feel these would be interesting to look at.\n- The paper does a good job at evaluating the model with respect to several different metrics. The bound on the log-likelihood is nice to have as well!\n- Unfortunately the current approach does not come with any theoretical guarantees. It is unclear for what choices of alpha the procedure will work and whether there is some deeper connection to MCMC sampling or energy based models. In my eyes this does not subtract from the value of the paper but would perhaps be worth a short sentence in the conclusion.\n\nMinor points:\n- The second reference seems broken\n- Figure 3 starts at 100 epochs and, as a result, contains little information. Perhaps it would be more useful to show the complete training procedure and put the x-axis on a log-scale ?\n- The explanation regarding the convolutional networks you use makes no sense to me. You write that you use the same structure as in the \"Improved GANs\" paper which, unlike your model, generates samples from a fixed length random input. I thus suppose you don't really use a generator with 1 fully connected network followed by up-convolutions but rather have several stages of convolutions followed by a fully connected layer and then up-convolutions ?\n- The choice of parametrizing the variance via a sigmoid output unit is somewhat unusual, was there a specific reason for this choice ?\n- footnote 1 contains errors: \"This allow to\" -> \"allows to\", \"few informations\" -> \"little information\". \"This force the network\" -> \"forces\"\n- Page 1 error: etc...\n- Page 4 error: \"operator should to learn\"\n\n[1] Markov Chain Monte Carlo and Variational Inference: Bridging the Gap, Tim Salimans and Diedrik P. Kingma and Max Welling, ICML 2015\n\n\n>>> Update <<<<\nCopied here from my response below: \n\nI believe the response of the authors clarifies all open issues. I strongly believe the paper should be accepted to the conference. The only remaining issue I have with the paper is that, as the authors acknowledge the architecture of the generator is likely highly sub-optimal and might hamper the performance of the method in the evaluation. This however does not at all subtract from any of the main points of the paper.\n\nI am thus keeping my score as a clear accept. I want to emphasize that I believe the paper should be published (just in case the review process results in some form of cut-off threshold that is high due to overall \"inflated\" review scores).\n", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Learning to Generate Samples from Noise through Infusion Training
["Florian Bordes", "Sina Honari", "Pascal Vincent"]
In this work, we investigate a novel training procedure to learn a generative model as the transition operator of a Markov chain, such that, when applied repeatedly on an unstructured random noise sample, it will denoise it into a sample that matches the target distribution from the training set. The novel training procedure to learn this progressive denoising operation involves sampling from a slightly different chain than the model chain used for generation in the absence of a denoising target. In the training chain we infuse information from the training target example that we would like the chains to reach with a high probability. The thus learned transition operator is able to produce quality and varied samples in a small number of steps. Experiments show competitive results compared to the samples generated with a basic Generative Adversarial Net.
["Deep learning", "Unsupervised Learning"]
https://openreview.net/forum?id=BJAFbaolg
https://openreview.net/pdf?id=BJAFbaolg
https://openreview.net/forum?id=BJAFbaolg&noteId=H1PYkpbEx
Published as a conference paper at ICLR 2017LEARNING TO GENERATE SAMPLES FROM NOISETHROUGH INFUSION TRAININGFlorian Bordes, Sina Honari, Pascal VincentMontreal Institute for Learning Algorithms (MILA)D ́epartement d’Informatique et de Recherche Op ́erationnelleUniversit ́e de Montr ́ealMontr ́eal, Qu ́ebec, Canadaffirstname.lastname@umontreal.ca gABSTRACTIn this work, we investigate a novel training procedure to learn a generative modelas the transition operator of a Markov chain, such that, when applied repeatedly onan unstructured random noise sample, it will denoise it into a sample that matchesthe target distribution from the training set. The novel training procedure to learnthis progressive denoising operation involves sampling from a slightly differentchain than the model chain used for generation in the absence of a denoising tar-get. In the training chain we infuse information from the training target examplethat we would like the chains to reach with a high probability. The thus learnedtransition operator is able to produce quality and varied samples in a small numberof steps. Experiments show competitive results compared to the samples gener-ated with a basic Generative Adversarial Net.1 I NTRODUCTION AND MOTIVATIONTo go beyond the relatively simpler tasks of classification and regression, advancing our ability tolearn good generative models of high-dimensional data appears essential. There are many scenarioswhere one needs to efficiently produce good high-dimensional outputs where output dimensionshave unknown intricate statistical dependencies: from generating realistic images, segmentations,text, speech, keypoint or joint positions, etc..., possibly as an answer to the same, other, or multipleinput modalities. These are typically cases where there is not just one right answer but a variety ofequally valid ones following a non-trivial and unknown distribution. A fundamental ingredient forsuch scenarios is thus the ability to learn a good generative model from data, one from which wecan subsequently efficiently generate varied samples of high quality.Many approaches for learning to generate high dimensional samples have been and are still activelybeing investigated. These approaches can be roughly classified under the following broad categories:Ordered visible dimension sampling (van den Oord et al., 2016; Larochelle & Murray,2011). In this type of auto-regressive approach, output dimensions (or groups of condition-ally independent dimensions) are given an arbitrary fixed ordering, and each is sampledconditionally on the previous sampled ones. This strategy is often implemented using arecurrent network (LSTM or GRU). Desirable properties of this type of strategy are thatthe exact log likelihood can usually be computed tractably, and sampling is exact. Unde-sirable properties follow from the forced ordering, whose arbitrariness feels unsatisfactoryespecially for domains that do not have a natural ordering (e.g. images), and imposes forhigh-dimensional output a long sequential generation that can be slow.Undirected graphical models with multiple layers of latent variables. These make infer-ence, and thus learning, particularly hard and tend to be costly to sample from (Salakhutdi-nov & Hinton, 2009).Directed graphical models trained as variational autoencoders (V AE) (Kingma & Welling,2014; Rezende et al., 2014)Associate Fellow, Canadian Institute For Advanced Research (CIFAR)1Published as a conference paper at ICLR 2017Adversarially-trained generative networks. 
(GAN)(Goodfellow et al., 2014)Stochastic neural networks, i.e. networks with stochastic neurons, trained by an adaptedform of stochastic backpropagationGenerative uses of denoising autoencoders (Vincent et al., 2010) and their generalizationas Generative Stochastic Networks (Alain et al., 2016)Inverting a non-equilibrium thermodynamic slow diffusion process (Sohl-Dickstein et al.,2015)Continuous transformation of a distribution by invertible functions (Dinh et al. (2014), alsoused for variational inference in Rezende & Mohamed (2015))Several of these approaches are based on maximizing an explicit or implicit model log-likelihood ora lower bound of its log-likelihood, but some successful ones are not e.g. GANs. The approach wepropose here is based on the notion of “denoising” and thus takes its root in denoising autoencodersand the GSN type of approaches. It is also highly related to the non-equilibrium thermodynamicsinverse diffusion approach of Sohl-Dickstein et al. (2015). One key aspect that distinguishes thesetypes of methods from others listed above is that sample generation is achieved thanks to a learnedstochastic mapping from input space to input space, rather than from a latent-space to input-space.Specifically, in the present work, we propose to learn to generate high quality samples through aprocess of progressive ,stochastic, denoising , starting from a simple initial “noise” sample generatedin input space from a simple factorial distribution i.e. one that does not take into account anydependency or structure between dimensions. This, in effect, amounts to learning the transitionoperator of a Markov chain operating on input space. Starting from such an initial “noise” input,and repeatedly applying the operator for a small fixed number Tof steps, we aim to obtain a highquality resulting sample, effectively modeling the training data distribution. Our training procedureuses a novel “target-infusion” technique, designed to slightly bias model sampling to move towardsa specific data point during training, and thus provide inputs to denoise which are likely under themodel’s sample generation paths. By contrast with Sohl-Dickstein et al. (2015) which consists ininverting a slow and fixed diffusion process, our infusion chains make a few large jumps and followthe model distribution as the learning progresses.The rest of this paper is structured as follows: Section 2 formally defines the model and trainingprocedure. Section 3 discusses and contrasts our approach with the most related methods fromthe literature. Section 4 presents experiments that validate the approach. Section 5 concludes andproposes future work directions.2 P ROPOSED APPROACH2.1 S ETUPWe are given a finite data set Dcontainingnpoints in Rd, supposed drawn i.i.d from an unknowndistribution q. The data set Dis supposed split into training, validation and test subsets Dtrain,Dvalid,Dtest. We will denote qtrain theempirical distribution associated to the training set, and usexto denote observed samples from the data set. We are interested in learning the parameters of agenerative model pconceived as a Markov Chain from which we can efficiently sample. Note thatwe are interested in learning an operator that will display fast “burn-in” from the initial factorial“noise” distribution, but beyond the initial Tsteps we are not concerned about potential slow mixingor being stuck. 
We will first describe the sampling procedure used to sample from a trained model,before explaining our training procedure.2.2 G ENERATIVE MODEL SAMPLING PROCEDUREThe generative model pisdefined as the following sampling procedure:Using a simple factorial distribution p(0)(z(0)), draw an initial sample z(0)p(0), wherez(0)2Rd. Sincep(0)is factorial, the dcomponents of z(0)are independent: p0cannotmodel any dependency structure. z(0)can be pictured as essentially unstructured randomnoise.Repeatedly apply Ttimes a stochastic transition operator p(t)(z(t)jz(t1)), yielding a more“denoised” sample z(t)p(t)(z(t)jz(t1)), where all z(t)2Rd.2Published as a conference paper at ICLR 2017Figure 1: The model sampling chain . Each row shows a sample from p(z(0);:::;z(T))for a modelthat has been trained on MNIST digits. We see how the learned Markov transition operator progres-sively denoises an initial unstructured noise sample. We can also see that there remains ambiguity inthe early steps as to what digit this could become. This ambiguity gets resolved only in later steps.Even after a few initial steps, stochasticity could have made a chain move to a different final digitshape.Output z(T)as the final generated sample. Our generative model distri-bution is thus p(z(T)), the marginal associated to joint p(z(0);:::;z(T)) =p(0)(z(0))QTt=1p(t)(z(t)jz(t1)).In summary, samples from model pare generated, starting with an initial sample from a simpledistributionp(0), by taking the Tthsample along Markov chain z(0)!z(1)!z(2)!:::!z(T)whose transition operator is p(t)(z(t)jz(t1)). We will call this chain the model sampling chain .Figure 1 illustrates this sampling procedure using a model (i.e. transition operator) that was trainedon MNIST. Note that we impose no formal requirement that the chain converges to a stationarydistribution, as we simply read-out z(T)as the samples from our model p. The chain also needs notbe time-homogeneous, as highlighted by notation p(t)for the transitions.The set of parameters of modelpcomprise the parameters of p(0)and the parameters of tran-sition operator p(t)(z(t)jz(t1)). For tractability, learnability, and efficient sampling, these dis-tributions will be chosen factorial, i.e. p(0)(z(0)) =Qdi=1p(0)i(z(0)i)andp(t)(z(t)jz(t1)) =Qdi=1p(t)i(z(t)ijz(t1)). Note that the conditional distribution of an individual component i,p(t)i(z(t)ijz(t1))may however be multimodal, e.g. a mixture in which case p(t)(z(t)jz(t1))wouldbe a product of independent mixtures (conditioned on z(t1)), one per dimension. In our exper-iments, we will take the p(t)(z(t)jz(t1))to be simple diagonal Gaussian yielding a Deep LatentGaussian Model (DLGM) as in Rezende et al. (2014).2.3 I NFUSION TRAINING PROCEDUREWe want to train the parameters of model psuch that samples from Dtrain are likely of being gener-ated under the model sampling chain . Let(0)be the parameters of p(0)and let(t)be the parametersofp(t)(z(t)jz(t1)). Note that parameters (t)fort>0can straightforwardly be shared across timesteps, which we will be doing in practice. Having committed to using (conditionally) factorial dis-tributions for our p(0)(z(0))andp(t)(z(t)jz(t1)), that are both easy to learn and cheap to samplefrom, let us first consider the following greedy stagewise procedure. 
We can easily learn p(0)i(z(0))to model the marginal distribution of each component xiof the input, by training it by gradientdescent on a maximum likelihood objective, i.e.(0)= arg maxExqtrainhlogp(0)(x;)i(1)This gives us a first, very crude unstructured (factorial) model of q.3Published as a conference paper at ICLR 2017Having learned this p(0), we might be tempted to then greedily learn the next stage p(1)ofthe chain in a similar fashion, after drawing samples z(0)p(0)in an attempt to learn to“denoise” the sampled z(0)intox. Yet the corresponding following training objective (1)=arg maxExqtrain;z(0)p(0)logp(1)(xjz(0);)makes no sense: xandz(0)are sampled inde-pendently of each other so z(0)contains no information about x, hencep(1)(xjz(0)) =p(1)(x). Somaximizing this second objective becomes essentially the same as what we did when learning p(0).We would learn nothing more. It is essential, if we hope to learn a useful conditional distributionp(1)(xjz(0))that it be trained on particular z(0)containing some information about x. In otherwords, we should not take our training inputs to be samples from p(0)but from a slightly differentdistribution, biased towards containing some information about x. Let us call it q(0)(z(0)jx). Anatural choice for it, if it were possible, would be to take q(0)(z(0)jx) =p(z(0)jz(T)=x)but thisis an intractable inference, as all intermediate z(t)between z(0)andz(T)are effectively latent statesthat we would need to marginalize over. Using a workaround such as a variational or MCMC ap-proach would be a usual fallback. Instead, let us focus on our initial intent of guiding a progressivestochastic denoising, and think if we can come up with a different way to construct q(0)(z(0)jx)andsimilarly for the next steps q(t)i(~z(t)ij~z(t1);x).Eventually, we expect a sequence of samples from Markov chain pto move from initial “noise”towards a specific example xfrom the training set rather than another one, primarily if a samplealong the chain “resembles” xto some degree. This means that the transition operator should learnto pick up a minor resemblance with an xin order to transition to something likely to be evenmore similar to x. In other words, we expect samples along a chain leading to xto both havehigh probability under the transition operator of the chain p(t)(z(t)jz(t1)),andto have some formof at least partial “resemblance” with xlikely to increase as we progress along the chain. Onehighly inefficient way to emulate such a chain of samples would be, for teach step t, to samplemany candidate samples from the transition operator (a conditionally factorial distribution) until wegenerate one that has some minimal “resemblance” to x(e.g. for a discrete space, this resemblancemeasure could be based on their Hamming distance). A qualitatively similar result can be obtainedat a negligible cost by sampling from a factorial distribution that is very close to the one given by thetransition operator, but very slightly biased towards producing something closer to x. Specifically,we can “infuse” a little of xinto our sample by choosing for each input dimension, whether wesample it from the distribution given for that dimension by the transition operator, or whether, witha small probability, we take the value of that dimension from x. Samples from this biased chain, inwhich we slightly “infuse” x, will provide us with the inputs of our input-target training pairs forthe transition operator. 
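Concretely, this per-dimension choice can be written in a few lines. The sketch below assumes, as in footnote 1, that the value taken "from x" is drawn from a narrow Gaussian centred on x_i; the function and parameter names are illustrative and not taken from the authors' code.

import numpy as np

def infuse(z_candidate, x, alpha, sigma_x=0.01, rng=np.random):
    # For each dimension, with probability alpha replace the value proposed
    # by the transition operator with a value drawn from a narrow Gaussian
    # centred on the corresponding dimension of the target x.
    mask = rng.uniform(size=x.shape) < alpha
    near_x = x + sigma_x * rng.standard_normal(x.shape)
    return np.where(mask, near_x, z_candidate)
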
The target part of the training pairs is simply x.2.3.1 T HE INFUSION CHAINFormally we define an infusion chain ez(0)!ez(1)!:::!ez(T1)whose distributionq(ez(0);:::;ez(T1)jx)will be “close” to the sampling chain z(0)!z(1)!z(2)!:::!z(T1)of modelpin the sense that q(t)(~z(t)j~z(t1);x)will be close to p(t)(z(t)jz(t1)), but will at ev-ery step be slightly biased towards generating samples closer to target x, i.e. xgets progres-sively “infused” into the chain. This is achieved by defining q(0)i(ez(0)ijx)as a mixture betweenp(0)i(with a large mixture weight) and xi, a concentrated unimodal distribution around xi, suchas a Gaussian with small variance (with a small mixture weight)1. Formally q(0)i(~z(0)ijx) =(1(t))p(0)i(~z(0)i) +(t)xi(~z(0)i), where 1(t)and(t)are the mixture weights2. Inother words, when sampling a value for ~z(0)ifromq(0)ithere will be a small probability (0)to pick value close to xi(as sampled from xi) rather than sampling the value from p(0)i. Wecall(t)theinfusion rate . We define the transition operator of the infusion chain similarly as:q(t)i(~z(t)ij~z(t1);x) = (1(t))p(t)i(~z(t)ij~ z(t1)) +(t)xi(~z(t)i).1Note thatxidoes not denote a Dirac-Delta but a Gaussian with small sigma.2In all experiments, we use an increasing schedule (t)=(t1)+!with(0)and!constant. This allowsto build our chain such that in the first steps, we give little information about the target and in the last steps wegive more informations about the target. This forces the network to have less confidence (greater incertitude)at the beginning of the chain and more confidence on the convergence point at the end of the chain.4Published as a conference paper at ICLR 2017Figure 2: Training infusion chains, infused with target x= . This figure shows the evolutionof chainq(z(0);:::;z(30)jx)as training on MNIST progresses. Top row is after network randomweight initialization. Second row is after 1 training epochs, third after 2 training epochs, and so on.Each of these images were at a time provided as the input part of the ( input ,target ) training pairs forthe network. The network was trained to denoise all of them into target 3. We see that as trainingprogresses, the model has learned to pick up the cues provided by target infusion, to move towardsthat target. Note also that a single denoising step, even with target infusion, is not sufficient for thenetwork to produce a sharp well identified digit.2.3.2 D ENOISING -BASED INFUSION TRAINING PROCEDUREFor all x2Dtrain:Sample from the infusion chain ~ z= (~z(0);:::; ~z(T1))q(~z(0);:::; ~z(T1)jx).precisely so: ~z0q(0)(~z(0)jx):::~z(t)q(t)(~z(t)j~z(t1);x):::Perform a gradient step so that plearns to “denoise” every ~z(t)intox.(t) (t)+(t)@logp(t)(xj~z(t1);(t))@(t)where(t)is a scalar learning rate.3As illustrated in Figure 2, the distribution of samples from the infusion chain evolves as trainingprogresses, since this chain remains close to the model sampling chain.2.4 S TOCHASTIC LOG LIKELIHOOD ESTIMATIONThe exact log-likelihood of the generative model implied by our model pis intractable. 
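As a concrete illustration of the infusion chain of Section 2.3.1 and the denoising-based training step of Section 2.3.2, the following NumPy-style sketch samples one infusion chain for a target x and returns the per-step denoising log-likelihoods that a gradient step would increase. The operator interface (a map z -> (mean, var) of the factorial Gaussian transition) and all other names are illustrative assumptions rather than the authors' code.

import numpy as np

def infusion_chain_and_targets(x, operator, p0_mean, p0_var, T=15,
                               alpha0=0.0, omega=0.01, sigma_x=0.01,
                               rng=np.random):
    # Sample one infusion chain z~(0), ..., z~(T-1) for target x and return,
    # for every step t, log p^(t)(x | z~(t-1)) under the factorial Gaussian
    # produced by `operator`.  Infusion training takes a gradient step on the
    # operator's parameters to increase each of these terms.
    def infuse(z_proposed, alpha):
        # Per-dimension: with probability alpha take a value near x_i.
        mask = rng.uniform(size=x.shape) < alpha
        return np.where(mask, x + sigma_x * rng.standard_normal(x.shape),
                        z_proposed)

    alpha = alpha0  # infusion-rate schedule: alpha^(t) = alpha^(t-1) + omega
    z = infuse(p0_mean + np.sqrt(p0_var) * rng.standard_normal(x.shape), alpha)
    log_terms = []
    for t in range(1, T + 1):
        mean, var = operator(z, t)
        # The denoising target is always x.
        log_terms.append(np.sum(-0.5 * (np.log(2.0 * np.pi * var)
                                        + (x - mean) ** 2 / var)))
        # Advance the infusion chain to z~(t).
        alpha += omega
        z = infuse(mean + np.sqrt(var) * rng.standard_normal(x.shape), alpha)
    return log_terms

Sampling from the trained model then corresponds to running the same loop with the infusion rate held at zero (no target is available) and reading out the final state z(T).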
The log-probability of an example xcan however be expressed using proposal distribution qas:logp(x) = log Eq(ezjx)p(~ z;x)q(ezjx)(2)Using Jensen’s inequality we can thus derive the following lower bound:logp(x)Eq(ezjx)[logp(~ z;x)logq(ezjx)] (3)where logp(~ z;x) = logp(0)(~z(0)) +PT1t=1logp(t)(~z(t)j~z(t1))+ logp(T)(xj~z(T1))andlogq(~ zjx) = logq(0)(~z(0)jx) +PT1t=1logq(t)(~z(t)j~z(t1);x).3Since we will be sharing parameters between the p(t), in order for the expected larger error gradients onthe earlier transitions not to dominate the parameter updates over the later transitions we used an increasingschedule(t)=0tTfort2f1;:::;Tg.5Published as a conference paper at ICLR 2017A stochastic estimation can easily be obtained by replacing the expectation by an average using afew samples from q(ezjx). We can thus compute a lower bound estimate of the average log likelihoodover training, validation and test data.Similarly in addition to the lower-bound based on Eq.3 we can use the same few samples fromq(ezjx)to get an importance-sampling estimate of the likelihood based on Eq. 24.2.4.1 L OWER -BOUND -BASED INFUSION TRAINING PROCEDURESince we have derived a lower bound on the likelihood, we can alternatively choose to optimize thisstochastic lower-bound directly during training. This alternative lower-bound based infusion train-ing procedure differs only slightly from the denoising-based infusion training procedure by using~z(t)as a training target at step t(performing a gradient step to increase logp(t)(~z(t)j~z(t1);(t)))whereas denoising training always uses xas its target (performing a gradient step to increaselogp(t)(xj~z(t1);(t))). Note that the same reparametrization trick as used in Variational Auto-encoders (Kingma & Welling, 2014) can be used here to backpropagate through the chain’s Gaussiansampling.3 R ELATIONSHIP TO PREVIOUSLY PROPOSED APPROACHES3.1 M ARKOV CHAIN MONTE CARLO FOR ENERGY -BASED MODELSGenerating samples as a repeated application of a Markov transition operator that operates on inputspace is at the heart of Markov Chain Monte Carlo (MCMC) methods. They allow sampling from anenergy-model, where one can efficiently compute the energy or unnormalized negated log probabil-ity (or density) at any point. The transition operator is then derived from an explicit energy functionsuch that the Markov chain prescribed by a specific MCMC method is guaranteed to converge tothe distribution defined by that energy function, as the equilibrium distribution of the chain. MCMCtechniques have thus been used to obtain samples from the energy model, in the process of learningto adjust its parameters.By contrast here we do not learn an explicit energy function, but rather learn directly a parameterizedtransition operator, and define an implicit model distribution based on the result of running theMarkov chain.3.2 V ARIATIONAL AUTO -ENCODERSVariational auto-encoders (V AE) (Kingma & Welling, 2014; Rezende et al., 2014) also start froman unstructured (independent) noise sample and non-linearly transform this into a distribution thatmatches the training data. One difference with our approach is that the V AE typically maps from alower-dimensional space to the observation space. By contrast we learn a stochastic transition oper-ator from input space to input space that we repeat for Tsteps. Another key difference, is that theV AE learns a complex heavily parameterized approximate posterior proposal qwhereas our infusionbasedqcan be understood as a simple heuristic proposal distribution based on p. 
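Used as such a proposal, the infusion chain q also yields the two likelihood estimates of Section 2.4 (cf. footnote 4) in a few lines; a minimal sketch with illustrative names:

import numpy as np

def loglik_estimates(log_p_joint, log_q):
    # For k infusion-chain samples z~_k ~ q(.|x), take as inputs the values
    # log p(z~_k, x) and log q(z~_k | x) and return the stochastic
    # lower-bound and importance-sampling estimates of log p(x).
    ell = np.asarray(log_p_joint) - np.asarray(log_q)  # l_k
    k = ell.size
    lower_bound = ell.mean()                           # Jensen bound, Eq. (3)
    m = ell.max()                                      # stable logsumexp for Eq. (2)
    is_estimate = m + np.log(np.exp(ell - m).sum()) - np.log(k)
    return lower_bound, is_estimate
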
Importantly thespecific heuristic we use to infuse xintoqmakes sense precisely because our operator is a map frominput space to input space, and couldn’t be readily applied otherwise. The generative network inRezende et al. (2014) is a Deep Latent Gaussian Model (DLGM) just as ours. But their approximateposteriorqis taken to be factorial, including across all layers of the DLGM, whereas our infusionbasedqinvolves an ordered sampling of the layers, as we sample from q(t)(~z(t)j~z(t1);x).More recent proposals involve sophisticated approaches to sample from better approximate poste-riors, as the work of Salimans et al. (2015) in which Hamiltonian Monte Carlo is combined withvariational inference, which looks very promising, though computationally expensive, and Rezende& Mohamed (2015) that generalizes the use of normalizing flows to obtain a better approximateposterior.4Specifically, the two estimates (lower-bound and IS) start by collecting ksamples from q(ezjx)and com-puting for each the corresponding `= logp(~ z;x)logq(ezjx). The lower-bound estimate is then obtainedby averaging the resulting `1;:::` k, whereas the IS estimate is obtained by taking the logof the averagede`1;:::;e`k(in a numerical stable manner as logsumexp( `1;:::;` k)logk).6Published as a conference paper at ICLR 20173.3 S AMPLING FROM AUTOENCODERS AND GENERATIVE STOCHASTIC NETWORKSEarlier works that propose to directly learn a transition operator resulted from research to turn au-toencoder variants that have a stochastic component, in particular denoising autoencoders (Vincentet al., 2010), into generative models that one can sample from. This development is natural, sincea stochastic auto-encoder isa stochastic transition operator form input space to input space. Gen-erative Stochastic Networks (GSN) (Alain et al., 2016) generalized insights from earlier stochasticautoencoder sampling heuristics (Rifai et al., 2012) into a more formal and general framework.These previous works on generative uses of autoencoders and GSNs attempt to learn a chain whoseequilibrium distribution will fit the training data. Because autoencoders and the chain are typicallystarted from or very close to training data points, they are concerned with the chain mixing quicklybetween modes. By contrast our model chain is always restarted from unstructured noise, and isnot required to reach or even have an equilibrium distribution. Our concern is only what happensduring theT“burn-in” initial steps, and to make sure that it transforms the initial factorial noisedistribution into something that best fits the training data distribution. There are no mixing concernsbeyond those Tinitial steps.A related aspect and limitation of previous denoising autoencoder and GSN approaches is that thesewere mainly “local” around training samples: the stochastic operator explored space starting fromand primarily centered around training examples, and learned based on inputs in these parts of spaceonly. Spurious modes in the generated samples might result from large unexplored parts of spacethat one might encounter while running a long chain.3.4 R EVERSING A DIFFUSION PROCESS IN NON -EQUILIBRIUM THERMODYNAMICSThe approach of Sohl-Dickstein et al. (2015) is probably the closest to the approach we develop here.Both share a similar model sampling chain that starts from unstructured factorial noise. Neitherare concerned about an equilibrium distribution . They are however quite different in several keyaspects: Sohl-Dickstein et al. 
(2015) proceed to invert an explicit diffusion process that starts froma training set example and very slowly destroys its structure to become this random noise, they thenlearn to reverse this process i.e. an inverse diffusion . To maintain the theoretical argument thattheexact reverse process has the same distributional form (e.g. p(x(t1)jx(t))andp(x(t)jx(t1))both factorial Gaussians), the diffusion has to be infinitesimal by construction, hence the proposedapproaches uses chains with thousands of tiny steps. Instead, our aim is to learn an operator that canyield a high quality sample efficiently using only a small number Tof larger steps. Also our infusiontraining does not posit a fixed a priori diffusion process that we would learn to reverse. And whilethe distribution of diffusion chain samples of Sohl-Dickstein et al. (2015) is fixed and remains thesame all along the training, the distribution of our infusion chain samples closely follow the modelchain as our model learns. Our proposed infusion sampling technique thus adapts to the changinggenerative model distribution as the learning progresses.Drawing on both Sohl-Dickstein et al. (2015) and the walkback procedure introduced for GSN inAlain et al. (2016), a variational variant of the walkback algorithm was investigated by Goyal et al.(2017) at the same time as our work. It can be understood as a different approach to learning aMarkov transition operator, in which a “heating” diffusion operator is seen as a variational approxi-mate posterior to the forward “cooling” sampling operator with the exact same form and parameters,except for a different temperature.4 E XPERIMENTSWe trained models on several datasets with real-valued examples. We used as prior distributionp(0)a factorial Gaussian whose parameters were set to be the mean and variance for each pixelthrough the training set. Similarly, our models for the transition operators are factorial Gaussians.Their mean and elementwise variance is produced as the output of a neural network that receivesthe previous z(t1)as its input, i.e. p(t)(z(t)ijz(t1)) =N(i(z(t1));2i(z(t1)))whereand2are computed as output vectors of a neural network. We trained such a model using our infusiontraining procedure on MNIST (LeCun & Cortes, 1998), Toronto Face Database (Susskind et al.,2010), CIFAR-10 (Krizhevsky & Hinton, 2009), and CelebA (Liu et al., 2015). For all datasets, theonly preprocessing we did was to scale the integer pixel values down to range [0,1]. The network7Published as a conference paper at ICLR 2017Table 3: Inception score (with standard error) of 50 000 samples generated by models trained onCIFAR-10. We use the models in Salimans et al. (2016) as baseline. ’SP’ corresponds to the bestmodel described by Salimans et al. (2016) trained in a semi-supervised fashion. ’-L’ correspondsto the same model after removing the label in the training process (unsupervised way), ’-MBF’corresponds to a supervised training without minibatch features.Model Real data SP -L -MBF Infusion trainingInception score 11.24.12 8.09.07 4.36.06 3.87.03 4.62.06trained on MNIST and TFD is a MLP composed of two fully connected layers with 1200 unitsusing batch-normalization (Ioffe & Szegedy, 2015)5. The network trained on CIFAR-10 is basedon the same generator as the GANs of Salimans et al. (2016), i.e. one fully connected layer followedby three transposed convolutions. CelebA was trained with the previous network where we addedanother transposed convolution. 
We use rectifier linear units (Glorot et al., 2011) on each layerinside the networks. Each of those networks have two distinct final layers with a number of unitscorresponding to the image size. They use sigmoid outputs, one that predict the mean and the secondthat predict a variance scaled by a scalar (In our case we chose = 0:1) and we add an epsilon= 1e4to avoid an excessively small variance. For each experiment, we trained the networkon 15 steps of denoising with an increasing infusion rate of 1% ( != 0:01;(0)= 0), except onCIFAR-10 where we use an increasing infusion rate of 2% ( != 0:02;(0)= 0) on 20 steps.4.1 N UMERICAL RESULTSSince we can’t compute the exact log-likelihood, the evaluation of our model is not straightforward.However we use the lower bound estimator derived in Section 2.4 to evaluate our model during train-ing and prevent overfitting (see Figure 3). Since most previous published results on non-likelihoodbased models (such as GANs) used a Parzen-window-based estimator (Breuleux et al., 2011), we useit as our first comparison tool, even if it can be misleading (Lucas Theis & Bethge, 2016). Resultsare shown in Table 1, we use 10 000 generated samples and = 0:17. To get a better estimate ofthe log-likelihood, we then computed both the stochastic lower bound and the importance samplingestimate (IS) given in Section 2.4. For the IS estimate in our MNIST-trained model, we used 20000 intermediates samples. In Table 2 we compare our model with the recent Annealed ImportanceSampling results (Wu et al., 2016). Note that following their procedure we add an uniform noiseof 1/256 to the (scaled) test point before evaluation to avoid overevaluating models that might haveoverfitted on the 8 bit quantization of pixel values. Another comparison tool that we used is theInception score as in Salimans et al. (2016) which was developed for natural images and is thusmost relevant for CIFAR-10. Since Salimans et al. (2016) used a GAN trained in a semi-supervisedway with some tricks, the comparison with our unsupervised trained model isn’t straightforward.However, we can see in Table 3 that our model outperforms the traditional GAN trained withoutlabeled data.4.2 S AMPLE GENERATIONAnother common qualitative way to evaluate generative models is to look at the quality of the sam-ples generated by the model. In Figure 4 we show various samples on each of the datasets we used.In order to get sharper images, we use at sampling time more denoising steps than in the trainingtime (In the MNIST case we use 30 denoising steps for sampling with a model trained on 15 denois-ing steps). To make sure that our network didn’t learn to copy the training set, we show in the lastcolumn the nearest training-set neighbor to the samples in the next-to last column. We can see thatour training method allow to generate very sharp and accurate samples on various dataset.5We don’t share batch norm parameters across the network, i.e for each time step we have different param-eters and independent batch statistics.8Published as a conference paper at ICLR 2017Figure 3: Training curves: lower bounds on aver-age log-likelihood on MNIST as infusion trainingprogresses. We also show the lower bounds esti-mated with the Parzen estimation method.Model TestDBM (Bengio et al., 2013) 1382SCAE (Bengio et al., 2013) 1211:6GSN (Bengio et al., 2014) 2141:1Diffusion (Sohl-Dicksteinet al., 2015)2201:9GANs (Goodfellow et al.) 2252GMMN + AE (Li et al.) 
2822Infusion training (Our) 3121:7Table 1: Parzen-window-based estimator oflower bound on average test log-likelihoodon MNIST (in nats).Table 2: Log-likelihood (in nats) estimated by AIS on MNIST test and training sets as reported inWu et al. (2016) and the log likelihood estimates of our model obtained by infusion training (lastthree lines). Our initial model uses a Gaussian output with diagonal covariance, and we appliedboth our lower bound and importance sampling (IS) log-likelihood estimates to it. Since Wu et al.(2016) used only an isotropic output observation model, in order to be comparable to them, we alsoevaluated our model after replacing the output by an isotropic Gaussian output (same fixed variancefor all pixels). Average and standard deviation over 10 repetitions of the evaluation are provided.Note that AIS might provide a higher evaluation of likelihood than our current IS estimate, but thisis left for future work.Model Test log-likelihood (1000ex) Train log-likelihood (100ex)V AE-50 (AIS) 991:4356:477 1272:5866:759GAN-50 (AIS) 627:2978:813 620:49831:012GMMN-50 (AIS) 593:4728:591 571:80330:864V AE-10 (AIS) 705:3757:411 780:19619:147GAN-10 (AIS) 328:7725:538 318:94822:544GMMN-10 (AIS) 346:6795:860 345:17619:893Infusion training + isotropic(IS estimate)413:2970:460 450:6951:617Infusion training (ISestimate)1836:270:551 1837:5601:074Infusion training (lowerbound)1350:5980:079 1230:3050:5329Published as a conference paper at ICLR 2017(a) MNIST (b) Toronto Face Dataset(c) CIFAR-10 (d) CelebAFigure 4: Mean predictions by our models on 4 different datasets. The rightmost column shows thenearest training example to the samples in the next-to last column.10Published as a conference paper at ICLR 2017Figure 5: Inpainting on CelebA dataset. In each row, from left to right: an image form the testset; the same image with bottom half randomly sampled from our factorial prior. Then several endsamples from our sampling chain in which the top part is clamped. The generated samples showthat our model is able to generate a varied distribution of coherent face completions.4.3 I NPAINTINGAnother method to evaluate a generative model is inpainting . It consists of providing only a partialimage from the test set and letting the model generate the missing part. In one experiment, weprovide only the top half of CelebA test set images and clamp that top half throughout the samplingchain. We restart sampling from our model several times, to see the variety in the distribution of thebottom part it generates. Figure 5 shows that the model is able to generate a varied set of bottomhalves, all consistent with the same top half, displaying different type of smiles and expression. Wealso see that the generated bottom halves transfer some information about the provided top half ofthe images (such as pose and more or less coherent hair cut).5 C ONCLUSION AND FUTURE WORKWe presented a new training procedure that allows a neural network to learn a transition operatorof a Markov chain. Compared to the previously proposed method of Sohl-Dickstein et al. (2015)based on inverting a slow diffusion process, we showed empirically that infusion training requiresfar fewer denoising steps, and appears to provide more accurate models. Currently, many success-ful generative models, judged on sample quality, are based on GAN architectures. However theserequire to use two different networks, a generator and a discriminator, whose balance is reputed del-icate to adjust, which can be source of instability during training. 
Our method avoids this problemby using only a single network and a simpler training objective.Denoising-based infusion training optimizes a heuristic surrogate loss for which we cannot (yet)provide theoretical guarantees, but we empirically verified that it results in increasing log-likelihoodestimates. On the other hand the lower-bound-based infusion training procedure does maximize anexplicit variational lower-bound on the log-likelihood. While we have run most of our experimentswith the former, we obtained similar results on the few problems we tried with lower-bound-basedinfusion training.Future work shall further investigate the relationship and quantify the compromises achieved withrespect to other Markov Chain methods including Sohl-Dickstein et al. (2015); Salimans et al. (2015)11Published as a conference paper at ICLR 2017and also to powerful inference methods such as Rezende & Mohamed (2015). As future work, wealso plan to investigate the use of more sophisticated neural net generators, similar to DCGAN’s(Radford et al., 2016) and to extend the approach to a conditional generator applicable to structuredoutput problems.ACKNOWLEDGMENTSWe would like to thank the developers of Theano (Theano Development Team, 2016) for making thislibrary available to build on, Compute Canada and Nvidia for their computation resources, NSERCand Ubisoft for their financial support, and three ICLR anonymous reviewers for helping us improveour paper.REFERENCESGuillaume Alain, Yoshua Bengio, Li Yao, Jason Yosinski, Eric Thibodeau-Laufer, Saizheng Zhang,and Pascal Vincent. GSNs: generative stochastic networks. Information and Inference , 2016. doi:10.1093/imaiai/iaw003.Yoshua Bengio, Gr ́egoire Mesnil, Yann Dauphin, and Salah Rifai. Better mixing via deep represen-tations. In Proceedings of the 30th International Conference on Machine Learning (ICML 2013) ,2013.Yoshua Bengio, Eric Laufer, Guillaume Alain, and Jason Yosinski. Deep generative stochasticnetworks trainable by backprop. In Proceedings of the 31st International Conference on MachineLearning (ICML 2014) , pp. 226–234, 2014.Olivier Breuleux, Yoshua Bengio, and Pascal Vincent. Quickly generating representative samplesfrom an rbm-derived process. Neural Computation , 23(8):2058–2073, 2011.Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: Non-linear independent components esti-mation. arXiv preprint arXiv:1410.8516 , 2014.Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. InAistats , volume 15, pp. 275, 2011.Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling,C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Pro-cessing Systems 27 , pp. 2672–2680. Curran Associates, Inc., 2014.Anirudh Goyal, Nan Rosemary Ke, Alex Lamb, and Yoshua Bengio. The variational walkbackalgorithm. Technical report, Universit ́e de Montr ́eal, 2017. URL https://openreview.net/forum?id=rkpdnIqlx . On openreview.net.Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training byreducing internal covariate shift. Proceedings of The 32nd International Conference on MachineLearning , pp. 448–456, 2015.Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In Proceedings of the 2ndInternational Conference on Learning Representations (ICLR 2014) , 2014.Alex. Krizhevsky and Geoffrey E Hinton. 
Learning multiple layers of features from tiny images.Master’s thesis, Department of Computer Science, University of Toronto , 2009.Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In AISTATS ,volume 1, pp. 2, 2011.Yann LeCun and Corinna Cortes. The mnist database of handwritten digits, 1998.Yujia Li, Kevin Swersky, and Richard Zemel. Generative moment matching networks. In Interna-tional Conference on Machine Learning (ICML 2015) , pp. 1718–1727, 2015.12Published as a conference paper at ICLR 2017Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild.InProceedings of International Conference on Computer Vision (ICCV 2015) , December 2015.A ̈aron van den Oord Lucas Theis and Matthias Bethge. A note on the evaluation of generativemodels. In Proceedings of the 4th International Conference on Learning Representations (ICLR2016) , 2016.Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deepconvolutional generative adversarial networks. International Conference on Learning Represen-tations , 2016.Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In Proceedingsof the 32nd International Conference on Machine Learning (ICML 2015) , pp. 1530–1538, 2015.Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation andapproximate inference in deep generative models. In Proceedings of the 31th International Con-ference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014 , pp. 1278–1286,2014. URL http://jmlr.org/proceedings/papers/v32/rezende14.html .Salah Rifai, Yoshua Bengio, Yann Dauphin, and Pascal Vincent. A generative process for sam-pling contractive auto-encoders. In Proceedings of the 29th International Conference on MachineLearning (ICML 2012) , 2012.Ruslan Salakhutdinov and Geoffrey E Hinton. Deep boltzmann machines. In AISTATS , volume 1,pp. 3, 2009.Tim Salimans, Diederik Kingma, and Max Welling. Markov chain monte carlo and variationalinference: Bridging the gap. In Proceedings of The 32nd International Conference on MachineLearning , pp. 1218–1226, 2015.Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.Improved techniques for training gans. CoRR , abs/1606.03498, 2016.Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep Unsuper-vised Learning using Nonequilibrium Thermodynamics. In Proceedings of the 32nd InternationalConference on Machine Learning , volume 37 of JMLR Proceedings , pp. 2256–2265. JMLR.org,2015.Josh M Susskind, Adam K Anderson, and Geoffrey E Hinton. The toronto face database. Depart-ment of Computer Science, University of Toronto, Toronto, ON, Canada, Tech. Rep , 3, 2010.Theano Development Team. Theano: A Python framework for fast computation of mathematicalexpressions. arXiv e-prints , abs/1605.02688, may 2016.A ̈aron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks.InProceedings of the 33nd International Conference on Machine Learning (ICML 2016) , pp.1747–1756, 2016.Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol.Stacked denoising autoencoders: Learning useful representations in a deep network with a localdenoising criterion. Journal of Machine Learning Research , 11(Dec):3371–3408, 2010.Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov, and Roger B. Grosse. On the quantitative analysisof decoder-based generative models. 
CoRR , abs/1611.04273, 2016.13Published as a conference paper at ICLR 2017A D ETAILS ON THE EXPERIMENTSA.1 MNIST EXPERIMENTSWe show the impact of the infusion rate (t)=(t1)+!for different numbers of training stepson the lower bound estimate of log-likelihood on the Validation set of MNIST in Figure 6. We alsoshow the quality of generated samples and the lower bound evaluated on the test set in Table 4. Eachexperiment in Table 4 uses the corresponding models of Figure 6 that obtained the best lower boundvalue on the validation set. We use the same network architecture as described in Section 4, i.e twofully connected layers with Relu activations composed of 1200 units followed by two distinct fullyconnected layers composed of 784 units, one that predicts the means, the other one that predictsthe variances. Each mean and variance is associated with one pixel. All of the the parameters ofthe model are shared across different steps except for the batch norm parameters. During training,we use the batch statistics of the current mini-batch in order to evaluate our model on the train andvalidation sets. At test time (Table 4), we first compute the batch statistics over the entire train setfor each step and then use the computed statistics to evaluate our model on the test test.We did some experiments to evaluate the impact of or!in(t)=(t1)+!. Figure 6 showsthat as the number of steps increases, the optimal value for infusion rate decreases. Therefore, if wewant to use many steps, we should have a small infusion rate. These conclusions are valid for bothincreasing and constant infusion rate. For example, the optimal for a constant infusion rate, inFigure 6e with 10 steps is 0.08 and in Figure 6f with 15 steps is 0.06. If the number of steps is notenough or the infusion rate is too small, the network will not be able to learn the target distributionas shown in the first rows of all subsection in Table 4.In order to show the impact of having a constant versus an increasing infusion rate, we show in Fig-ure 7 the samples created by infused and sampling chains. We observe that having a small infusionrate over many steps ensures a slow blending of the model distribution into the target distribution.In Table 4, we can see high lower bound values on the test set with few steps even if the modelcan’t generate samples that are qualitatively satisfying. These results indicate that we can’t rely onthe lower bound as the only evaluation metric and this metric alone does not necessarily indicatethe suitability of our model to generated good samples. However, it is still a useful tool to preventoverfitting (the networks in Figure 6e and 6f overfit when the infusion rate becomes too high).Concerning the samples quality, we observe that having a small infusion rate over an adequatenumber of steps leads to better samples.A.2 I NFUSION AND MODEL SAMPLING CHAINS ON NATURAL IMAGES DATASETSIn order to show the behavior of our model trained by Infusion on more complex datasets, weshow in Figure 8 chains on CIFAR-10 dataset and in Figure 9 chains on CelebA dataset. In eachFigure, the first sub-figure shows the chains infused by some test examples and the second sub-figure shows the model sampling chains. In the experiment on CIFAR-10, we use an increasingschedule(t)=(t1)+ 0:02with(0)= 0and 20 infusion steps (this corresponds to the trainingparameters). 
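For concreteness, the MNIST/TFD transition operator described in Section 4 and recalled above (two 1200-unit fully connected layers with ReLU and step-specific batch normalization, followed by two distinct 784-unit sigmoid heads predicting the mean and a scaled variance) could be laid out as in the sketch below. The sketch is written in PyTorch rather than the authors' Theano code, and the exact placement of batch normalization and the module layout are assumptions.

import torch
import torch.nn as nn

class TransitionOperator(nn.Module):
    # Weights are shared across the T steps; only the batch-norm layers,
    # and hence their statistics, are step-specific.
    def __init__(self, dim=784, hidden=1200, n_steps=15,
                 var_scale=0.1, var_eps=1e-4):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.bn1 = nn.ModuleList([nn.BatchNorm1d(hidden) for _ in range(n_steps)])
        self.bn2 = nn.ModuleList([nn.BatchNorm1d(hidden) for _ in range(n_steps)])
        self.mean_head = nn.Linear(hidden, dim)
        self.var_head = nn.Linear(hidden, dim)
        self.var_scale, self.var_eps = var_scale, var_eps

    def forward(self, z, t):
        # t is the (0-based) step index selecting the step-specific batch norm.
        h = torch.relu(self.bn1[t](self.fc1(z)))
        h = torch.relu(self.bn2[t](self.fc2(h)))
        mean = torch.sigmoid(self.mean_head(h))
        var = self.var_scale * torch.sigmoid(self.var_head(h)) + self.var_eps
        return mean, var
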
In the experiment on CelebA, we use an increasing schedule (t)=(t1)+ 0:01with(0)= 0and 15 infusion steps.14Published as a conference paper at ICLR 2017(a) Networks trained with 1 infusion step. Each in-fusion rate in the figure corresponds to (0). Sincewe have only one step, we have != 0.(b) Networks trained with 5 infusion steps. Eachinfusion rate corresponds to !. We set(0)= 0.(c) Networks trained with 10 infusion steps. Eachinfusion rate corresponds to !. We set(0)= 0.(d) Networks trained with 15 infusion steps. Eachinfusion rate corresponds to !. We set(0)= 0.(e) Networks trained with 10 infusion steps. In thisexperiment we use the same infusion rate for eachtime step such that 8t(t)=(0). Each infusionrate in the figure corresponds to different values for(0).(f) Networks trained with 15 infusion steps. In thisexperiment we use the same infusion rate for eachtime step such that 8t(t)=(0). Each infu-sion rate in the figure corresponds to different values(0).Figure 6: Training curves on MNIST showing the log likelihood lower bound (nats) for differentinfusion rate schedules and different number of steps. We use an increasing schedule (t)=(t1)+!. In each sub-figure for a fixed number of steps, we show the lower bound for different infusionrates.15Published as a conference paper at ICLR 2017Table 4: Infusion rate impact on the lower bound log-likelihood (test set) and the samples generatedby a network trained with different number of steps. Each sub-table corresponds to a fixed numberof steps. Each row corresponds to a different infusion rate, where we show its lower bound and alsoits corresponding generated samples from the trained model. Note that for images, we show themean of the Gaussian distributions instead of the true samples. As the number of steps increases, theoptimal infusion rate decreases. 
Higher number of steps contributes to better qualitative samples, asthe best samples can be seen with 15 steps using = 0:01.(a) infusion rate impact on the lower bound log-likelihood (test set) and the samples generated by a networktrained with 1 step.infusion rate Lower bound (test) Means of the model0.0 824.340.05 885.350.1 967.250.15 1063.270.2 1115.150.25 1158.810:3 1209:390.4 1209.160.5 1132.050.6 1008.600.7 854.400.9 -161.37(b) infusion rate impact on the lower bound log-likelihood (test set) and the samples generated by a networktrained with 5 stepsinfusion rate Lower bound (test)0.0 823.810.01 910.190.03 1142.430.05 1303.190.08 1406.380:1 1448:660.15 1397.410.2 1262.57(c) infusion rate impact on the lower bound log-likelihood (test set) and the samples generated by a networktrained with 10 stepsinfusion rate Lower bound (test)0.0 824.420.01 1254.070:02 1389:120.03 1366:680.04 1223.470.05 1057.430.05 846.730.07 658.66(d) infusion rate impact on the lower bound log-likelihood (test set) and the samples generated by a networktrained with 15 stepsinfusion rate Lower bound (test)0.0 824.500:01 1351:030.02 1066.600.03 609.100.04 876.930.05 -479.690.06 -941.7816Published as a conference paper at ICLR 2017(a) Chains infused with MNIST test set samplesby a constant rate ( (0)= 0:05; != 0) in 15steps.(b) Model sampling chains on MNIST using a net-work trained with a constant infusion rate ( (0)=0:05; != 0) in 15 steps.(c) Chains infused with MNIST test set samplesby an increasing rate ( (0)= 0:0; != 0:01) in15 steps.(d) Model sampling chains on MNIST using anetwork trained with an increasing infusion rate((0)= 0:0; != 0:01) in 15 steps.Figure 7: Comparing samples of constant infusion rate versus an increasing infusion rate on infusedand generated chains. The models are trained on MNIST in 15 steps. Note that having an increasinginfusion rate with a small value for !allows a slow convergence to the target distribution. In contrasthaving a constant infusion rate leads to a fast convergence to a specific point. Increasing infusionrate leads to more visually appealing samples. We observe that having an increasing infusion rateover many steps ensures a slow blending of the model distribution into the target distribution.17Published as a conference paper at ICLR 2017(a) Infusion chains on CIFAR-10. Last column corresponds to the target used to infuse the chain.(b) Model sampling chains on CIFAR-10Figure 8: Infusion chains (Sub-Figure 8a) and model sampling chains (Sub-Figure 8b) on CIFAR-10.18Published as a conference paper at ICLR 2017(a) Infusion chains on CelebA. Last column corresponds to the target used to infuse the chain.(b) Model sampling chains on CelebAFigure 9: Infusion chains (Sub-Figure 9a) and model sampling chains (Sub-Figure 9b) on CelebA.19
S1Jpha-Vl
HysBZSqlx
ICLR.cc/2017/conference/-/paper238/official/review
{"title": "", "rating": "7: Good paper, accept", "review": "This paper presents a valuable new collection of video game benchmarks, in an extendable framework, and establishes initial baselines on a few of them.\n\nReward structures: for how many of the possible games have you implemented the means to extract scores and incremental reward structures? From the github repo it looks like about 10 -- do you plan to add more, and when?\n\n\u201crivalry\u201d training: this is one of the weaker components of the paper, and it should probably be emphasised less. On this topic, there is a vast body of (uncited) multi-agent literature, it is a well-studied problem setup (more so than RL itself). To avoid controversy, I would recommend not claiming any novel contribution on the topic (I don\u2019t think that you really invented \u201ca new method to train an agent by enabling it to train against several opponents\u201d nor \u201ca new benchmarking technique for agents evaluation, by enabling them to compete against each other, rather than playing against the in-game AI\u201d). Instead, just explain that you have established single-agent and multi-agent baselines for your new benchmark suite.\n\nYour definition of Q-function (\u201cpredicts the score at the end of the game given the current state and selected action\u201d) is incorrect. It should read something like: it estimates the cumulative discounted reward that can be obtained from state s, starting with action a (and then following a certain policy).\n\nMinor:\n* Eq (1): the Q-net inside the max() is the target network, with different parameters theta\u2019\n* the Du et al. reference is missing the year\n* some of the other references should point at the corresponding published papers instead of the arxiv versions", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Playing SNES in the Retro Learning Environment
["Nadav Bhonker", "Shai Rozenberg", "Itay Hubara"]
Mastering a video game requires skill, tactics and strategy. While these attributes may be acquired naturally by human players, teaching them to a computer program is a far more challenging task. In recent years, extensive research was carried out in the field of reinforcement learning and numerous algorithms were introduced, aiming to learn how to perform human tasks such as playing video games. As a result, the Arcade Learning Environment (ALE) (Bellemare et al., 2013) has become a commonly used benchmark environment allowing algorithms to train on various Atari 2600 games. In many games the state-of-the-art algorithms outperform humans. In this paper we introduce a new learning environment, the Retro Learning Environment — RLE, that can run games on the Super Nintendo Entertainment System (SNES), Sega Genesis and several other gaming consoles. The environment is expandable, allowing for more video games and consoles to be easily added to the environment, while maintaining the same interface as ALE. Moreover, RLE is compatible with Python and Torch. SNES games pose a significant challenge to current algorithms due to their higher level of complexity and versatility.
["Reinforcement Learning", "Deep learning", "Games"]
https://openreview.net/forum?id=HysBZSqlx
https://openreview.net/pdf?id=HysBZSqlx
https://openreview.net/forum?id=HysBZSqlx&noteId=S1Jpha-Vl
PLAYING SNES INTHE RETRO LEARNING ENVIRONMENTNadav Bhonker*, Shai Rozenberg* and Itay HubaraDepartment of Electrical EngineeringTechnion, Israel Institute of Technology(*) indicates equal contributionfnadavbh,shairoz g@tx.technion.ac.ilitayhubara@gmail.comABSTRACTMastering a video game requires skill, tactics and strategy. While these attributesmay be acquired naturally by human players, teaching them to a computer pro-gram is a far more challenging task. In recent years, extensive research was carriedout in the field of reinforcement learning and numerous algorithms were intro-duced, aiming to learn how to perform human tasks such as playing video games.As a result, the Arcade Learning Environment (ALE) (Bellemare et al., 2013) hasbecome a commonly used benchmark environment allowing algorithms to train onvarious Atari 2600 games. In many games the state-of-the-art algorithms outper-form humans. In this paper we introduce a new learning environment, the RetroLearning Environment — RLE, that can run games on the Super Nintendo Enter-tainment System (SNES), Sega Genesis and several other gaming consoles. Theenvironment is expandable, allowing for more video games and consoles to beeasily added to the environment, while maintaining the same interface as ALE.Moreover, RLE is compatible with Python and Torch. SNES games pose a signif-icant challenge to current algorithms due to their higher level of complexity andversatility.1 I NTRODUCTIONControlling artificial agents using only raw high-dimensional input data such as image or sound isa difficult and important task in the field of Reinforcement Learning (RL). Recent breakthroughs inthe field allow its utilization in real-world applications such as autonomous driving (Shalev-Shwartzet al., 2016), navigation (Bischoff et al., 2013) and more. Agent interaction with the real world isusually either expensive or not feasible, as the real world is far too complex for the agent to perceive.Therefore in practice the interaction is simulated by a virtual environment which receives feedbackon a decision made by the algorithm. Traditionally, games were used as a RL environment, datingback to Chess (Campbell et al., 2002), Checkers (Schaeffer et al., 1992), backgammon (Tesauro,1995) and the more recent Go (Silver et al., 2016). Modern games often present problems and taskswhich are highly correlated with real-world problems. For example, an agent that masters a racinggame, by observing a simulated driver’s view screen as input, may be usefull for the development ofan autonomous driver. For high-dimensional input, the leading benchmark is the Arcade LearningEnvironment (ALE) (Bellemare et al., 2013) which provides a common interface to dozens of Atari2600 games, each presents a different challenge. ALE provides an extensive benchmarking plat-form, allowing a controlled experiment setup for algorithm evaluation and comparison. The mainchallenge posed by ALE is to successfully play as many Atari 2600 games as possible (i.e., achiev-ing a score higher than an expert human player) without providing the algorithm any game-specificinformation (i.e., using the same input available to a human - the game screen and score). A keywork to tackle this problem is the Deep Q-Networks algorithm (Mnih et al., 2015), which made abreakthrough in the field of Deep Reinforcement Learning by achieving human level performanceon 29 out of 49 games. In this work we present a new environment — the Retro Learning Environ-ment (RLE). 
RLE sets new challenges by providing a unified interface for Atari 2600 games as wellas more advanced gaming consoles. As a start we focused on the Super Nintendo Entertainment1System (SNES). Out of the five SNES games we tested using state-of-the-art algorithms, only onewas able to outperform an expert human player. As an additional feature, RLE supports research ofmulti-agent reinforcement learning (MARL) tasks (Bus ̧oniu et al., 2010). We utilize this feature bytraining and evaluating the agents against each other, rather than against a pre-configured in-gameAI. We conducted several experiments with this new feature and discovered that agents tend to learnhow to overcome their current opponent rather than generalize the game being played. However, ifan agent is trained against an ensemble of different opponents, its robustness increases. The maincontributions of the paper are as follows:Introducing a novel RL environment with significant challenges and an easy agent evalu-ation technique (enabling agents to compete against each other) which could lead to newand more advanced RL algorithms.A new method to train an agent by enabling it to train against several opponents, makingthe final policy more robust.Encapsulating several different challenges to a single RL environment.2 R ELATED WORK2.1 A RCADE LEARNING ENVIRONMENTThe Arcade Learning Environment is a software framework designed for the development of RLalgorithms, by playing Atari 2600 games. The interface provided by ALE allows the algorithms toselect an action and receive the Atari screen and a reward in every step. The action is the equivalentto a human’s joystick button combination and the reward is the difference between the scores attime stamptandt1. The diversity of games for Atari provides a solid benchmark since differentgames have significantly different goals. Atari 2600 has over 500 games, currently over 70 of themare implemented in ALE and are commonly used for algorithm comparison.2.2 I NFINITE MARIOInfinite Mario (Togelius et al., 2009) is a remake of the classic Super Mario game in which levels arerandomly generated. On these levels the Mario AI Competition was held. During the competition,several algorithms were trained on Infinite Mario and their performances were measured in terms ofthe number of stages completed. As opposed to ALE, training is not based on the raw screen databut rather on an indication of Mario’s (the player’s) location and objects in its surrounding. Thisenvironment no longer poses a challenge for state of the art algorithms. Its main shortcoming liein the fact that it provides only a single game to be learnt. Additionally, the environment provideshand-crafted features, extracted directly from the simulator, to the algorithm. This allowed the useof planning algorithms that highly outperform any learning based algorithm.2.3 O PENAI G YMThe OpenAI gym (Brockman et al., 2016) is an open source platform with the purpose of creatingan interface between RL environments and algorithms for evaluation and comparison purposes.OpenAI Gym is currently very popular due to the large number of environments supported by it.For example ALE, Go, MouintainCar andVizDoom (Zhu et al., 2016), an environment for thelearning of the 3D first-person-shooter game ”Doom”. 
OpenAI Gym’s recent appearance and wideusage indicates the growing interest and research done in the field of RL.2.4 O PENAI U NIVERSEUniverse (Universe, 2016) is a platform within the OpenAI framework in which RL algorithms cantrain on over a thousand games. Universe includes very advanced games such as GTA V , Portal aswell as other tasks (e.g. browser tasks). Unlike RLE, Universe doesn’t run the games locally andrequires a VNC interface to a server that runs the games. This leads to a lower frame rate and thuslonger training times.22.5 M ALMOMalmo (Johnson et al., 2016) is an artificial intelligence experimentation platform of the famousgame ”Minecraft” . Although Malmo consists of only a single game, it presents numerous challengessince the ”Minecraft” game can be configured differently each time. The input to the RL algorithmsinclude specific features indicating the ”state” of the game and the current reward.2.6 D EEPMINDLABDeepMind Lab (Dee) is a first-person 3D platform environment which allows training RL algorithmson several different challenges: static/random map navigation, collect fruit (a form of reward) anda laser-tag challenge where the objective is to tag the opponents controlled by the in-game AI. InLAB the agent observations are the game screen (with an additional depth channel) and the velocityof the character. LAB supports four games (one game - four different modes).2.7 D EEPQ-L EARNINGIn our work, we used several variant of the Deep Q-Network algorithm (DQN) (Mnih et al., 2015),an RL algorithm whose goal is to find an optimal policy (i.e., given a current state, choose actionthat maximize the final score). The state of the game is simply the game screen, and the action isa combination of joystick buttons that the game responds to (i.e., moving ,jumping). DQN learnsthrough trial and error while trying to estimate the ”Q-function”, which predicts the cumulativediscounted reward at the end of the episode given the current state and action while following apolicy. The Q-function is represented using a convolution neural network that receives the screenas input and predicts the best possible action at it’s output. The Q-function weights are updatedaccording to:t+1(st;at) =t+(Rt+1+maxa(Qt(st+1;a;0t))Qt(st;at;t))rQt(st;at;t);(1)wherest,st+1are the current and next states, atis the action chosen, is the step size, is thediscounting factor Rt+1is the reward received by applying atatst.0represents the previousweights of the network that are updated periodically. Other than DQN, we examined two leadingalgorithms on the RLE: Double Deep Q-Learning (D-DQN) (Van Hasselt et al., 2015), a DQNbased algorithm with a modified network update rule. Dueling Double DQN (Wang et al., 2015),a modification of D-DQN’s architecture in which the Q-function is modeled using a state (screen)dependent estimator and an action dependent estimator.3 T HERETRO LEARNING ENVIRONMENT3.1 S UPER NINTENDO ENTERTAINMENT SYSTEMThe Super Nintendo Entertainment System (SNES) is a home video game console developed byNintendo and released in 1990. A total of 783 games were released, among them, the iconic SuperMario World ,Donkey Kong Country andThe Legend of Zelda . Table (1) presents a comparisonbetween Atari 2600, Sega Genesis and SNES game consoles, from which it is clear that SNES andGenesis games are far more complex.3.2 I MPLEMENTATIONTo allow easier integration with current platforms and algorithms, we based our environment on theALE, with the aim of maintaining as much of its interface as possible. 
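Equation (1) above did not survive text extraction. Reconstructed from the surrounding definitions (with θ′_t denoting the periodically updated target-network parameters used inside the max), it reads approximately:

```latex
\theta_{t+1} = \theta_t + \alpha \Big( R_{t+1} + \gamma \max_{a} Q\big(s_{t+1}, a; \theta'_t\big) - Q\big(s_t, a_t; \theta_t\big) \Big)\, \nabla_{\theta} Q\big(s_t, a_t; \theta_t\big) \tag{1}
```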
While the ALE is highlycoupled with the Atari emulator, Stella1, RLE takes a different approach and separates the learningenvironment from the emulator. This was achieved by incorporating an interface named LibRetro (li-bRetro site), that allows communication between front-end programs to game-console emulators.Currently, LibRetro supports over 15 game consoles, each containing hundreds of games, at an esti-mated total of over 7,000 games that can potentially be supported using this interface. Examples ofsupported game consoles include Nintendo Entertainment System, Game Boy, N64, Sega Genesis,1http://stella.sourceforge.net/3Saturn, Dreamcast and Sony PlayStation . We chose to focus on the SNES game console imple-mented using the snes9x2as it’s games present interesting, yet plausible to overcome challenges.Additionally, we utilized the Genesis-Plus-GX3emulator, which supports several Sega consoles:Genesis/Mega Drive, Master System, Game Gear and SG-1000.3.3 S OURCE CODERLE is fully available as open source software for use under GNU’s General Public License4. Theenvironment is implemented in C++ with an interface to algorithms in C++, Python and Lua. Addinga new game to the environment is a relatively simple process.3.4 RLE I NTERFACERLE provides a unified interface to all games in its supported consoles, acting as an RL-wrapper tothe LibRetro interface. Initialization of the environment is done by providing a game (ROM file)and a gaming-console (denoted by ’core’). Upon initialization, the first state is the initial frame ofthe game, skipping all menu selection screens. The cores are provided with the RLE and installedtogether with the environment. Actions have a bit-wise representation where each controller buttonis represented by a one-hot vector. Therefore a combination of several buttons is possible usingthe bit-wise OR operator. The number of valid buttons combinations is larger than 700, thereforeonly the meaningful combinations are provided. The environments observation is the game screen,provided as a 3D array of 32 bit per pixel with dimensions which vary depending on the game. Thereward can be defined differently per game, usually we set it to be the score difference betweentwo consecutive frames. By setting different configuration to the environment, it is possible to alterin-game properties such as difficulty (i.e easy, medium, hard), its characters, levels, etc.Table 1: Atari 2600, SNES and Genesis comparisonAtari 2600 SNES GenesisNumber of Games 565 783 928CPU speed 1.19MHz 3.58MHz 7.6 MHzROM size 2-4KB 0.5-6MB 16 MBytesRAM size 128 bytes 128KB 72KBColor depth 8 bit 16 bit 16 bitScreen Size 160x210 256x224 or 512x448 320x224Number of controller buttons 5 12 11Possible buttons combinations 18 over 720 over 1003.5 E NVIRONMENT CHALLENGESIntegrating SNES and Genesis with RLE presents new challenges to the field of RL where visualinformation in the form of an image is the only state available to the agent. Obviously, SNES gamesare significantly more complex and unpredictable than Atari games. For example in sports games,such as NBA, while the player (agent) controls a single player, all the other nine players’ behavior isdetermined by pre-programmed agents, each exhibiting random behavior. In addition, many SNESgames exhibit delayed rewards in the course of their play (i.e., reward for an actions is given manytime steps after it was performed). Similarly, in some of the SNES games, an agent can obtain areward that is indirectly related to the imposed task. 
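The bit-wise action representation described above (one bit per controller button, combined with OR) can be sketched as follows; the button-to-bit mapping is an assumption for illustration and is not RLE's actual encoding.

```python
# Hypothetical SNES button-to-bit mapping (illustrative only, not RLE's actual values).
BUTTONS = {"B": 0, "Y": 1, "SELECT": 2, "START": 3,
           "UP": 4, "DOWN": 5, "LEFT": 6, "RIGHT": 7,
           "A": 8, "X": 9, "L": 10, "R": 11}

def encode_action(*pressed):
    # Combine several buttons into a single action value via bit-wise OR,
    # mirroring the one-hot-per-button representation described above.
    action = 0
    for button in pressed:
        action |= 1 << BUTTONS[button]
    return action

run_and_jump_right = encode_action("RIGHT", "B", "Y")   # example combination
```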
For example, in platform games, such as SuperMario , reward is received for collecting coins and defeating enemies, while the goal of the challengeis to reach the end of the level which requires to move to keep moving to the right. Moreover,upon completing a level, a score bonus is given according to the time required for its completion.Therefore collecting coins or defeating enemies is not necessarily preferable if it consumes too muchtime. Analysis of such games is presented in section 4.2. Moreover, unlike Atari that consists of2http://www.snes9x.com/3https://github.com/ekeeke/Genesis-Plus-GX4https://github.com/nadavbh12/Retro-Learning-Environment4eight directions and one action button, SNES has eight-directions pad and six actions buttons. Sincecombinations of buttons are allowed, and required at times, the actual actions space may be largerthan 700, compared to the maximum of 18 actions in Atari. Furthermore, the background in SNESis very rich, filled with details which may move locally or across the screen, effectively acting asnon-stationary noise since it provided little to no information regarding the state itself. Finally, wenote that SNES utilized the first 3D games. In the game Wolfenstein , the player must navigate amaze from a first-person perspective, while dodging and attacking enemies. The SNES offers plentyof other 3D games such as flight and racing games which exhibit similar challenges. These gamesare much more realistic, thus inferring from SNES games to ”real world” tasks, as in the case ofself driving cars, might be more beneficial. A visual comparison of two games, Atari and SNES, ispresented in Figure (1).Figure 1: Atari 2600 and SNES game screen comparison: Left: ”Boxing” an Atari 2600 fightinggame , Right: ”Mortal Kombat” a SNES fighting game. Note the exceptional difference in theamount of details between the two games. Therefore, distinguishing a relevant signal from noise ismuch more difficult.Table 2: Comparison between RLE and the latest RL environmentsCharacteristics RLE OpenAI Inifinte ALE Project DeepMindUniverse Mario Malmo LabNumber of Games 8 out of 7000+ 1000+ 1 74 1 4In game Yes NO No No Yes Yesadjustments1Frame rate 530fps2(SNES) 60fps 5675fps2120fps<7000fps <1000fpsObservation (Input) screen, Screen hand crafted screen, hand crafted screen + depthRAM features RAM features and velocity1Allowing changes in-the game configurations (e.g., changing difficulty, characters,etc.)2Measured on an i7-5930k CPU4 E XPERIMENTS4.1 E VALUATION METHODOLOGYThe evaluation methodology that we used for benchmarking the different algorithms is the popularmethod proposed by (Mnih et al., 2015). Each examined algorithm is trained until either it reachedconvergence or 100 epochs (each epoch corresponds to 50,000 actions), thereafter it is evaluated byperforming 30 episodes of every game. Each episode ends either by reaching a terminal state orafter 5 minutes. The results are averaged per game and compared to the average result of a humanplayer. For each game the human player was given two hours for training, and his performanceswere evaluated over 20 episodes. As the various algorithms don’t use the game audio in the learningprocess, the audio was muted for both the agent and the human. From both, humans and agents5score, a random agent score (an agent performing actions randomly) was subtracted to assure thatlearning indeed occurred. 
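The evaluation above compares agents to a human on a random-agent-relative scale (Figure 2 later reports scores normalized this way). A minimal sketch of that normalization, assuming the random-agent score is subtracted from both the agent's and the human's averages as stated:

```python
def human_normalized_score(agent_score, human_score, random_score):
    # 100 corresponds to human-level play, 0 to a random agent.
    return 100.0 * (agent_score - random_score) / (human_score - random_score)

# Example with the Mortal Kombat averages from Table 3 (the random-agent score
# below is a placeholder, not a number reported in the paper):
print(human_normalized_score(agent_score=169300, human_score=132441, random_score=0))
```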
It is important to note that DQN’s -greedy approach (select a randomaction with a small probability ) is present during testing thus assuring that the same sequenceof actions isn’t repeated. While the screen dimensions in SNES are larger than those of Atari, inour experiments we maintained the same pre-processing of DQN (i.e., downscaling the image to84x84 pixels and converting to gray-scale). We argue that downscaling the image size doesn’t affecta human’s ability to play the game, therefore suitable for RL algorithms as well. To handle thelarge action space, we limited the algorithm’s actions to the minimal button combinations whichprovide unique behavior. For example, on many games the R and L action buttons don’t have anyuse therefore their use and combinations were omitted.4.1.1 R ESULTSA thorough comparison of the four different agents’ performances on SNES games can be seen inFigure (). The full results can be found in Table (3). Only in the game Mortal Kombat a trainedagent was able to surpass a expert human player performance as opposed to Atari games where thesame algorithms have surpassed a human player on the vast majority of the games.One example is Wolfenstein game, a 3D first-person shooter game, requires solving 3D vision tasks,navigating in a maze and detecting object. As evident from figure (2), all agents produce poor resultsindicating a lack of the required properties. By using -greedy approach the agents weren’t able toexplore enough states (or even other rooms in our case). The algorithm’s final policy appeared asa random walk in a 3D space. Exploration based on visited states such as presented in Bellemareet al. (2016) might help addressing this issue. An interesting case is Gradius III, a side-scrolling,flight-shooter game. While the trained agent was able to master the technical aspects of the game,which includes shooting incoming enemies and dodging their projectiles, it’s final score is still farfrom a human’s. This is due to a hidden game mechanism in the form of ”power-ups”, which can beaccumulated, and significantly increase the players abilities. The more power-ups collected withoutbeing use — the larger their final impact will be. While this game-mechanism is evident to a human,the agent acts myopically and uses the power-up straight away5.4.2 R EWARD SHAPINGAs part of the environment and algorithm evaluation process, we investigated two case studies. Firstis a game on which DQN had failed to achieve a better-than-random score, and second is a game onwhich the training duration was significantly longer than that of other games.In the first case study, we used a 2D back-view racing game ”F-Zero”. In this game, one is requiredto complete four laps of the track while avoiding other race cars. The reward, as defined by the scoreof the game, is only received upon completing a lap. This is an extreme case of a reward delay. A lapmay last as long as 30 seconds, which span over 450 states (actions) before reward is received. SinceDQN’s exploration is a simple -greedy approach, it was not able to produce a useful strategy. Weapproached this issue using reward shaping, essentially a modification of the reward to be a functionof the reward and the observation, rather than the reward alone. Here, we define the reward to bethe sum of the score and the agent’s speed (a metric displayed on the screen of the game). 
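The F-Zero reward shaping just described amounts to adding an observation-derived speed term to the in-game score difference. A minimal sketch, assuming the speed reading has already been extracted from the game screen (that accessor is hypothetical, and the weighting knob is an addition; the paper simply sums the two terms):

```python
def shaped_reward(score_delta, speed, speed_weight=1.0):
    # score_delta: score(t) - score(t-1), the unshaped per-step reward.
    # speed:       the car's speed as read from the game screen (hypothetical input).
    # speed_weight is an added knob; with the default of 1.0 this is the plain sum.
    return score_delta + speed_weight * speed

r = shaped_reward(score_delta=0, speed=412)   # reward still flows between lap completions
```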
Indeedwhen the reward was defined as such, the agents learned to finish the race in first place within a shorttraining period.The second case study is the famous game of Super Mario. In this game the agent, Mario, is requiredto reach the right-hand side of the screen, while avoiding enemies and collecting coins. We foundthis case interesting as it involves several challenges at once: dynamic background that can changedrastically within a level, sparse and delayed rewards and multiple tasks (such as avoiding enemiesand pits, advancing rightwards and collecting coins). To our surprise, DQN was able to reach theend of the level without any reward shaping, this was possible since the agent receives rewards forevents (collecting coins, stomping on enemies etc.) that tend to appear to the right of the player,causing the agent to prefer moving right. However, the training time required for convergence wassignificantly longer than other games. We defined the reward as the sum of the in-game reward anda bonus granted according the the player’s position, making moving right preferable. This reward5A video demonstration can be found at https://youtu.be/nUl9XLMveEU6Figure 2: DQN, DDQN and Duel-DDQN performance. Results were normalized by subtracting thea random agent’s score and dividing by the human player score. Thus 100 represents a human playerand zero a random agent.proved useful, as training time required for convergence decreased significantly. The two gamesabove can be seen in Figure (3).Figure (4) illustrates the agent’s average value function . Though both were able complete the stagetrained upon, the convergence rate with reward shaping is significantly quicker due to the immediaterealization of the agent to move rightwards.Figure 3: Left: The game Super Mario with added bonus for moving right, enabling the agent tomaster them game after less training time. Right: The game F-Zero . By granting a reward for speedthe agent was able to master this game, as oppose to using solely the in-game reward.7Figure 4: Averaged action-value (Q) for Super Mario trained with reward bonus for moving right(blue) and without (red).4.3 M ULTI -AGENT REINFORCEMENT LEARNINGIn this section we describe our experiments with RLE’s multi-agent capabilities. We consider thecase where the number of agents, n= 2 and the goals of the agents are opposite, as in r1=r2.This scheme is known as fully competitive (Bus ̧oniu et al., 2010). We used the simple single-agent RL approach (as described by Bus ̧oniu et al. (2010) section 5.4.1) which is to apply to sin-gle agent approach to the multi-agent case. This approach was proved useful in Crites and Barto(1996) and Matari ́c (1997). More elaborate schemes are possible such as the minimax-Q algo-rithm (Littman, 1994), (Littman, 2001). These may be explored in future works. We conductedthree experiments on this setup: the first use was to train two different agents against the in-gameAI, as done in previous sections, and evaluate their performance by letting them compete againsteach other. Here, rather than achieving the highest score, the goal was to win a tournament whichconsist of 50 rounds, as common in human-player competitions. The second experiment was toinitially train two agents against the in-game AI, and resume the training while competing againsteach other. 
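The fully competitive setup used in these multi-agent experiments (r1 = -r2, each agent trained with its own single-agent learner) can be sketched as below. Everything here is a schematic stand-in: the toy agent, the fake environment step and the reward signs illustrate the scheme, not RLE's actual multi-agent API.

```python
import random

class ToyAgent:
    # Stand-in for a trained DQN / Dueling D-DQN agent (illustrative only).
    def act(self, obs):
        return random.randrange(8)
    def observe(self, obs, reward):
        pass  # a real agent would store the transition and run a learning update

def toy_env_step(action_a, action_b):
    # Placeholder for one emulator step returning (observation, reward for agent A, done).
    return None, random.choice([-1, 0, 1]), random.random() < 0.01

def competitive_episode(agent_a, agent_b, env_step, max_steps=4500):
    obs = None
    for _ in range(max_steps):
        obs, r_a, done = env_step(agent_a.act(obs), agent_b.act(obs))
        agent_a.observe(obs, r_a)
        agent_b.observe(obs, -r_a)   # fully competitive: the rival receives the negated reward
        if done:
            break

competitive_episode(ToyAgent(), ToyAgent(), toy_env_step)
```

Cycling agent_b over an ensemble of opponents (the in-game AI and previously trained agents) between episodes gives the alternating-rival variant evaluated in the experiments that follow.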
In this case, we evaluated the agent by playing again against the in-game AI, separately.Finally, in our last experiment we try to boost the agent capabilities by alternated it’s opponents,switching between the in-game AI and other trained agents.4.3.1 M ULTI -AGENT REINFORCEMENT LEARNING RESULTSWe chose the game Mortal Kombat , a two character side viewed fighting game (a screenshot ofthe game can be seen in Figure (1), as a testbed for the above, as it exhibits favorable properties:both players share the same screen, the agent’s optimal policy is heavily dependent on the rival’sbehavior, unlike racing games for example. In order to evaluate two agents fairly, both were trainedusing the same characters maintaining the identity of rival and agent. Furthermore, to remove theimpact of the starting positions of both agents on their performances, the starting positions wereinitialized randomly.In the first experiment we evaluated all combinations of DQN against D-DQN and Dueling D-DQN.Each agent was trained against the in-game AI until convergence. Then 50 matches were performedbetween the two agents. DQN lost 28 out of 50 games against Dueling D-DQN and 33 againstD-DQN. D-DQN lost 26 time to Dueling D-DQN. This win balance isn’t far from the randomcase, since the algorithms converged into a policy in which movement towards the opponent is not8required rather than generalize the game. Therefore, in many episodes, little interaction between thetwo agents occur, leading to a semi-random outcome.In our second experiment, we continued the training process of a the D-DQN network by letting itcompete against the Dueling D-DQN network. We evaluated the re-trained network by playing 30episodes against the in-game AI. After training, D-DQN was able to win 28 out of 30 games, yetwhen faced again against the in-game AI its performance deteriorated drastically (from an average of17000 to an average of -22000). This demonstrated a form of catastrophic forgetting (Goodfellowet al., 2013) even though the agents played the same game.In our third experiment, we trained a Dueling D-DQN agent against three different rivals: the in-game AI, a trained DQN agent and a trained Dueling-DQN agent, in an alternating manner, suchthat in each episode a different rival was playing as the opponent with the intention of preventingthe agent from learning a policy suitable for just one opponent. The new agent was able to achievea score of 162,966 (compared to the ”normal” dueling D-DQN which achieved 169,633). As anew and objective measure of generalization, we’ve configured the in-game AI difficulty to be ”veryhard” (as opposed to the default ”medium” difficulty). In this metric the alternating version achieved83,400 compared to -33,266 of the dueling D-DQN which was trained in default setting. Thus,proving that the agent learned to generalize to other policies which weren’t observed while training.4.4 F UTURE CHALLENGESAs demonstrated, RLE presents numerous challenges that have yet to be answered. In addition tobeing able to learn all available games, the task of learning games in which reward delay is extreme,such as F-Zero without reward shaping, remains an unsolved challenge. Additionally, some games,such as Super Mario, feature several stages that differ in background and the levels structure. Thetask of generalizing platform games, as in learning on one stage and being tested on the other, isanother unexplored challenge. 
Likewise surpassing human performance remains a challenge sincecurrent state-of-the-art algorithms still struggling with the many SNES games.5 C ONCLUSIONWe introduced a rich environment for evaluating and developing reinforcement learning algorithmswhich presents significant challenges to current state-of-the-art algorithms. In comparison to otherenvironments RLE provides a large amount of games with access to both the screen and the in-game state. The modular implementation we chose allows extensions of the environment with newconsoles and games, thus ensuring the relevance of the environment to RL algorithms for years tocome (see Table (2)). We’ve encountered several games in which the learning process is highlydependent on the reward definition. This issue can be addressed and explored in RLE as rewarddefinition can be done easily. The challenges presented in the RLE consist of: 3D interpretation,delayed reward, noisy background, stochastic AI behavior and more. Although some algorithmswere able to play successfully on part of the games, to fully overcome these challenges, an agentmust incorporate both technique and strategy. Therefore, we believe, that the RLE is a great platformfor future RL research.6 A CKNOWLEDGMENTSThe authors are grateful to the Signal and Image Processing Lab (SIPL) staff for their support, AlfredAgrell and the LibRetro community for their support and Marc G. Bellemare for his valuable inputs.REFERENCESM. G. Bellemare, Y . Naddaf, J. Veness, and M. Bowling. The arcade learning environment: Anevaluation platform for general agents. Journal of Artificial Intelligence Research , 47:253–279,jun 2013.9M. G. Bellemare, S. Srinivasan, G. Ostrovski, T. Schaul, D. Saxton, and R. Munos. Unifying count-based exploration and intrinsic motivation. arXiv preprint arXiv:1606.01868 , 2016.B. Bischoff, D. Nguyen-Tuong, I.-H. Lee, F. Streichert, and A. Knoll. Hierarchical reinforcementlearning for robot navigation. In ESANN , 2013.G. Brockman, V . Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. Openaigym. arXiv preprint arXiv:1606.01540 , 2016.L. Bus ̧oniu, R. Babu ˇska, and B. De Schutter. Multi-agent reinforcement learning: An overview. InInnovations in Multi-Agent Systems and Applications-1 , pages 183–221. Springer, 2010.M. Campbell, A. J. Hoane, and F.-h. Hsu. Deep blue. Artificial Intelligence , 134(1):57–83, 2002.R. Crites and A. Barto. Improving elevator performance using reinforcement learning. In Advancesin Neural Information Processing Systems 8 . Citeseer, 1996.I. J. Goodfellow, M. Mirza, D. Xiao, A. Courville, and Y . Bengio. An empirical investigation ofcatastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211 , 2013.M. Johnson, K. Hofmann, T. Hutton, and D. Bignell. The malmo platform for artificial intelligenceexperimentation. In International Joint Conference On Artificial Intelligence (IJCAI) , page 4246,2016.libRetro site. Libretro. www.libretro.com. Accessed: 2016-11-03.M. L. Littman. Markov games as a framework for multi-agent reinforcement learning. In Proceed-ings of the eleventh international conference on machine learning , volume 157, pages 157–163,1994.M. L. Littman. Value-function reinforcement learning in markov games. Cognitive Systems Re-search , 2(1):55–66, 2001.M. J. Matari ́c. Reinforcement learning in the multi-robot domain. In Robot colonies , pages 73–83.Springer, 1997.V . Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Ried-miller, A. K. 
Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcementlearning. Nature , 518(7540):529–533, 2015.J. Schaeffer, J. Culberson, N. Treloar, B. Knight, P. Lu, and D. Szafron. A world championshipcaliber checkers program. Artificial Intelligence , 53(2):273–289, 1992.S. Shalev-Shwartz, N. Ben-Zrihem, A. Cohen, and A. Shashua. Long-term planning by short-termprediction. arXiv preprint arXiv:1602.01580 , 2016.D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser,I. Antonoglou, V . Panneershelvam, M. Lanctot, et al. Mastering the game of go with deep neuralnetworks and tree search. Nature , 529(7587):484–489, 2016.G. Tesauro. Temporal difference learning and td-gammon. Communications of the ACM , 38(3):58–68, 1995.J. Togelius, S. Karakovskiy, J. Koutn ́ık, and J. Schmidhuber. Super mario evolution. In 2009 IEEESymposium on Computational Intelligence and Games , pages 156–161. IEEE, 2009.Universe. Universe. universe.openai.com, 2016. Accessed: 2016-12-13.H. Van Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with double q-learning. CoRR,abs/1509.06461 , 2015.Z. Wang, N. de Freitas, and M. Lanctot. Dueling network architectures for deep reinforcementlearning. arXiv preprint arXiv:1511.06581 , 2015.Y . Zhu, R. Mottaghi, E. Kolve, J. J. Lim, A. Gupta, L. Fei-Fei, and A. Farhadi. Target-driven visualnavigation in indoor scenes using deep reinforcement learning. arXiv preprint arXiv:1609.05143 ,2016.10AppendicesExperimental ResultsTable 3: Average results of DQN ,D-DQN ,Dueling D-DQN and a Human playerDQN D-DQN Dueling D-DQN HumanF-Zero 3116 3636 5161 6298Gradius III 7583 12343 16929 24440Mortal Kombat 83733 56200 169300 132441Super Mario 11765 16946 20030 36386Wolfenstein 100 83 40 295211
H1f6QHHVl
HysBZSqlx
ICLR.cc/2017/conference/-/paper238/official/review
{"title": "Final review: Nice software contribution, expected more significant scientific contributions", "rating": "5: Marginally below acceptance threshold", "review": "The paper presents a new environment, called Retro Learning Environment (RLE), for reinforcement learning. The authors focus on Super Nintendo but claim that the interface supports many others (including ALE). Benchmark results are given for standard algorithms in 5 new Super Nintendo games, and some results using a new \"rivalry metric\".\n\nThese environments (or, more generally, standardized evaluation methods like public data sets, competitions, etc.) have a long history of improving the quality of AI and machine learning research. One example in the past few years was the Atari Learning Environment (ALE) which has now turned into a standard benchmark for comparison of algorithms and results. In this sense, the RLE could be a worthy contribution to the field by encouraging new challenging domains for research.\n\nThat said, the main focus of this paper is presenting this new framework and showcasing the importance of new challenging domains. The results of experiments themselves are for existing algorithms. There are some new results that show reward shaping and policy shaping (having a bias toward going right in Super Mario) help during learning. And, yes, domain knowledge helps, but this is obvious. The rivalry training is an interesting idea, when training against a different opponent, the learner overfits to that opponent and forgets to play against the in-game AI; but then oddly, it gets evaluated on how well it does against the in-game AI! \n\nAlso the part of the paper that describes the scientific results (especially the rivalry training) is less polished, so this is disappointing. In the end, I'm not very excited about this paper.\n\nI was hoping for a more significant scientific contribution to accompany in this new environment. It's not clear if this is necessary for publication, but also it's not clear that ICLR is the right venue for this work due to the contribution being mainly about the new code (for example, mloss.org could be a better 'venue', JMLR has an associated journal track for accompanying papers: http://www.jmlr.org/mloss/)\n\n--- Post response:\n\nThank you for the clarifications. Ultimately I have not changed my opinion on the paper. Though I do think RLE could have a nice impact long-term, there is little new science in this paper, ad it's either too straight-forward (reward shaping, policy-shaping) or not quite developed enough (rivalry training).", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Playing SNES in the Retro Learning Environment
["Nadav Bhonker", "Shai Rozenberg", "Itay Hubara"]
Mastering a video game requires skill, tactics and strategy. While these attributes may be acquired naturally by human players, teaching them to a computer program is a far more challenging task. In recent years, extensive research was carried out in the field of reinforcement learning and numerous algorithms were introduced, aiming to learn how to perform human tasks such as playing video games. As a result, the Arcade Learning Environment (ALE) (Bellemare et al., 2013) has become a commonly used benchmark environment allowing algorithms to train on various Atari 2600 games. In many games the state-of-the-art algorithms outperform humans. In this paper we introduce a new learning environment, the Retro Learning Environment — RLE, that can run games on the Super Nintendo Entertainment System (SNES), Sega Genesis and several other gaming consoles. The environment is expandable, allowing for more video games and consoles to be easily added to the environment, while maintaining the same interface as ALE. Moreover, RLE is compatible with Python and Torch. SNES games pose a significant challenge to current algorithms due to their higher level of complexity and versatility.
["Reinforcement Learning", "Deep learning", "Games"]
https://openreview.net/forum?id=HysBZSqlx
https://openreview.net/pdf?id=HysBZSqlx
https://openreview.net/forum?id=HysBZSqlx&noteId=H1f6QHHVl
PLAYING SNES INTHE RETRO LEARNING ENVIRONMENTNadav Bhonker*, Shai Rozenberg* and Itay HubaraDepartment of Electrical EngineeringTechnion, Israel Institute of Technology(*) indicates equal contributionfnadavbh,shairoz g@tx.technion.ac.ilitayhubara@gmail.comABSTRACTMastering a video game requires skill, tactics and strategy. While these attributesmay be acquired naturally by human players, teaching them to a computer pro-gram is a far more challenging task. In recent years, extensive research was carriedout in the field of reinforcement learning and numerous algorithms were intro-duced, aiming to learn how to perform human tasks such as playing video games.As a result, the Arcade Learning Environment (ALE) (Bellemare et al., 2013) hasbecome a commonly used benchmark environment allowing algorithms to train onvarious Atari 2600 games. In many games the state-of-the-art algorithms outper-form humans. In this paper we introduce a new learning environment, the RetroLearning Environment — RLE, that can run games on the Super Nintendo Enter-tainment System (SNES), Sega Genesis and several other gaming consoles. Theenvironment is expandable, allowing for more video games and consoles to beeasily added to the environment, while maintaining the same interface as ALE.Moreover, RLE is compatible with Python and Torch. SNES games pose a signif-icant challenge to current algorithms due to their higher level of complexity andversatility.1 I NTRODUCTIONControlling artificial agents using only raw high-dimensional input data such as image or sound isa difficult and important task in the field of Reinforcement Learning (RL). Recent breakthroughs inthe field allow its utilization in real-world applications such as autonomous driving (Shalev-Shwartzet al., 2016), navigation (Bischoff et al., 2013) and more. Agent interaction with the real world isusually either expensive or not feasible, as the real world is far too complex for the agent to perceive.Therefore in practice the interaction is simulated by a virtual environment which receives feedbackon a decision made by the algorithm. Traditionally, games were used as a RL environment, datingback to Chess (Campbell et al., 2002), Checkers (Schaeffer et al., 1992), backgammon (Tesauro,1995) and the more recent Go (Silver et al., 2016). Modern games often present problems and taskswhich are highly correlated with real-world problems. For example, an agent that masters a racinggame, by observing a simulated driver’s view screen as input, may be usefull for the development ofan autonomous driver. For high-dimensional input, the leading benchmark is the Arcade LearningEnvironment (ALE) (Bellemare et al., 2013) which provides a common interface to dozens of Atari2600 games, each presents a different challenge. ALE provides an extensive benchmarking plat-form, allowing a controlled experiment setup for algorithm evaluation and comparison. The mainchallenge posed by ALE is to successfully play as many Atari 2600 games as possible (i.e., achiev-ing a score higher than an expert human player) without providing the algorithm any game-specificinformation (i.e., using the same input available to a human - the game screen and score). A keywork to tackle this problem is the Deep Q-Networks algorithm (Mnih et al., 2015), which made abreakthrough in the field of Deep Reinforcement Learning by achieving human level performanceon 29 out of 49 games. In this work we present a new environment — the Retro Learning Environ-ment (RLE). 
RLE sets new challenges by providing a unified interface for Atari 2600 games as wellas more advanced gaming consoles. As a start we focused on the Super Nintendo Entertainment1System (SNES). Out of the five SNES games we tested using state-of-the-art algorithms, only onewas able to outperform an expert human player. As an additional feature, RLE supports research ofmulti-agent reinforcement learning (MARL) tasks (Bus ̧oniu et al., 2010). We utilize this feature bytraining and evaluating the agents against each other, rather than against a pre-configured in-gameAI. We conducted several experiments with this new feature and discovered that agents tend to learnhow to overcome their current opponent rather than generalize the game being played. However, ifan agent is trained against an ensemble of different opponents, its robustness increases. The maincontributions of the paper are as follows:Introducing a novel RL environment with significant challenges and an easy agent evalu-ation technique (enabling agents to compete against each other) which could lead to newand more advanced RL algorithms.A new method to train an agent by enabling it to train against several opponents, makingthe final policy more robust.Encapsulating several different challenges to a single RL environment.2 R ELATED WORK2.1 A RCADE LEARNING ENVIRONMENTThe Arcade Learning Environment is a software framework designed for the development of RLalgorithms, by playing Atari 2600 games. The interface provided by ALE allows the algorithms toselect an action and receive the Atari screen and a reward in every step. The action is the equivalentto a human’s joystick button combination and the reward is the difference between the scores attime stamptandt1. The diversity of games for Atari provides a solid benchmark since differentgames have significantly different goals. Atari 2600 has over 500 games, currently over 70 of themare implemented in ALE and are commonly used for algorithm comparison.2.2 I NFINITE MARIOInfinite Mario (Togelius et al., 2009) is a remake of the classic Super Mario game in which levels arerandomly generated. On these levels the Mario AI Competition was held. During the competition,several algorithms were trained on Infinite Mario and their performances were measured in terms ofthe number of stages completed. As opposed to ALE, training is not based on the raw screen databut rather on an indication of Mario’s (the player’s) location and objects in its surrounding. Thisenvironment no longer poses a challenge for state of the art algorithms. Its main shortcoming liein the fact that it provides only a single game to be learnt. Additionally, the environment provideshand-crafted features, extracted directly from the simulator, to the algorithm. This allowed the useof planning algorithms that highly outperform any learning based algorithm.2.3 O PENAI G YMThe OpenAI gym (Brockman et al., 2016) is an open source platform with the purpose of creatingan interface between RL environments and algorithms for evaluation and comparison purposes.OpenAI Gym is currently very popular due to the large number of environments supported by it.For example ALE, Go, MouintainCar andVizDoom (Zhu et al., 2016), an environment for thelearning of the 3D first-person-shooter game ”Doom”. 
OpenAI Gym’s recent appearance and wideusage indicates the growing interest and research done in the field of RL.2.4 O PENAI U NIVERSEUniverse (Universe, 2016) is a platform within the OpenAI framework in which RL algorithms cantrain on over a thousand games. Universe includes very advanced games such as GTA V , Portal aswell as other tasks (e.g. browser tasks). Unlike RLE, Universe doesn’t run the games locally andrequires a VNC interface to a server that runs the games. This leads to a lower frame rate and thuslonger training times.22.5 M ALMOMalmo (Johnson et al., 2016) is an artificial intelligence experimentation platform of the famousgame ”Minecraft” . Although Malmo consists of only a single game, it presents numerous challengessince the ”Minecraft” game can be configured differently each time. The input to the RL algorithmsinclude specific features indicating the ”state” of the game and the current reward.2.6 D EEPMINDLABDeepMind Lab (Dee) is a first-person 3D platform environment which allows training RL algorithmson several different challenges: static/random map navigation, collect fruit (a form of reward) anda laser-tag challenge where the objective is to tag the opponents controlled by the in-game AI. InLAB the agent observations are the game screen (with an additional depth channel) and the velocityof the character. LAB supports four games (one game - four different modes).2.7 D EEPQ-L EARNINGIn our work, we used several variant of the Deep Q-Network algorithm (DQN) (Mnih et al., 2015),an RL algorithm whose goal is to find an optimal policy (i.e., given a current state, choose actionthat maximize the final score). The state of the game is simply the game screen, and the action isa combination of joystick buttons that the game responds to (i.e., moving ,jumping). DQN learnsthrough trial and error while trying to estimate the ”Q-function”, which predicts the cumulativediscounted reward at the end of the episode given the current state and action while following apolicy. The Q-function is represented using a convolution neural network that receives the screenas input and predicts the best possible action at it’s output. The Q-function weights are updatedaccording to:t+1(st;at) =t+(Rt+1+maxa(Qt(st+1;a;0t))Qt(st;at;t))rQt(st;at;t);(1)wherest,st+1are the current and next states, atis the action chosen, is the step size, is thediscounting factor Rt+1is the reward received by applying atatst.0represents the previousweights of the network that are updated periodically. Other than DQN, we examined two leadingalgorithms on the RLE: Double Deep Q-Learning (D-DQN) (Van Hasselt et al., 2015), a DQNbased algorithm with a modified network update rule. Dueling Double DQN (Wang et al., 2015),a modification of D-DQN’s architecture in which the Q-function is modeled using a state (screen)dependent estimator and an action dependent estimator.3 T HERETRO LEARNING ENVIRONMENT3.1 S UPER NINTENDO ENTERTAINMENT SYSTEMThe Super Nintendo Entertainment System (SNES) is a home video game console developed byNintendo and released in 1990. A total of 783 games were released, among them, the iconic SuperMario World ,Donkey Kong Country andThe Legend of Zelda . Table (1) presents a comparisonbetween Atari 2600, Sega Genesis and SNES game consoles, from which it is clear that SNES andGenesis games are far more complex.3.2 I MPLEMENTATIONTo allow easier integration with current platforms and algorithms, we based our environment on theALE, with the aim of maintaining as much of its interface as possible. 
While the ALE is highlycoupled with the Atari emulator, Stella1, RLE takes a different approach and separates the learningenvironment from the emulator. This was achieved by incorporating an interface named LibRetro (li-bRetro site), that allows communication between front-end programs to game-console emulators.Currently, LibRetro supports over 15 game consoles, each containing hundreds of games, at an esti-mated total of over 7,000 games that can potentially be supported using this interface. Examples ofsupported game consoles include Nintendo Entertainment System, Game Boy, N64, Sega Genesis,1http://stella.sourceforge.net/3Saturn, Dreamcast and Sony PlayStation . We chose to focus on the SNES game console imple-mented using the snes9x2as it’s games present interesting, yet plausible to overcome challenges.Additionally, we utilized the Genesis-Plus-GX3emulator, which supports several Sega consoles:Genesis/Mega Drive, Master System, Game Gear and SG-1000.3.3 S OURCE CODERLE is fully available as open source software for use under GNU’s General Public License4. Theenvironment is implemented in C++ with an interface to algorithms in C++, Python and Lua. Addinga new game to the environment is a relatively simple process.3.4 RLE I NTERFACERLE provides a unified interface to all games in its supported consoles, acting as an RL-wrapper tothe LibRetro interface. Initialization of the environment is done by providing a game (ROM file)and a gaming-console (denoted by ’core’). Upon initialization, the first state is the initial frame ofthe game, skipping all menu selection screens. The cores are provided with the RLE and installedtogether with the environment. Actions have a bit-wise representation where each controller buttonis represented by a one-hot vector. Therefore a combination of several buttons is possible usingthe bit-wise OR operator. The number of valid buttons combinations is larger than 700, thereforeonly the meaningful combinations are provided. The environments observation is the game screen,provided as a 3D array of 32 bit per pixel with dimensions which vary depending on the game. Thereward can be defined differently per game, usually we set it to be the score difference betweentwo consecutive frames. By setting different configuration to the environment, it is possible to alterin-game properties such as difficulty (i.e easy, medium, hard), its characters, levels, etc.Table 1: Atari 2600, SNES and Genesis comparisonAtari 2600 SNES GenesisNumber of Games 565 783 928CPU speed 1.19MHz 3.58MHz 7.6 MHzROM size 2-4KB 0.5-6MB 16 MBytesRAM size 128 bytes 128KB 72KBColor depth 8 bit 16 bit 16 bitScreen Size 160x210 256x224 or 512x448 320x224Number of controller buttons 5 12 11Possible buttons combinations 18 over 720 over 1003.5 E NVIRONMENT CHALLENGESIntegrating SNES and Genesis with RLE presents new challenges to the field of RL where visualinformation in the form of an image is the only state available to the agent. Obviously, SNES gamesare significantly more complex and unpredictable than Atari games. For example in sports games,such as NBA, while the player (agent) controls a single player, all the other nine players’ behavior isdetermined by pre-programmed agents, each exhibiting random behavior. In addition, many SNESgames exhibit delayed rewards in the course of their play (i.e., reward for an actions is given manytime steps after it was performed). Similarly, in some of the SNES games, an agent can obtain areward that is indirectly related to the imposed task. 
For example, in platform games, such as SuperMario , reward is received for collecting coins and defeating enemies, while the goal of the challengeis to reach the end of the level which requires to move to keep moving to the right. Moreover,upon completing a level, a score bonus is given according to the time required for its completion.Therefore collecting coins or defeating enemies is not necessarily preferable if it consumes too muchtime. Analysis of such games is presented in section 4.2. Moreover, unlike Atari that consists of2http://www.snes9x.com/3https://github.com/ekeeke/Genesis-Plus-GX4https://github.com/nadavbh12/Retro-Learning-Environment4eight directions and one action button, SNES has eight-directions pad and six actions buttons. Sincecombinations of buttons are allowed, and required at times, the actual actions space may be largerthan 700, compared to the maximum of 18 actions in Atari. Furthermore, the background in SNESis very rich, filled with details which may move locally or across the screen, effectively acting asnon-stationary noise since it provided little to no information regarding the state itself. Finally, wenote that SNES utilized the first 3D games. In the game Wolfenstein , the player must navigate amaze from a first-person perspective, while dodging and attacking enemies. The SNES offers plentyof other 3D games such as flight and racing games which exhibit similar challenges. These gamesare much more realistic, thus inferring from SNES games to ”real world” tasks, as in the case ofself driving cars, might be more beneficial. A visual comparison of two games, Atari and SNES, ispresented in Figure (1).Figure 1: Atari 2600 and SNES game screen comparison: Left: ”Boxing” an Atari 2600 fightinggame , Right: ”Mortal Kombat” a SNES fighting game. Note the exceptional difference in theamount of details between the two games. Therefore, distinguishing a relevant signal from noise ismuch more difficult.Table 2: Comparison between RLE and the latest RL environmentsCharacteristics RLE OpenAI Inifinte ALE Project DeepMindUniverse Mario Malmo LabNumber of Games 8 out of 7000+ 1000+ 1 74 1 4In game Yes NO No No Yes Yesadjustments1Frame rate 530fps2(SNES) 60fps 5675fps2120fps<7000fps <1000fpsObservation (Input) screen, Screen hand crafted screen, hand crafted screen + depthRAM features RAM features and velocity1Allowing changes in-the game configurations (e.g., changing difficulty, characters,etc.)2Measured on an i7-5930k CPU4 E XPERIMENTS4.1 E VALUATION METHODOLOGYThe evaluation methodology that we used for benchmarking the different algorithms is the popularmethod proposed by (Mnih et al., 2015). Each examined algorithm is trained until either it reachedconvergence or 100 epochs (each epoch corresponds to 50,000 actions), thereafter it is evaluated byperforming 30 episodes of every game. Each episode ends either by reaching a terminal state orafter 5 minutes. The results are averaged per game and compared to the average result of a humanplayer. For each game the human player was given two hours for training, and his performanceswere evaluated over 20 episodes. As the various algorithms don’t use the game audio in the learningprocess, the audio was muted for both the agent and the human. From both, humans and agents5score, a random agent score (an agent performing actions randomly) was subtracted to assure thatlearning indeed occurred. 
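As a concrete illustration of the scoring convention used in this evaluation, a minimal sketch of a human-normalized score is given below. The precise formula is an assumption, chosen to be consistent with the convention described later for Figure 2 (0 corresponds to a random agent and 100 to the human player), and the random-agent score in the example is a placeholder, not a reported number.

```python
# Sketch of a human-normalized score: 0 corresponds to a random agent and
# 100 to the average human player. The exact convention is an assumption.
def normalized_score(agent_score, random_score, human_score):
    return 100.0 * (agent_score - random_score) / (human_score - random_score)

# Example using the Gradius III averages from Table 3 (Dueling D-DQN vs. human);
# the random-agent score of 0 is a placeholder for illustration only.
print(normalized_score(agent_score=16929, random_score=0, human_score=24440))  # ~69.3
```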
It is important to note that DQN’s -greedy approach (select a randomaction with a small probability ) is present during testing thus assuring that the same sequenceof actions isn’t repeated. While the screen dimensions in SNES are larger than those of Atari, inour experiments we maintained the same pre-processing of DQN (i.e., downscaling the image to84x84 pixels and converting to gray-scale). We argue that downscaling the image size doesn’t affecta human’s ability to play the game, therefore suitable for RL algorithms as well. To handle thelarge action space, we limited the algorithm’s actions to the minimal button combinations whichprovide unique behavior. For example, on many games the R and L action buttons don’t have anyuse therefore their use and combinations were omitted.4.1.1 R ESULTSA thorough comparison of the four different agents’ performances on SNES games can be seen inFigure (). The full results can be found in Table (3). Only in the game Mortal Kombat a trainedagent was able to surpass a expert human player performance as opposed to Atari games where thesame algorithms have surpassed a human player on the vast majority of the games.One example is Wolfenstein game, a 3D first-person shooter game, requires solving 3D vision tasks,navigating in a maze and detecting object. As evident from figure (2), all agents produce poor resultsindicating a lack of the required properties. By using -greedy approach the agents weren’t able toexplore enough states (or even other rooms in our case). The algorithm’s final policy appeared asa random walk in a 3D space. Exploration based on visited states such as presented in Bellemareet al. (2016) might help addressing this issue. An interesting case is Gradius III, a side-scrolling,flight-shooter game. While the trained agent was able to master the technical aspects of the game,which includes shooting incoming enemies and dodging their projectiles, it’s final score is still farfrom a human’s. This is due to a hidden game mechanism in the form of ”power-ups”, which can beaccumulated, and significantly increase the players abilities. The more power-ups collected withoutbeing use — the larger their final impact will be. While this game-mechanism is evident to a human,the agent acts myopically and uses the power-up straight away5.4.2 R EWARD SHAPINGAs part of the environment and algorithm evaluation process, we investigated two case studies. Firstis a game on which DQN had failed to achieve a better-than-random score, and second is a game onwhich the training duration was significantly longer than that of other games.In the first case study, we used a 2D back-view racing game ”F-Zero”. In this game, one is requiredto complete four laps of the track while avoiding other race cars. The reward, as defined by the scoreof the game, is only received upon completing a lap. This is an extreme case of a reward delay. A lapmay last as long as 30 seconds, which span over 450 states (actions) before reward is received. SinceDQN’s exploration is a simple -greedy approach, it was not able to produce a useful strategy. Weapproached this issue using reward shaping, essentially a modification of the reward to be a functionof the reward and the observation, rather than the reward alone. Here, we define the reward to bethe sum of the score and the agent’s speed (a metric displayed on the screen of the game). 
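A minimal sketch of this shaped reward (score difference between consecutive frames plus the on-screen speed) follows. How the speed value is extracted from the frame is not specified in the text, so it is passed in here as an already-parsed number; the example values are made up.

```python
# Sketch of reward shaping for F-Zero: the shaped reward is the usual score
# difference between consecutive frames plus the speed displayed on screen.
def shaped_reward(prev_score, curr_score, speed):
    score_delta = curr_score - prev_score     # the default RLE reward signal
    return score_delta + speed

# Example: mid-lap, the score has not changed yet but the car is moving fast,
# so the agent still receives a dense learning signal.
print(shaped_reward(prev_score=0, curr_score=0, speed=457))   # -> 457
```

The point of the extra term is to turn an extremely delayed reward (once per lap) into a dense one, which is what allows the simple epsilon-greedy exploration to make progress.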
Indeedwhen the reward was defined as such, the agents learned to finish the race in first place within a shorttraining period.The second case study is the famous game of Super Mario. In this game the agent, Mario, is requiredto reach the right-hand side of the screen, while avoiding enemies and collecting coins. We foundthis case interesting as it involves several challenges at once: dynamic background that can changedrastically within a level, sparse and delayed rewards and multiple tasks (such as avoiding enemiesand pits, advancing rightwards and collecting coins). To our surprise, DQN was able to reach theend of the level without any reward shaping, this was possible since the agent receives rewards forevents (collecting coins, stomping on enemies etc.) that tend to appear to the right of the player,causing the agent to prefer moving right. However, the training time required for convergence wassignificantly longer than other games. We defined the reward as the sum of the in-game reward anda bonus granted according the the player’s position, making moving right preferable. This reward5A video demonstration can be found at https://youtu.be/nUl9XLMveEU6Figure 2: DQN, DDQN and Duel-DDQN performance. Results were normalized by subtracting thea random agent’s score and dividing by the human player score. Thus 100 represents a human playerand zero a random agent.proved useful, as training time required for convergence decreased significantly. The two gamesabove can be seen in Figure (3).Figure (4) illustrates the agent’s average value function . Though both were able complete the stagetrained upon, the convergence rate with reward shaping is significantly quicker due to the immediaterealization of the agent to move rightwards.Figure 3: Left: The game Super Mario with added bonus for moving right, enabling the agent tomaster them game after less training time. Right: The game F-Zero . By granting a reward for speedthe agent was able to master this game, as oppose to using solely the in-game reward.7Figure 4: Averaged action-value (Q) for Super Mario trained with reward bonus for moving right(blue) and without (red).4.3 M ULTI -AGENT REINFORCEMENT LEARNINGIn this section we describe our experiments with RLE’s multi-agent capabilities. We consider thecase where the number of agents, n= 2 and the goals of the agents are opposite, as in r1=r2.This scheme is known as fully competitive (Bus ̧oniu et al., 2010). We used the simple single-agent RL approach (as described by Bus ̧oniu et al. (2010) section 5.4.1) which is to apply to sin-gle agent approach to the multi-agent case. This approach was proved useful in Crites and Barto(1996) and Matari ́c (1997). More elaborate schemes are possible such as the minimax-Q algo-rithm (Littman, 1994), (Littman, 2001). These may be explored in future works. We conductedthree experiments on this setup: the first use was to train two different agents against the in-gameAI, as done in previous sections, and evaluate their performance by letting them compete againsteach other. Here, rather than achieving the highest score, the goal was to win a tournament whichconsist of 50 rounds, as common in human-player competitions. The second experiment was toinitially train two agents against the in-game AI, and resume the training while competing againsteach other. 
In this case, we evaluated the agent by playing again against the in-game AI, separately.Finally, in our last experiment we try to boost the agent capabilities by alternated it’s opponents,switching between the in-game AI and other trained agents.4.3.1 M ULTI -AGENT REINFORCEMENT LEARNING RESULTSWe chose the game Mortal Kombat , a two character side viewed fighting game (a screenshot ofthe game can be seen in Figure (1), as a testbed for the above, as it exhibits favorable properties:both players share the same screen, the agent’s optimal policy is heavily dependent on the rival’sbehavior, unlike racing games for example. In order to evaluate two agents fairly, both were trainedusing the same characters maintaining the identity of rival and agent. Furthermore, to remove theimpact of the starting positions of both agents on their performances, the starting positions wereinitialized randomly.In the first experiment we evaluated all combinations of DQN against D-DQN and Dueling D-DQN.Each agent was trained against the in-game AI until convergence. Then 50 matches were performedbetween the two agents. DQN lost 28 out of 50 games against Dueling D-DQN and 33 againstD-DQN. D-DQN lost 26 time to Dueling D-DQN. This win balance isn’t far from the randomcase, since the algorithms converged into a policy in which movement towards the opponent is not8required rather than generalize the game. Therefore, in many episodes, little interaction between thetwo agents occur, leading to a semi-random outcome.In our second experiment, we continued the training process of a the D-DQN network by letting itcompete against the Dueling D-DQN network. We evaluated the re-trained network by playing 30episodes against the in-game AI. After training, D-DQN was able to win 28 out of 30 games, yetwhen faced again against the in-game AI its performance deteriorated drastically (from an average of17000 to an average of -22000). This demonstrated a form of catastrophic forgetting (Goodfellowet al., 2013) even though the agents played the same game.In our third experiment, we trained a Dueling D-DQN agent against three different rivals: the in-game AI, a trained DQN agent and a trained Dueling-DQN agent, in an alternating manner, suchthat in each episode a different rival was playing as the opponent with the intention of preventingthe agent from learning a policy suitable for just one opponent. The new agent was able to achievea score of 162,966 (compared to the ”normal” dueling D-DQN which achieved 169,633). As anew and objective measure of generalization, we’ve configured the in-game AI difficulty to be ”veryhard” (as opposed to the default ”medium” difficulty). In this metric the alternating version achieved83,400 compared to -33,266 of the dueling D-DQN which was trained in default setting. Thus,proving that the agent learned to generalize to other policies which weren’t observed while training.4.4 F UTURE CHALLENGESAs demonstrated, RLE presents numerous challenges that have yet to be answered. In addition tobeing able to learn all available games, the task of learning games in which reward delay is extreme,such as F-Zero without reward shaping, remains an unsolved challenge. Additionally, some games,such as Super Mario, feature several stages that differ in background and the levels structure. Thetask of generalizing platform games, as in learning on one stage and being tested on the other, isanother unexplored challenge. 
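Returning to the alternating-rival scheme of Section 4.3.1, its training schedule is easy to sketch: the opponent is switched every episode among the in-game AI and previously trained agents. The agent interface and the episode runner below are assumed skeletons for illustration, not the authors' code.

```python
import itertools

# Skeleton of alternating-opponent training: each episode the learning agent
# faces a different rival (in-game AI or a previously trained agent), so the
# learned policy cannot overfit to a single opponent. The agent, rival and
# run_episode interfaces are assumptions made only for this sketch.
def train_alternating(agent, opponents, run_episode, n_episodes=1000):
    rival_cycle = itertools.cycle(opponents)
    for _ in range(n_episodes):
        rival = next(rival_cycle)                     # switch opponent each episode
        for s, a, r, s_next, done in run_episode(agent, rival):
            agent.observe(s, a, r, s_next, done)      # e.g. store transition in replay memory
            agent.learn()                             # e.g. one Q-network update step
    return agent
```

This schedule is what produced the more robust agent reported above (83,400 against the "very hard" in-game AI, versus -33,266 for the agent trained against a single opponent).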
Likewise surpassing human performance remains a challenge sincecurrent state-of-the-art algorithms still struggling with the many SNES games.5 C ONCLUSIONWe introduced a rich environment for evaluating and developing reinforcement learning algorithmswhich presents significant challenges to current state-of-the-art algorithms. In comparison to otherenvironments RLE provides a large amount of games with access to both the screen and the in-game state. The modular implementation we chose allows extensions of the environment with newconsoles and games, thus ensuring the relevance of the environment to RL algorithms for years tocome (see Table (2)). We’ve encountered several games in which the learning process is highlydependent on the reward definition. This issue can be addressed and explored in RLE as rewarddefinition can be done easily. The challenges presented in the RLE consist of: 3D interpretation,delayed reward, noisy background, stochastic AI behavior and more. Although some algorithmswere able to play successfully on part of the games, to fully overcome these challenges, an agentmust incorporate both technique and strategy. Therefore, we believe, that the RLE is a great platformfor future RL research.6 A CKNOWLEDGMENTSThe authors are grateful to the Signal and Image Processing Lab (SIPL) staff for their support, AlfredAgrell and the LibRetro community for their support and Marc G. Bellemare for his valuable inputs.REFERENCESM. G. Bellemare, Y . Naddaf, J. Veness, and M. Bowling. The arcade learning environment: Anevaluation platform for general agents. Journal of Artificial Intelligence Research , 47:253–279,jun 2013.9M. G. Bellemare, S. Srinivasan, G. Ostrovski, T. Schaul, D. Saxton, and R. Munos. Unifying count-based exploration and intrinsic motivation. arXiv preprint arXiv:1606.01868 , 2016.B. Bischoff, D. Nguyen-Tuong, I.-H. Lee, F. Streichert, and A. Knoll. Hierarchical reinforcementlearning for robot navigation. In ESANN , 2013.G. Brockman, V . Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. Openaigym. arXiv preprint arXiv:1606.01540 , 2016.L. Bus ̧oniu, R. Babu ˇska, and B. De Schutter. Multi-agent reinforcement learning: An overview. InInnovations in Multi-Agent Systems and Applications-1 , pages 183–221. Springer, 2010.M. Campbell, A. J. Hoane, and F.-h. Hsu. Deep blue. Artificial Intelligence , 134(1):57–83, 2002.R. Crites and A. Barto. Improving elevator performance using reinforcement learning. In Advancesin Neural Information Processing Systems 8 . Citeseer, 1996.I. J. Goodfellow, M. Mirza, D. Xiao, A. Courville, and Y . Bengio. An empirical investigation ofcatastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211 , 2013.M. Johnson, K. Hofmann, T. Hutton, and D. Bignell. The malmo platform for artificial intelligenceexperimentation. In International Joint Conference On Artificial Intelligence (IJCAI) , page 4246,2016.libRetro site. Libretro. www.libretro.com. Accessed: 2016-11-03.M. L. Littman. Markov games as a framework for multi-agent reinforcement learning. In Proceed-ings of the eleventh international conference on machine learning , volume 157, pages 157–163,1994.M. L. Littman. Value-function reinforcement learning in markov games. Cognitive Systems Re-search , 2(1):55–66, 2001.M. J. Matari ́c. Reinforcement learning in the multi-robot domain. In Robot colonies , pages 73–83.Springer, 1997.V . Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Ried-miller, A. K. 
Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
J. Schaeffer, J. Culberson, N. Treloar, B. Knight, P. Lu, and D. Szafron. A world championship caliber checkers program. Artificial Intelligence, 53(2):273–289, 1992.
S. Shalev-Shwartz, N. Ben-Zrihem, A. Cohen, and A. Shashua. Long-term planning by short-term prediction. arXiv preprint arXiv:1602.01580, 2016.
D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
G. Tesauro. Temporal difference learning and TD-Gammon. Communications of the ACM, 38(3):58–68, 1995.
J. Togelius, S. Karakovskiy, J. Koutník, and J. Schmidhuber. Super Mario evolution. In 2009 IEEE Symposium on Computational Intelligence and Games, pages 156–161. IEEE, 2009.
Universe. Universe. universe.openai.com, 2016. Accessed: 2016-12-13.
H. Van Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with double Q-learning. CoRR, abs/1509.06461, 2015.
Z. Wang, N. de Freitas, and M. Lanctot. Dueling network architectures for deep reinforcement learning. arXiv preprint arXiv:1511.06581, 2015.
Y. Zhu, R. Mottaghi, E. Kolve, J. J. Lim, A. Gupta, L. Fei-Fei, and A. Farhadi. Target-driven visual navigation in indoor scenes using deep reinforcement learning. arXiv preprint arXiv:1609.05143, 2016.

Appendices

Experimental Results

Table 3: Average results of DQN, D-DQN, Dueling D-DQN and a Human player

                 DQN      D-DQN    Dueling D-DQN   Human
F-Zero           3116     3636     5161            6298
Gradius III      7583     12343    16929           24440
Mortal Kombat    83733    56200    169300          132441
Super Mario      11765    16946    20030           36386
Wolfenstein      100      83       40              2952
Sy3UiUz4l
HysBZSqlx
ICLR.cc/2017/conference/-/paper238/official/review
{"title": "Ok but limited contributions", "rating": "4: Ok but not good enough - rejection", "review": "This paper introduces a new reinforcement learning environment called \u00ab The Retro Learning Environment\u201d, that interfaces with the open-source LibRetro API to offer access to various emulators and associated games (i.e. similar to the Atari 2600 Arcade Learning Environment, but more generic). The first supported platform is the SNES, with 5 games (more consoles and games may be added later). Authors argue that SNES games pose more challenges than Atari\u2019s (due to more complex graphics, AI and game mechanics). Several DQN variants are evaluated in experiments, and it is also proposed to compare learning algorihms by letting them compete against each other in multiplayer games.\n\nI like the idea of going toward more complex games than those found on Atari 2600, and having an environment where new consoles and games can easily be added sounds promising. With OpenAI Universe and DeepMind Lab that just came out, though, I am not sure we really need another one right now. Especially since using ROMs of emulated games we do not own is technically illegal: it looks like this did not cause too much trouble for Atari but it might start raising eyebrows if the community moves to more advanced and recent games, especially some Nintendo still makes money from.\n\nBesides the introduction of the environment, it is good to have DQN benchmarks on five games, but this does not add a lot of value. The authors also mention as contribution \"A new benchmarking technique, allowing algorithms to compete against each other, rather than playing against the in-game AI\", but this seems a bit exaggerated to me: the idea of pitting AIs against each other has been at the core of many AI competitions for decades, so it is hardly something new. The finding that reinforcement learning algorithms tend to specialize to their opponent is also not particular surprising.\n\nOverall I believe this is an ok paper but I do not feel it brings enough to the table for a major conference. This does not mean, however, that this new environment won't find a spot in the (now somewhat crowded) space of game-playing frameworks.\n\nOther small comments:\n- There are lots of typos (way too many to mention them all)\n- It is said that Infinite Mario \"still serves as a benchmark platform\", however as far as I know it had to be shutdown due to Nintendo not being too happy about it\n- \"RLE requires an emulator and a computer version of the console game (ROM file) upon initialization rather than a ROM file only. The emulators are provided with RLE\" => how is that different from ALE that requires the emulator Stella which is also provided with ALE?\n- Why is there no DQN / DDDQN result on Super Mario?\n- It is not clear if Figure 2 displays the F-Zero results using reward shaping or not\n- The Du et al reference seems incomplete", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Playing SNES in the Retro Learning Environment
["Nadav Bhonker", "Shai Rozenberg", "Itay Hubara"]
Mastering a video game requires skill, tactics and strategy. While these attributes may be acquired naturally by human players, teaching them to a computer program is a far more challenging task. In recent years, extensive research was carried out in the field of reinforcement learning and numerous algorithms were introduced, aiming to learn how to perform human tasks such as playing video games. As a result, the Arcade Learning Environment (ALE) (Bellemare et al., 2013) has become a commonly used benchmark environment allowing algorithms to train on various Atari 2600 games. In many games the state-of-the-art algorithms outperform humans. In this paper we introduce a new learning environment, the Retro Learning Environment — RLE, that can run games on the Super Nintendo Entertainment System (SNES), Sega Genesis and several other gaming consoles. The environment is expandable, allowing for more video games and consoles to be easily added to the environment, while maintaining the same interface as ALE. Moreover, RLE is compatible with Python and Torch. SNES games pose a significant challenge to current algorithms due to their higher level of complexity and versatility.
["Reinforcement Learning", "Deep learning", "Games"]
https://openreview.net/forum?id=HysBZSqlx
https://openreview.net/pdf?id=HysBZSqlx
https://openreview.net/forum?id=HysBZSqlx&noteId=Sy3UiUz4l
PLAYING SNES INTHE RETRO LEARNING ENVIRONMENTNadav Bhonker*, Shai Rozenberg* and Itay HubaraDepartment of Electrical EngineeringTechnion, Israel Institute of Technology(*) indicates equal contributionfnadavbh,shairoz g@tx.technion.ac.ilitayhubara@gmail.comABSTRACTMastering a video game requires skill, tactics and strategy. While these attributesmay be acquired naturally by human players, teaching them to a computer pro-gram is a far more challenging task. In recent years, extensive research was carriedout in the field of reinforcement learning and numerous algorithms were intro-duced, aiming to learn how to perform human tasks such as playing video games.As a result, the Arcade Learning Environment (ALE) (Bellemare et al., 2013) hasbecome a commonly used benchmark environment allowing algorithms to train onvarious Atari 2600 games. In many games the state-of-the-art algorithms outper-form humans. In this paper we introduce a new learning environment, the RetroLearning Environment — RLE, that can run games on the Super Nintendo Enter-tainment System (SNES), Sega Genesis and several other gaming consoles. Theenvironment is expandable, allowing for more video games and consoles to beeasily added to the environment, while maintaining the same interface as ALE.Moreover, RLE is compatible with Python and Torch. SNES games pose a signif-icant challenge to current algorithms due to their higher level of complexity andversatility.1 I NTRODUCTIONControlling artificial agents using only raw high-dimensional input data such as image or sound isa difficult and important task in the field of Reinforcement Learning (RL). Recent breakthroughs inthe field allow its utilization in real-world applications such as autonomous driving (Shalev-Shwartzet al., 2016), navigation (Bischoff et al., 2013) and more. Agent interaction with the real world isusually either expensive or not feasible, as the real world is far too complex for the agent to perceive.Therefore in practice the interaction is simulated by a virtual environment which receives feedbackon a decision made by the algorithm. Traditionally, games were used as a RL environment, datingback to Chess (Campbell et al., 2002), Checkers (Schaeffer et al., 1992), backgammon (Tesauro,1995) and the more recent Go (Silver et al., 2016). Modern games often present problems and taskswhich are highly correlated with real-world problems. For example, an agent that masters a racinggame, by observing a simulated driver’s view screen as input, may be usefull for the development ofan autonomous driver. For high-dimensional input, the leading benchmark is the Arcade LearningEnvironment (ALE) (Bellemare et al., 2013) which provides a common interface to dozens of Atari2600 games, each presents a different challenge. ALE provides an extensive benchmarking plat-form, allowing a controlled experiment setup for algorithm evaluation and comparison. The mainchallenge posed by ALE is to successfully play as many Atari 2600 games as possible (i.e., achiev-ing a score higher than an expert human player) without providing the algorithm any game-specificinformation (i.e., using the same input available to a human - the game screen and score). A keywork to tackle this problem is the Deep Q-Networks algorithm (Mnih et al., 2015), which made abreakthrough in the field of Deep Reinforcement Learning by achieving human level performanceon 29 out of 49 games. In this work we present a new environment — the Retro Learning Environ-ment (RLE). 
RLE sets new challenges by providing a unified interface for Atari 2600 games as wellas more advanced gaming consoles. As a start we focused on the Super Nintendo Entertainment1System (SNES). Out of the five SNES games we tested using state-of-the-art algorithms, only onewas able to outperform an expert human player. As an additional feature, RLE supports research ofmulti-agent reinforcement learning (MARL) tasks (Bus ̧oniu et al., 2010). We utilize this feature bytraining and evaluating the agents against each other, rather than against a pre-configured in-gameAI. We conducted several experiments with this new feature and discovered that agents tend to learnhow to overcome their current opponent rather than generalize the game being played. However, ifan agent is trained against an ensemble of different opponents, its robustness increases. The maincontributions of the paper are as follows:Introducing a novel RL environment with significant challenges and an easy agent evalu-ation technique (enabling agents to compete against each other) which could lead to newand more advanced RL algorithms.A new method to train an agent by enabling it to train against several opponents, makingthe final policy more robust.Encapsulating several different challenges to a single RL environment.2 R ELATED WORK2.1 A RCADE LEARNING ENVIRONMENTThe Arcade Learning Environment is a software framework designed for the development of RLalgorithms, by playing Atari 2600 games. The interface provided by ALE allows the algorithms toselect an action and receive the Atari screen and a reward in every step. The action is the equivalentto a human’s joystick button combination and the reward is the difference between the scores attime stamptandt1. The diversity of games for Atari provides a solid benchmark since differentgames have significantly different goals. Atari 2600 has over 500 games, currently over 70 of themare implemented in ALE and are commonly used for algorithm comparison.2.2 I NFINITE MARIOInfinite Mario (Togelius et al., 2009) is a remake of the classic Super Mario game in which levels arerandomly generated. On these levels the Mario AI Competition was held. During the competition,several algorithms were trained on Infinite Mario and their performances were measured in terms ofthe number of stages completed. As opposed to ALE, training is not based on the raw screen databut rather on an indication of Mario’s (the player’s) location and objects in its surrounding. Thisenvironment no longer poses a challenge for state of the art algorithms. Its main shortcoming liein the fact that it provides only a single game to be learnt. Additionally, the environment provideshand-crafted features, extracted directly from the simulator, to the algorithm. This allowed the useof planning algorithms that highly outperform any learning based algorithm.2.3 O PENAI G YMThe OpenAI gym (Brockman et al., 2016) is an open source platform with the purpose of creatingan interface between RL environments and algorithms for evaluation and comparison purposes.OpenAI Gym is currently very popular due to the large number of environments supported by it.For example ALE, Go, MouintainCar andVizDoom (Zhu et al., 2016), an environment for thelearning of the 3D first-person-shooter game ”Doom”. 
BJOY_CR7g
rkaRFYcgl
ICLR.cc/2017/conference/-/paper514/official/review
{"title": "Exploring a solid idea, but results are not convincing", "rating": "6: Marginally above acceptance threshold", "review": "The authors study the use of low-rank approximation to the matrix-multiply in RNNs. This reduces the number of parameters by a large factor, and with a diagonal addition (called low-rank plus diagonal) it is shown to work as well as a fully-parametrized network on a number of tasks.\n\nThe paper is solid, the only weakness being some claims about conceptual unification (e.g., the first line of the conclusion -- \"We presented a framework that unifies the description various types of recurrent and feed-forward\nneural networks as passthrough neural networks.\" -- claiming this framework as a contribution of this paper is untrue, the general framework is well known in the community and RNNs have been presented in this way before.)\n\nAside from the above small point, the true contribution is in making low-rank RNNs work, the results are generally as good as fully-parametrized networks. They are hardly better though, which makes it unclear why low-rank networks should be used. The contribution is thus not very strong in terms of results, but even achieving the same results with fewer parameters is not easy and the studies were well-executed and explained.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Low-rank passthrough neural networks
["Antonio Valerio Miceli Barone"]
Deep learning consists in training neural networks to perform computations that sequentially unfold in many steps over a time dimension or an intrinsic depth dimension. For large depths, this is usually accomplished by specialized network architectures that are designed to mitigate the vanishing gradient problem, e.g. LSTMs, GRUs, Highway Networks and Deep Residual Networks, which are based on a single structural principle: the state passthrough. We observe that these "Passthrough Networks" architectures enable the decoupling of the network state size from the number of parameters of the network, a possibility that is exploited in some recent works but not thoroughly explored. In this work we propose simple, yet effective, low-rank and low-rank plus diagonal matrix parametrizations for Passthrough Networks which exploit this decoupling property, reducing the data complexity and memory requirements of the network while preserving its memory capacity. We present competitive experimental results on several tasks, including a near state of the art result on sequential randomly-permuted MNIST classification, a hard task on natural data.
["Deep learning"]
https://openreview.net/forum?id=rkaRFYcgl
https://openreview.net/pdf?id=rkaRFYcgl
https://openreview.net/forum?id=rkaRFYcgl&noteId=BJOY_CR7g
Under review as a conference paper at ICLR 2017LOW-RANK PASSTHROUGH NEURAL NETWORKSAntonio Valerio Miceli BaroneSchool of InformaticsThe University of Edinburghamiceli@inf.ed.ac.ukABSTRACTDeep learning consists in training neural networks to perform computations thatsequentially unfold in many steps over a time dimension or an intrinsic depthdimension. For large depths, this is usually accomplished by specialized networkarchitectures that are designed to mitigate the vanishing gradient problem, e.g.LSTMs, GRUs, Highway Networks and Deep Residual Networks, which are basedon a single structural principle: the state passthrough. We observe that these"Passthrough Networks" architectures enable the decoupling of the network statesize from the number of parameters of the network, a possibility that is exploitedin some recent works but not thoroughly explored. In this work we propose simple,yet effective, low-rank and low-rank plus diagonal matrix parametrizations forPassthrough Networks which exploit this decoupling property, reducing the datacomplexity and memory requirements of the network while preserving its memorycapacity. We present competitive experimental results on several tasks, including anear state of the art result on sequential randomly-permuted MNIST classification,a hard task on natural data.1 O VERVIEWDeep neural networks can perform non-trivial computations by the repeated the application ofparametric non-linear transformation layers to vectorial (or, more generally, tensorial) data. Thisstaging of many computation steps can be done over a time dimension for tasks involving sequentialinputs or outputs of varying length, yielding a recurrent neural network , or over an intrinsic circuitdepth dimension, yielding a deep feed-forward neural network , or both. Training these deep modelsis complicated by the exploding andvanishing gradient problems (Hochreiter, 1991; Bengio et al.,1994).Various network architectures have been proposed to ameliorate the vanishing gradient problem inthe recurrent setting, such as the LSTM (Hochreiter & Schmidhuber, 1997; Graves & Schmidhuber,2005), the GRU (Cho et al., 2014b), etc. These architectures led to a number of breakthroughsin different tasks in NLP, computer vision, etc. (Graves et al., 2013; Cho et al., 2014a; Bahdanauet al., 2014; Vinyals et al., 2014; Iyyer et al., 2014). Similar methods have also been applied in thefeed-forward setting with architectures such as Highway Networks (Srivastava et al., 2015), DeepResidual Networks (He et al., 2015), and so on. All these architectures are based on a single structuralprinciple which, in this work, we will refer to as the state passthrough . We will thus refer to thesearchitectures as Passthrough Networks .Another difficulty in training neural networks is the trade-off between the network representationpower and its number of trainable parameters, which affects its data complexity during training inaddition to its implementation memory requirements. On one hand, the number of parameters can bethought as the number of tunable "knobs" that need to be set to represent a function, on the otherhand, it also constrains the size of the partial results that are propagated inside the network. 
In typicalfully connected networks, a layer acting on a n-dimensional state vector has O(n2)parameters storedin one or more matrices, but there can be many functions of practical interest that are simple enoughto be represented by a relatively small number of bits while still requiring some sizable amount ofmemory to be computed. Therefore, representing these functions on a fully connected neural networkWork partially done while affiliated with University of Pisa.1Under review as a conference paper at ICLR 2017can be wasteful in terms of number of parameters. The full parameterization implies that, at each step,all the information in each state component can affect all the information in any state component atthe next step. Classical physical systems, however, consist of spatially separated parts with primarilylocal interactions, long-distance interactions are possible but they tend to be limited by propagationdelays, bandwidth and noise. Therefore it may be beneficial to bias our model class towards modelsthat tend to adhere to these physical constraints by using a parametrization which reduces the numberof parameters required to represent them. This can be accomplished by imposing some constraintson thennmatrices that parametrize the state transitions. One way of doing this is to imposea convolutional structure on these matrices (LeCun et al., 2004; Krizhevsky et al., 2012), whichcorresponds to strict locality and periodicity constraints as in a cellular automaton. These constraintswork well in certain domains such as vision, but may be overly restrictive in other domains.In this work we observe that the state passthrough allows for a systematic decoupling of the networkstate size from the number of parameters: since by default the state vector passes mostly unalteredthrough the layers, each layer can be made simple enough to be described only by a small number ofparameters without affecting the overall memory capacity of the network, effectively spreading thecomputation over the depth or time dimension of the network, but without making the network "thin".This has been exploited by some convolutional passthrough architectures (Srivastava et al., 2015; Heet al., 2015; Kaiser & Sutskever, 2015), or architectures with addressable read-write memory (Graveset al., 2014; Danihelka et al., 2016).In this work we propose simple but effective low-dimensional parametrizations that exploit thisdecoupling based on low-rank or low-rank plus diagonal matrix decompositions. Our approachextends the LSTM architecture with a single projection layer by Sak et al. (2014) which has beenapplied to speech recognition, natural language modeling (Józefowicz et al., 2016), video analysis(Sun et al., 2015), etc. We provide experimental evaluation of our approach on GRU and LSTMarchitectures on various machine learning tasks, including a near state of the art result for the hardtask of sequential randomly-permuted MNIST image recognition (Le et al., 2015).2 M ODELA neural network can be described as a dynamical system that transforms an input uinto an output yover multiple time steps T. At each step tthe network has a n-dimensional state vector x(t)2Rndefined asx(t) =in(u;) ift= 0f(x(t1);t;u; )ift1(1)whereinis astate initialization function ,fis astate transition function and2Rkis vector oftrainable parameters. The output y=out(x(0 :T);)is generated by an output function out, wherex(0 :T)denotes the whole sequence of states visited during the execution. 
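As a concrete reading of eq. (1) and the output function, here is a minimal NumPy sketch of the unrolled computation; the helper names (init_state, transition, readout) and the toy linear instantiation stand in for in, f and out and are not taken from the paper's code.

```python
import numpy as np

def run_network(u, theta, T, init_state, transition, readout):
    """Unroll the generic recurrence of eq. (1): x(0) = in(u, theta),
    x(t) = f(x(t-1), t, u, theta) for t >= 1, then y = out(x(0:T), theta)."""
    x = init_state(u, theta)
    states = [x]
    for t in range(1, T + 1):
        x = transition(x, t, u, theta)
        states.append(x)
    return readout(states, theta)

# Toy instantiation (hypothetical, only to show the plumbing): a linear recurrent
# network whose parameter vector theta is a single n x n matrix.
n, T = 4, 3
theta = 0.5 * np.eye(n)
u = [np.ones(n) for _ in range(T)]                  # one input vector per time step
y = run_network(u, theta, T,
                init_state=lambda u, th: np.zeros(n),
                transition=lambda x, t, u, th: th @ x + u[t - 1],
                readout=lambda states, th: states[-1])
print(y.shape)                                      # (4,)
```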
In a feed-forward neuralnetwork with constant hidden layer width n, the inputu2Rmand the output y2Rlare vectors offixed dimension mandlrespectively, Tis a model hyperparameter. In a recurrent neural networkthe inputuis typically a list of T m -dimensional vectors u(t)2Rmfort21;:::;T whereTis variable, the output yis either a single l-dimensional vector or a list of Tsuch vectors. Otherneural architectures, such as "seq2seq" transducers without attention (Cho et al., 2014a), can be alsodescribed within this framework.2.1 P ASSTHROUGH NETWORKSPassthrough networks can be defined as networks where the state transition function fhas a specialform such that, at each step tthe state vector x(t)(or a sub-vector ^x(t)) is propagated to the next stepmodified only by some (nearly) linear, element-wise transformation.Let the state vector x(t)(^x(t);~x(t))be the concatenation of ^x(t)2R^nand~x(t)2R~nwith^n+ ~n=n(where ~ncan be equal to zero). We define a network to have a state passthrough on^xif^xevolves as^x(t) =f(x(t1);t;u; )f(x(t1);t;u; ) + ^x(t1)f(x(t1);t;u; ) (2)wherefis the next state proposal function ,fis the transform function ,fis the carry function anddenotes element-wise vector multiplication. The rest of the state vector ~x(t), if present, evolves2Under review as a conference paper at ICLR 2017^x(t−1)fγfτfπ+^x(t)xWa)xRb)LxRc)L00D+Figure 1: Left: Generic state passthrough hidden layer, optional non-passthrough state ~x(t)and per-timestep input u(t)are not shown. Right: a) Full matrix parametrization. b) Low-rank parametrization.c) Low-rank plus diagonal parametrization.according to some other function ~f. In practice ~x(t)is only used in LSTM variants, while in otherpassthrough architectures ^x(t) =x(t).As concrete example, we can describe a fully connected Highway Network asf(x(t1);t;u; ) =g((W)tx(t1) +(b)t)f(x(t1);t;u; ) =((W)tx(t1) +(b)t)f(x(t1);t;u; ) = 1nf(x(t1);t;u; )(3)wheregis an element-wise activation function, usually the ReLU (Glorot et al., 2011) or thehyperbolic tangent, is the element-wise logistic sigmoid, and 8t21;:::;T , the parameters (W)tand(W)t are matrices inRnnand(b)tand(b)tare vectors inRn. Dependence on the input uoccurs only through the initialization function, which is model-specific and is omitted here, as is theoutput function.2.2 L OW-RANK PASSTHROUGH NETWORKSIn fully connected architectures there are nnmatrices that act on the state vector, such as the(W)t and(W)t matrices of the Highway Network of eq. 3. Each of these matrices has n2entries,thus for large n, the entries of these matrices can make up the majority of independently trainableparameters of the model. As discussed in the previous section, this parametrization can be wasteful.We impose a low-rank constraint on these matrices. This is easily accomplished by rewriting each ofthese matrices as the product of two matrices where the inner dimension dis a model hyperparameter.For instance, in the case of the Highway Network of eq. 3 we can redefine 8t21;:::;T(W)t =(L)t(R)t(W)t =(L)t(R)t(4)where(L)t;(L)t2Rndand(R)t;(R)t2Rdn. Whend<n= 2this result in a reduction ofthe number of trainable parameters of the model.Even whenn=2d<n , while the total number of parameter increases, the number of degrees offreedom of the model still decreases, because low-rank factorization are unique only up to arbitraryddinvertible matrices, thus the number of independent degrees of freedom of a low-rank layer is3Under review as a conference paper at ICLR 20172ndd2. 
However, we don’t know whether the training optimizers can exploit this kind of redundancy,thus in this work we restrict to low-rank parametrizations where the number of parameters is strictlyreduced.This low-rank constraint can be thought as a bandwidth constraint on the computation performed ateach step: the Rmatrices first project the state into a smaller subspace, extracting the informationneeded for that specific step, then the Lmatrices project it back to the original state space, spreadingthe selected information to all the state components that need to be updated. A similar approach hasbeen proposed for the LSTM architecture by Sak et al. (2014), although they force the Rmatrices tobe the same for all the functions of the state transition, while we allow each parameter matrix to beparametrized independently by a pair of RandLmatrices.Low-rank passthrough architectures are universal in that they retain the same representation classesof their parent architectures. This is trivially true if the inner dimension dis allowed to be O(n)inthe worst case, and for some architectures even if dis held constant. For instance, it is easily shownthat for any Highway Network with state size nandThidden layers and for any >0, there exist aLow-rank Highway Network with d= 1, state size at most 2nand at mostnTlayers that computesthe same function within an margin of error.2.3 L OW-RANK PLUS DIAGONAL PASSTHROUGH NETWORKSAs we show in the experimental section, on some tasks the low-rank constraint may prove to beexcessively restrictive if the goal is to train a model with fewer parameters than one with arbitrarymatrices. A simple extension is to add to each low-rank parameter matrix a diagonal parametermatrix, yielding a matrix that is full-rank but still parametrized in a low-dimensional space. Forinstance, for the Highway Network architecture we modify eq. 4 to(W)t =(L)t(R)t+(D)t(W)t =(L)t(R)t+(D)t(5)where(D)t;(D)t2Rnnare trainable diagonal parameter matrices.It may seem that adding diagonal parameter matrices is redundant in passthrough networks. After all,the state passthrough itself can be considered as a diagonal matrix applied to the state vector, whichis then additively combined to the new proposed state computed by the ffunction. However, sincethe state passthrough completely skips over all non-linear activation functions, these formulationsare not equivalent. In particular, the low-rank plus diagonal parametrization may help in recurrentneural networks which receive input at each time step, since they allow each component of the statevector to directly control how much input signal is inserted into it at each step. We demonstratethe effectiveness of this model in the sequence copy and sequential MNIST tasks described in theexperiments section.3 E XPERIMENTSThe main content of this section reports several experiments on Low-rank and Low-rank plus diagonalGRUs, and an experiment using these parametrizations on a LSTM for language modeling.A preliminary experiment on Low-rank Highway Networks on the MNIST dataset is reported inappendix A.1.We applied the Low-rank and Low-rank plus diagonal GRU architectures to a subset of sequentialbenchmarks described in the Unitary Evolution Recurrent Neural Networks article by Arjovsky et al.(2015), specifically the memory task, the addition task and the sequential randomly permuted MNISTtask. For the memory tasks, we also considered two different variants proposed by Danihelka et al.(2016) and Henaff et al. (2016) which are hard for the uRNN architecture. 
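Before moving on to the individual tasks, it may help to make the parametrizations of eqs. (4)–(5) concrete. The NumPy sketch below builds a full, a low-rank, and a low-rank plus diagonal recurrent matrix for one illustrative choice of n and d (borrowed from Table 1) and compares trainable parameter counts; it is a minimal reading of the equations, not the authors' released code.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 128, 24                      # state size and maximum rank (illustrative values)

# Full parametrization: an arbitrary n x n matrix.
W_full = rng.normal(size=(n, n))
params_full = n * n

# Low-rank parametrization (eq. 4): W = L @ R with inner dimension d.
L = rng.normal(size=(n, d))
R = rng.normal(size=(d, n))
W_lowrank = L @ R
params_lowrank = 2 * n * d

# Low-rank plus diagonal parametrization (eq. 5): W = L @ R + diag(D).
D = rng.normal(size=n)
W_lrd = L @ R + np.diag(D)
params_lrd = 2 * n * d + n

print("params:", params_full, params_lowrank, params_lrd)   # 16384 6144 6272
print("ranks:", np.linalg.matrix_rank(W_lowrank),           # at most d = 24
      np.linalg.matrix_rank(W_lrd))                         # typically back to n = 128
```

With these shapes each recurrent matrix shrinks from n^2 to roughly 2nd (plus n for the diagonal) trainable parameters, which is where the reduced totals reported in the experiments come from.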
We chose to compareagainst the uRNN architecture because it set state of the art results in terms of both data complexityand accuracy and because it is an architecture with similar design objectives as low-rank passthrougharchitectures, namely a low-dimensional parametrization and the mitigation of the vanishing gradientproblem, but it is based on quite different principles.4Under review as a conference paper at ICLR 2017The GRU architecture (Cho et al., 2014b) is a passthrough recurrent neural network defined asin(u;) =inf!(x(t1);t;u; ) =(U!u(t) +(W!)x(t1) +(b!))f(x(t1);t;u; ) =(Uu(t) +(W)x(t1) +(b))f(x(t1);t;u; ) = 1nf(x(t1);t;u; )f(x(t1);t;u; ) =tanh(Uu(t) +(W)(x(t1)f!(x(t1);t;u; )) +(b))(6)We turn this architecture into the Low-rank GRU architecture by redefining each of the Wmatricesas the product of two matrices with inner dimension d. For the memory tasks, which turned out to bedifficult for the low-rank parametrization, we also consider the low-rank plus diagonal parametrization.We also applied the low-rank plus diagonal parametrization in the sequential permuted MNIST taskand a character-level language modeling task on the Penn Treebank corpus. For the languagemodeling task, we also experimented with Low-rank plus diagonal LSTMs. Refer to appendix A.2for model details.3.0.1 M EMORY TASKThe input of an instance of this task is a sequence of T=N+ 20 discrete symbols in a ten symbolalphabetai:i20;:::9, encoded as one-hot vectors. The first 10symbols in the sequence are "data"symbols i.i.d. sampled from a0;:::;a 7, followed by N1"blank"a8symbols, then a distinguished"run" symbol a9, followed by 10more "blank" a8symbols. The desired output sequence consistsofN+ 10 "blank"a8symbols followed by the 10"data" symbols as they appeared in the inputsequence. Therefore the model has to remember the 10"data" symbol string over the temporal gap ofsizeN, which is challenging for a recurrent neural network when Nis large. In our experiment wesetN= 500 , which is the hardest setting explored in the uRNN work. The training set consists of100;000training examples and 10;000validation/test examples. The architecture is described by eq.(6), with an additional output layer with a dense n10matrix followed a (biased) softmax. We trainto minimize the cross-entropy loss.We were able to solve this task using a GRU with full recurrent matrices with state size n= 128 ,learning rate 1103, mini-batch size 20, initial bias of the carry functions (the "update" gates)4:0, however this model has many more parameters, nearly 50;000in the recurrent layer only, thanthe uRNN work which has about 6;500, and it converges much more slowly than the uRNN. Wewere not able to achieve convergence with a pure low-rank model without exceeding the numberof parameters of the fully connected model, but we achieved fast convergence with a low-rank plusdiagonal model with d= 50 , with other hyperparameters set as above. This model has still moreparameters ( 39;168in the recurrent layer, 41;738total) than the uRNN model and converges moreslowly but still reasonably fast, reaching test cross-entropy <1103nats and almost perfectclassification accuracy in less than 35;000updates.In order to obtain a fair comparison, we also train a uRNN model with state size n= 721 , resultingin approximately the same number of parameters as the low-rank plus diagonal GRU models. Thismodel very quickly reaches perfect accuracy on the training set in less than 2;000updates, but overfitsw.r.t. 
the test set.

We also consider two variants of this task which are difficult for the uRNN model. For both these tasks we used the same settings as above, except that the task size parameter is set at N = 100 for consistency with the works that introduced these variants. In the variant of Danihelka et al. (2016), the length of the sequence to be remembered is randomly sampled between 1 and 10 for each sequence. They manage to achieve fast convergence with their Associative LSTM architecture with 65,505 parameters, and slower convergence with standard LSTM models. Our low-rank plus diagonal GRU architecture, which has fewer parameters than their Associative LSTM, performs comparably or better, reaching test cross-entropy < 1 x 10^-3 nats and almost perfect classification accuracy in less than 30,000 updates. In the variant of Henaff et al. (2016), the length of the sequence to be remembered is fixed at 10 but the model is expected to copy it after a variable number of time steps randomly chosen, for each sequence, between 1 and N = 100. The authors achieve slow convergence with a standard LSTM model, while our low-rank plus diagonal GRU architecture achieves fast convergence, reaching test cross-entropy < 1 x 10^-3 nats and almost perfect classification accuracy in less than 38,000 updates, and perfect test accuracy in 87,000 updates.

(Figure 2 appeared here; the plots are not recoverable from the text extraction. The panels show validation curves for LRD-GRU, LRD-GRU-WN and uRNN against the number of minibatches.)
Figure 2: Top row and middle left: Low-rank plus diagonal GRU and uRNN on the sequence copy tasks, cross-entropy on validation set. Middle right: Low-rank GRU on the addition task, mean squared error on validation set. Bottom row: Low-rank GRU (left) and Low-rank plus diagonal GRU (right) on the permuted sequential MNIST task, accuracy on validation set; horizontal line indicates 90% accuracy.

Table 1: Sequential permuted MNIST results
Architecture              state size   max rank   params   val. accuracy   test accuracy
Baseline GRU              128          -          51.0k    93.0%           92.8%
Low-rank GRU              128          24         20.2k    93.4%           91.8%
Low-rank GRU              512          4          19.5k    92.5%           91.3%
Low-rank plus diag. GRU   64           24         10.3k    93.1%           91.9%
Low-rank plus diag. GRU   128          24         20.6k    94.1%           93.5%
Low-rank plus diag. GRU   256          24         41.2k    95.1%           94.7%

We further train uRNN models with state size n = 721 on these variants of the memory task. We found that the uRNN learns faster than the low-rank plus diagonal GRU on the variable length, fixed lag task (Danihelka et al., 2016) but fails to converge within our training time limit on the fixed length, variable lag task (Henaff et al., 2016).

Training the low-rank plus diagonal GRU on these tasks sometimes incurs numerical stability problems, as discussed in appendix A.2. 
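One way to see where such instabilities can come from is the reparametrization invariance of the factorization (the same point is made in appendix A.2): scaling a row of R by some factor s while dividing the matching column of L by s leaves the product L·R, and hence the function computed by the layer, exactly unchanged, so the norms of the individual factors are free to drift during training. The snippet below is only a small numerical illustration of that invariance, with made-up shapes, and is not a reproduction of the observed failures.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 8, 3                      # illustrative sizes only
L = rng.normal(size=(n, d))
R = rng.normal(size=(d, n))

s = 1e6                          # blow up one direction of the factorization
L_scaled, R_scaled = L.copy(), R.copy()
L_scaled[:, 0] /= s
R_scaled[0, :] *= s

# The layer computes the same function ...
assert np.allclose(L @ R, L_scaled @ R_scaled)
# ... but one row norm of R has exploded, the kind of drift that the row
# max-norm constraint and weight normalization mentioned next are meant to keep in check.
print(np.linalg.norm(R[0]), np.linalg.norm(R_scaled[0]))
```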
In order to systemically address these issues, we also trainedmodels with weight normalization (Salimans & Kingma, 2016) and weight row max-norm constraints.These models turned out to be more stable and in fact converge faster, performing on par with theuRNN on the variable length, fixed lag task.Training curves are shown in figure 2 (top and middle left).3.0.2 A DDITION TASKFor each instance of this task, the input sequence has length Tand consists of two real-valuedcomponents, at each step the first component is independently sampled from the interval [0;1]withuniform probability, the second component is equal to zero everywhere except at two randomlychosen time step, one in each half of the sequence, where it is equal to one. The result is a single realvalue computed from the final state which we want to be equal to the sum of the two elements of thefirst component of the sequence at the positions where the second component was set at one. In ourexperiment we set T= 750 .The training set consists of 100;000training examples and 10;000validation/test examples. We usea Low-rank GRU with 2ninput matrix, n1output matrix and (biased) identity output activation.We train to minimize the mean squared error loss. We use state size n= 128 , maximum rank d= 24 .This results in approximately 6;140parameters in the recurrent hidden layer. Learning rate was set at1103, mini-batch size 20, initial bias of the carry functions (the "update" gates) was set to 4.We trained on 14;500mini-batches, obtaining a mean squared error on the test set of 0:003, which isa better result than the one reported in the uRNN article, in terms of training time and final accuracy.The training curve is shown in figure 2 (middle right).3.0.3 S EQUENTIAL MNIST TASKThis task consists of handwritten digit classification on the MNIST dataset with the caveat that theinput is presented to the model one pixel value at time, over T= 784 time steps. To further increasethe difficulty of the task, the inputs are reordered according to a random permutation (fixed for all thetask instances).We use Low-rank and Low-rank plus diagonal GRUs with 1ninput matrix, n10output matrixand (biased) softmax output activation. Learning rate was set at 5104, mini-batch size 20, initialbias of the carry functions (the "update" gates) was set to 5.Results are presented in table 1 and training curves are shown in figure 2 (bottom row). All thesemodels except the one with the most extreme bottleneck ( n= 512;d= 4) exceed the reported uRNNtest accuracy of 91:4%, although they converge more slowly (hundred of thousands updates vs. tensof thousands of the uRNN). Also note that the low-rank plus diagonal GRU is more accurate than the7Under review as a conference paper at ICLR 2017Table 2: Character-level language modeling resultsArchitecture dropout tied state size max rank params test per-char. perplexityBaseline GRU No - 1000 - 3:11M 2:96Baseline GRU Yes - 1000 - 3:11M 2:92Baseline GRU Yes - 3298 - 33:0M 2:77Baseline LSTM Yes - 1000 - 4:25M 2:92Low-rank plus diag. GRU No No 1000 64 0:49M 2:92Low-rank plus diag. GRU No No 3298 128 2:89M 2:95Low-rank plus diag. GRU Yes No 3298 128 2:89M 2:86Low-rank plus diag. GRU Yes No 5459 64 2:69M 2:82Low-rank plus diag. GRU Yes Yes 5459 64 1:99M 2:81Low-rank plus diag. GRU No Yes 1000 64 0:46M 2:90Low-rank plus diag. GRU Yes Yes 4480 128 2:78M 2:86Low-rank plus diag. GRU Yes Yes 6985 64 2:54M 2:76Low-rank plus diag. 
LSTM Yes No 1740 300 4:25M 2:86full rank GRU with the same state size, while the low-rank GRU is slightly less accurate (in terms oftest accuracy), indicating the utility of the diagonal component of the parametrization for this task.These are on par with more complex architectures with time-skip connections (Zhang et al., 2016)(reported test set accuracy 94:0%). To our knowledge, at the time of this writing, the best result onthis task is the LSTM with recurrent batch normalization by Cooijmans et al. (2016) (reported testset accuracy 95:2%). The architectural innovations of these works are orthogonal to our own and inprinciple they can be combined to it.3.0.4 C HARACTER -LEVEL LANGUAGE MODELING TASKThis standard benchmark task consist of predicting the probability of the next character in a sentenceafter having observed the previous charters. Similar to Zaremba et al. (2014), we use the PennTreebank English corpus, with standard training, validation and test splits.As a baseline we use a single layer GRU either with no regularization or regularized with Bayesianrecurrent dropout (Gal, 2015). Refer to appendix A.2 for details.In our experiments we consider the low-rank plus diagonal parametrization, both with tied and untiedprojection matrices. We set the state size and maximum rank to either reduce the total number ofparameters compared to the baselines or to keep the number of parameters approximately the samewhile increasing the memory capacity. Results are shown in table 2.Our low-rank plus diagonal parametrization reduces the model per-character perplexity (the base-2exponential of the bits-per-character entropy). Both the tied and untied versions perform equallywhen the state size is the same, but the tied version performs better when the number of parameters iskept the same, presumably due to the increased memory capacity of the state vector. Our best modelhas an extreme bottleneck, over a hundred of times smaller than the state size, while the word-levellanguage models trained by Józefowicz et al. (2016) use bottlenecks of four to eight times smallerthan the state size. We conjecture that this difference is due to our usage of the "plus diagonal"parametrization. In terms of absolute perplexity, our results are worse than published ones (e.g.Graves (2013)), although they may not be directly comparable since published results generally usedifferent training and evaluation schemes, such as preserving the network state between differentsentences.In order to address these experimental differences, we ran additional experiments using LSTMarchitectures, trying to replicate the alphabet and sentence segmentation used in Graves (2013),although we could not obtain the same baseline performance even using the Adam optimizer (usingSGD+momentum yields even worse results). In fact, we obtained approximately the same perplexityas our baseline GRU model with the same state size.8Under review as a conference paper at ICLR 2017We applied the Low-rank plus diagonal parametrizations to our LSTM architecture maintaining thesame number of parameters as the baseline. We obtained notable perplexity improvements over thebaseline. 
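As a rough sanity check on the parameter totals quoted in Tables 1 and 2, the helper below counts only the three recurrent matrices of a GRU under each parametrization; biases and the input and output layers are deliberately left out, so the exact bookkeeping here is an assumption on our part and the figures are only meant to land within a few thousand parameters of the reported totals.

```python
def gru_recurrent_params(n, d=None, diagonal=False):
    """Approximate count of trainable parameters in the three n x n recurrent
    matrices of a GRU: full (n*n each), low-rank (2*n*d each), optionally +n diagonal."""
    per_matrix = n * n if d is None else 2 * n * d + (n if diagonal else 0)
    return 3 * per_matrix

print(gru_recurrent_params(128))                       # 49152  ~ "Baseline GRU", 51.0k total
print(gru_recurrent_params(128, d=24))                 # 18432  ~ "Low-rank GRU", 20.2k total
print(gru_recurrent_params(128, d=24, diagonal=True))  # 18816  ~ "LRD GRU", 20.6k total
print(gru_recurrent_params(256, d=24, diagonal=True))  # 37632  ~ "LRD GRU", 41.2k total
```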
Refer to appendix A.3 for the experimental details.We performed additional exploratory experiments on word-level language modeling and subword-level neural machine translation (Bahdanau et al., 2014; Sennrich et al., 2015) with GRU-basedarchitectures but we were not able to achieve significant accuracy improvements, which is not particu-larly surprising given that in these models most parameters are contained in the token embedding andoutput matrices, thus low-dimensional parametrizations of the recurrent matrices have little effecton the total number of parameters. We reserve experimentation on character-level neural machinetranslation (Ling et al., 2015; Chung et al., 2016; Lee et al., 2016) to future work.4 C ONCLUSIONS AND FUTURE WORKWe proposed low-dimensional parametrizations for passthrough neural networks based on low-rankor low-rank plus diagonal decompositions of the nnmatrices that occur in the hidden layers.We experimentally compared our models with state of the art models, obtaining competitive resultsincluding a near state of the art for the randomly-permuted sequential MNIST task.Our parametrizations are alternative to convolutional parametrizations explored by Srivastava et al.(2015); He et al. (2015); Kaiser & Sutskever (2015). Since our architectural innovations are orthogonalto these approaches, they can be in principle combined. Additionally, alternative parametrizationscould include non-linear activation functions, similar to the network-in-network approach of Lin et al.(2013). We leave the exploration of these extensions to future work.REFERENCESArjovsky, Martin, Shah, Amar, and Bengio, Yoshua. Unitary evolution recurrent neural networks. CoRR ,abs/1511.06464, 2015. URL http://arxiv.org/abs/1511.06464 .Bahdanau, Dzmitry, Cho, Kyunghyun, and Bengio, Yoshua. Neural machine translation by jointly learning toalign and translate. CoRR , abs/1409.0473, 2014. URL http://arxiv.org/abs/1409.0473 .Bengio, Yoshua, Simard, Patrice, and Frasconi, Paolo. Learning long-term dependencies with gradient descentis difficult. Neural Networks, IEEE Transactions on , 5(2):157–166, 1994.Cho, Kyunghyun, van Merriënboer, Bart, Bahdanau, Dzmitry, and Bengio, Yoshua. On the properties of neuralmachine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259 , 2014a.Cho, Kyunghyun, van Merrienboer, Bart, Gulcehre, Caglar, Bougares, Fethi, Schwenk, Holger, and Bengio,Yoshua. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXivpreprint arXiv:1406.1078 , 2014b.Chung, Junyoung, Cho, Kyunghyun, and Bengio, Yoshua. A character-level decoder without explicit segmenta-tion for neural machine translation. arXiv preprint arXiv:1603.06147 , 2016.Cooijmans, T., Ballas, N., Laurent, C., Gülçehre, Ç., and Courville, A. Recurrent Batch Normalization. ArXive-prints , March 2016.Danihelka, I., Wayne, G., Uria, B., Kalchbrenner, N., and Graves, A. Associative Long Short-Term Memory.ArXiv e-prints , February 2016.Gal, Yarin. A theoretically grounded application of dropout in recurrent neural networks. arXiv preprintarXiv:1512.05287 , 2015.Glorot, Xavier, Bordes, Antoine, and Bengio, Yoshua. Deep sparse rectifier neural networks. In InternationalConference on Artificial Intelligence and Statistics , pp. 315–323, 2011.Graves, Alex. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850 , 2013.Graves, Alex and Schmidhuber, Jürgen. Framewise phoneme classification with bidirectional lstm and otherneural network architectures. 
Neural Networks , 18(5):602–610, 2005.Graves, Alex, Mohamed, Abdel-rahman, and Hinton, Geoffrey E. Speech recognition with deep recurrent neuralnetworks. CoRR , abs/1303.5778, 2013. URL http://arxiv.org/abs/1303.5778 .9Under review as a conference paper at ICLR 2017Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural turing machines. arXiv preprint arXiv:1410.5401 , 2014.He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition.arXiv preprint arXiv:1512.03385 , 2015.Henaff, M., Szlam, A., and LeCun, Y . Orthogonal RNNs and Long-Memory Tasks. ArXiv e-prints , February2016.Hochreiter, Sepp. Untersuchungen zu dynamischen neuronalen netzen. Diploma, Technische UniversitätMünchen , 1991.Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural computation , 9(8):1735–1780,1997.Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducinginternal covariate shift. arXiv preprint arXiv:1502.03167 , 2015.Iyyer, Mohit, Boyd-Graber, Jordan, Claudino, Leonardo, Socher, Richard, and Daumé III, Hal. A neural networkfor factoid question answering over paragraphs. In Proceedings of the 2014 Conference on Empirical Methodsin Natural Language Processing (EMNLP) , pp. 633–644, 2014.Józefowicz, Rafal, Vinyals, Oriol, Schuster, Mike, Shazeer, Noam, and Wu, Yonghui. Exploring the limits oflanguage modeling. arXiv preprint arXiv:1602.02410 , 2016.Kaiser, Lukasz and Sutskever, Ilya. Neural gpus learn algorithms. CoRR , abs/1511.08228, 2015. URLhttp://arxiv.org/abs/1511.08228 .Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 ,2014.Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutionalneural networks. In Advances in neural information processing systems , pp. 1097–1105, 2012.Le, Quoc V , Jaitly, Navdeep, and Hinton, Geoffrey E. A simple way to initialize recurrent networks of rectifiedlinear units. arXiv preprint arXiv:1504.00941 , 2015.LeCun, Yann, Huang, Fu Jie, and Bottou, Leon. Learning methods for generic object recognition with invarianceto pose and lighting. In Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the2004 IEEE Computer Society Conference on , volume 2, pp. II–97. IEEE, 2004.Lee, Jason, Cho, Kyunghyun, and Hofmann, Thomas. Fully character-level neural machine translation withoutexplicit segmentation. arXiv preprint arXiv:1610.03017 , 2016.Lin, Min, Chen, Qiang, and Yan, Shuicheng. Network in network. arXiv preprint arXiv:1312.4400 , 2013.Ling, Wang, Trancoso, Isabel, Dyer, Chris, and Black, Alan W. Character-based neural machine translation.arXiv preprint arXiv:1511.04586 , 2015.Sak, Hasim, Senior, Andrew W, and Beaufays, Françoise. Long short-term memory recurrent neural networkarchitectures for large scale acoustic modeling. In INTERSPEECH , pp. 338–342, 2014.Salimans, Tim and Kingma, Diederik P. Weight normalization: A simple reparameterization to acceleratetraining of deep neural networks. arXiv preprint arXiv:1602.07868 , 2016.Sennrich, Rico, Haddow, Barry, and Birch, Alexandra. Neural machine translation of rare words with subwordunits. arXiv preprint arXiv:1508.07909 , 2015.Sennrich, Rico, Haddow, Barry, and Birch, Alexandra. Edinburgh neural machine translation systems for wmt16.arXiv preprint arXiv:1606.02891 , 2016.Srivastava, Nitish, Hinton, Geoffrey, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. 
Dropout: Asimple way to prevent neural networks from overfitting. The Journal of Machine Learning Research , 15(1):1929–1958, 2014.Srivastava, Rupesh Kumar, Greff, Klaus, and Schmidhuber, Jürgen. Highway networks. arXiv preprintarXiv:1505.00387 , 2015.Sun, Chen, Shetty, Sanketh, Sukthankar, Rahul, and Nevatia, Ram. Temporal localization of fine-grained actionsin videos by domain transfer from web images. In Proceedings of the 23rd Annual ACM Conference onMultimedia Conference , pp. 371–380. ACM, 2015.10Under review as a conference paper at ICLR 2017Tang, Yichuan. Deep learning using linear support vector machines. arXiv preprint arXiv:1306.0239 , 2013.Tieleman, Tijmen and Hinton, Geoffrey. Lecture 6.5 - rmsprop„ 2012.Vinyals, Oriol, Kaiser, Lukasz, Koo, Terry, Petrov, Slav, Sutskever, Ilya, and Hinton, Geoffrey. Grammar as aforeign language. arXiv preprint arXiv:1412.7449 , 2014.Zaremba, Wojciech, Sutskever, Ilya, and Vinyals, Oriol. Recurrent neural network regularization. arXiv preprintarXiv:1409.2329 , 2014.Zhang, Saizheng, Wu, Yuhuai, Che, Tong, Lin, Zhouhan, Memisevic, Roland, Salakhutdinov, Ruslan, and Bengio,Yoshua. Architectural complexity measures of recurrent neural networks. arXiv preprint arXiv:1602.08210 ,2016.A A PPENDIX : EXPERIMENTAL DETAILSA.1 L OW-RANK HIGHWAY NETWORKSAs a preliminary exploratory experiment, we applied the low-rank and low-rank plus diagonalHighway Network architecture to the classic benchmark task of handwritten digit classification onthe MNIST dataset, in its permutation-invariant (i.e. non-convolutional) variant.We used the low-rank architecture described by equations 3 and 4, with T= 5hidden layers, ReLUactivation function, state dimension n= 1024 and maximum rank (internal dimension) d= 256 .The input-to-state layer is a dense 7841024 matrix followed by a (biased) ReLU activation andthe state-to-output layer is a dense 102410matrix followed by a (biased) identity activation. Wedid not use any convolution layer, pooling layer or data augmentation technique. We used dropout(Srivastava et al., 2014) in order to achieve regularization. We further applied L2-regularization withcoefficient= 1103per example on the hidden-to-output parameter matrix. We also used batchnormalization (Ioffe & Szegedy, 2015) after the input-to-state matrix and after each parameter matrixin the hidden layers. Initial bias vectors are all initialized at zero except for those of the transformfunctions in the hidden layers, which are initialized at 1:0. We trained to minimize the sum of theper-class L2-hinge loss plus the L2-regularization cost (Tang, 2013). Optimization was performedusing Adam (Kingma & Ba, 2014) with standard hyperparameters, learning rate starting at 3103halving every three epochs without validation improvements. Mini-batch size was equal to 100. Codeis available online1.We obtained perfect training accuracy and 98:83% test accuracy. While this result does not reachthe state of the art for this task ( 99:13% test accuracy with unsupervised dimensionality reductionreported by Tang (2013)), it is still relatively close. We also tested the low-rank plus diagonalHighway Network architecture of eq. 5 with the same settings as above, obtaining a test accuracy of98:64%. The inclusion of diagonal parameter matrices does not seem to help in this particular task.A.2 L OW-RANK GRU SIn our experiments (except language modeling) we optimized using RMSProp (Tieleman & Hinton,2012) with gradient component clipping at 1. Code is available online2. 
Our code is based on thepublished uRNN code3(specifically, on the LSTM implementation) by the original authors for thesake of a fair comparison. In order to achieve convergence on the memory task however, we had toslightly modify the optimization procedure, specifically we changed gradient component clippingwith gradient norm clipping (with NaN detection and recovery), and we added a small = 1108term in the parameter update formula. No modifications of the original optimizer implementationwere required for the other tasks.In order to address the numerical instability issues in the memory tasks, we also consider a variantof our Low-rank plus diagonal GRU where apply weight normalization as described by Salimans &Kingma (2016) to all the parameter matrices except the output one and the diagonal matrices. All1https://github.com/Avmb/lowrank-highwaynetwork2https://github.com/Avmb/lowrank-gru3https://github.com/amarshah/complex_RNN11Under review as a conference paper at ICLR 2017these matrices have trainable scale parameters, except for the projection matrices. We further apply anhard constraint on the matrices row norms by clipping them at 10after each update. We disable NaNdetection and recovery during training. The rationale behind this approach, in addition to the generalbenefits of normalization, is that the low-rank parametrization potentially introduces stability issuesbecause the model is invariant to multiplying a row of an R-matrix by a scalar sand dividing thecorresponding column of the L-matrix bys, which in principle allows the parameters of either matrixto grow very large in magnitude, eventually resulting in overflows or other pathological behavior.The weight row max-norm constraint can counter this problem. But the constraint alone could makethe optimization problem harder by reducing and distorting the parameter space. Fortunately wecould counter this by weight normalization which makes the model invariant to the row-norms of theparameter matrices.In the language modeling experiment, for consistency with existing code, we used a variant of theGRU where the reset gate is applied after the multiplication by the recurrent proposal matrix ratherthan before. Specifically:in(u;) =inf!(x(t1);t;u; ) =(U!u(t) +(W!)x(t1) +(b!))f(x(t1);t;u; ) =(Uu(t) +(W)x(t1) +(b))f(x(t1);t;u; ) = 1nf(x(t1);t;u; )f(x(t1);t;u; ) =tanh(Uu(t) + ((W)x(t1))f!(x(t1);t;u; ) +(b))(7)The character vocabulary size if 51, we use no character embeddings. Training is performed withAdam with learning rate 1103. Bayesian recurrent dropout was adapted from the original LSTMarchitecture of Gal (2015) to the GRU architecture as in Sennrich et al. (2016).Our implementation is based on the "dl4mt" tutorial4and the Nematus neural machine translationsystem5. The code is available online6.A.3 L OW-RANK LSTM SFor our LSTM experiments, we modified the implementation of LSTM language model with Bayesianrecurrent dropout by Gal (2015)7. In order to match the setup of Graves (2013) more closely, weused a vocabulary size of 49, no embedding layer and one LSTM layer. We found no difference onthe baseline model with using peephole connections and not using them, therefore we did not usethem on the Low-rank plus diagonal model. 
We use recurrent dropout and the Adam optimizer withlearning rate 2104.The baseline LSTM model is defined by the gates:in(u;) = 0^nf!(x(t1);t;u; ) =(U!u(t) +(W!)~x(t1) +(b!))f(x(t1);t;u; ) =(Uu(t) +(W)~x(t1) +(b))f(x(t1);t;u; ) =(Uu(t) +(W)~x(t1) +(b))f(x(t1);t;u; ) =tanh(Uu(t) +(W)~x(t1) +(b))(8)with the state components evolving as:^x(t) =f(x(t1);t;u; )f(x(t1);t;u; ) + ^x(t1)f(x(t1);t;u; )~x(t) =f!(x(t1);t;u; )tanh(^x(t))(9)The low-rank plus diagonal parametrization is applied on the recurrence matrices W?as in the GRUmodels.The code is available online8.4https://github.com/nyu-dl/dl4mt-tutorial5https://github.com/rsennrich/nematus6https://github.com/Avmb/dl4mt-lm/tree/master/lm7https://github.com/yaringal/BayesianRNN8https://github.com/Avmb/lowrank-lstm12
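To close out appendix A.3, here is a minimal NumPy sketch of a single step of the LSTM of eqs. (8)–(9) with its recurrent matrices parametrized as low-rank plus diagonal. It is a simplified reading of the equations (random weights, no dropout, no training loop) rather than the released implementation, and the sizes at the bottom are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lrd_matrix(rng, n, d):
    """One recurrent matrix stored as factors of W = L @ R + diag(D), cf. eq. (5)."""
    return (rng.normal(scale=0.1, size=(n, d)),
            rng.normal(scale=0.1, size=(d, n)),
            rng.normal(scale=0.1, size=n))

def apply_lrd(W, x):
    L, R, D = W
    return L @ (R @ x) + D * x      # never materializes the full n x n matrix

def lstm_step(params, h_prev, c_prev, u_t):
    """One step of eqs. (8)-(9): omega = output gate, tau = input gate,
    gamma = forget/carry gate, pi = candidate cell update."""
    U, W, b = params
    gates = {g: sigmoid(U[g] @ u_t + apply_lrd(W[g], h_prev) + b[g])
             for g in ("omega", "tau", "gamma")}
    pi = np.tanh(U["pi"] @ u_t + apply_lrd(W["pi"], h_prev) + b["pi"])
    c = gates["tau"] * pi + c_prev * gates["gamma"]   # cell state, eq. (9)
    h = gates["omega"] * np.tanh(c)                   # hidden state, eq. (9)
    return h, c

# Hypothetical sizes, much smaller than the paper's LSTM experiment.
rng = np.random.default_rng(2)
n, d, m = 16, 4, 8
names = ("omega", "tau", "gamma", "pi")
U = {g: rng.normal(scale=0.1, size=(n, m)) for g in names}
W = {g: lrd_matrix(rng, n, d) for g in names}
b = {g: np.zeros(n) for g in names}
h, c = lstm_step((U, W, b), np.zeros(n), np.zeros(n), rng.normal(size=m))
print(h.shape, c.shape)    # (16,) (16,)
```

Keeping each recurrent matrix as the factor triple (L, R, D) and applying it as L(Rx) + D∘x also avoids ever materializing the full n x n matrix.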
BkVSLugNx
rkaRFYcgl
ICLR.cc/2017/conference/-/paper514/official/review
{"title": "Review", "rating": "4: Ok but not good enough - rejection", "review": "The author proposes the use of low-rank matrix in feedfoward and RNNs. In particular, they try their approach in a GRU and a feedforward highway network.\n\nAuthor also presents as a contribution the passthrough framework, which can describe feedforward and recurrent networks. However, this framework seems hardly novel, relatively to the formalism introduced by LSTM or highway networks.\n\nAn empirical evaluation is performed on different datasets (MNIST, memory/addition tasks, sequential permuted MNIST and character level penntreebank). \n\nHowever, there are few problems with the evaluation:\n\n- In the highway network experiment, the author does not compare with a baseline.\nWe can not assess what it the impact of the low-rank parameterization. Also, it would be interesting to compare the result with a highway network that have this capacity bottleneck across layer (first layer of size $n$, second layer of size $d$, third layer of size $n$) and not in the gate functions. Also, how did you select the hyperparameter values?.\n\n- It is unfortunate that the character level penntreebank does not use the same experimental setting than previous works as it prevents from direct comparison.\nAlso the overall bpc perplexity seems relatively high for this dataset. It is therefore not clear how low-rank decomposition would perform on this task applied on a stronger baseline.\n\n-Author claims state-of-art in the memory task. However, their approach uses more parameters than the uRNN (41K against 6.5K for the memory) which makes the comparison a little bit unfair toward uRNN. It would be informative to see how low-rank RNN performs using overall 6.5K parameters. Generally, it would be good to see what is the impact of the matrix rank given a fix state size.\n\n- It would be informative as well to have the baseline and the uRNN curve in Figure 2 for the memory/addition task.\n\n- it is not clear when to use low-rank or low-rank + diagonal from the experiments.\n\nOverall, the evaluation in its current form in not really convincing, except for the sequential MNIST dataset.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Low-rank passthrough neural networks
["Antonio Valerio Miceli Barone"]
Deep learning consists in training neural networks to perform computations that sequentially unfold in many steps over a time dimension or an intrinsic depth dimension. For large depths, this is usually accomplished by specialized network architectures that are designed to mitigate the vanishing gradient problem, e.g. LSTMs, GRUs, Highway Networks and Deep Residual Networks, which are based on a single structural principle: the state passthrough. We observe that these "Passthrough Networks" architectures enable the decoupling of the network state size from the number of parameters of the network, a possibility that is exploited in some recent works but not thoroughly explored. In this work we propose simple, yet effective, low-rank and low-rank plus diagonal matrix parametrizations for Passthrough Networks which exploit this decoupling property, reducing the data complexity and memory requirements of the network while preserving its memory capacity. We present competitive experimental results on several tasks, including a near state of the art result on sequential randomly-permuted MNIST classification, a hard task on natural data.
["Deep learning"]
https://openreview.net/forum?id=rkaRFYcgl
https://openreview.net/pdf?id=rkaRFYcgl
https://openreview.net/forum?id=rkaRFYcgl&noteId=BkVSLugNx
Under review as a conference paper at ICLR 2017LOW-RANK PASSTHROUGH NEURAL NETWORKSAntonio Valerio Miceli BaroneSchool of InformaticsThe University of Edinburghamiceli@inf.ed.ac.ukABSTRACTDeep learning consists in training neural networks to perform computations thatsequentially unfold in many steps over a time dimension or an intrinsic depthdimension. For large depths, this is usually accomplished by specialized networkarchitectures that are designed to mitigate the vanishing gradient problem, e.g.LSTMs, GRUs, Highway Networks and Deep Residual Networks, which are basedon a single structural principle: the state passthrough. We observe that these"Passthrough Networks" architectures enable the decoupling of the network statesize from the number of parameters of the network, a possibility that is exploitedin some recent works but not thoroughly explored. In this work we propose simple,yet effective, low-rank and low-rank plus diagonal matrix parametrizations forPassthrough Networks which exploit this decoupling property, reducing the datacomplexity and memory requirements of the network while preserving its memorycapacity. We present competitive experimental results on several tasks, including anear state of the art result on sequential randomly-permuted MNIST classification,a hard task on natural data.1 O VERVIEWDeep neural networks can perform non-trivial computations by the repeated the application ofparametric non-linear transformation layers to vectorial (or, more generally, tensorial) data. Thisstaging of many computation steps can be done over a time dimension for tasks involving sequentialinputs or outputs of varying length, yielding a recurrent neural network , or over an intrinsic circuitdepth dimension, yielding a deep feed-forward neural network , or both. Training these deep modelsis complicated by the exploding andvanishing gradient problems (Hochreiter, 1991; Bengio et al.,1994).Various network architectures have been proposed to ameliorate the vanishing gradient problem inthe recurrent setting, such as the LSTM (Hochreiter & Schmidhuber, 1997; Graves & Schmidhuber,2005), the GRU (Cho et al., 2014b), etc. These architectures led to a number of breakthroughsin different tasks in NLP, computer vision, etc. (Graves et al., 2013; Cho et al., 2014a; Bahdanauet al., 2014; Vinyals et al., 2014; Iyyer et al., 2014). Similar methods have also been applied in thefeed-forward setting with architectures such as Highway Networks (Srivastava et al., 2015), DeepResidual Networks (He et al., 2015), and so on. All these architectures are based on a single structuralprinciple which, in this work, we will refer to as the state passthrough . We will thus refer to thesearchitectures as Passthrough Networks .Another difficulty in training neural networks is the trade-off between the network representationpower and its number of trainable parameters, which affects its data complexity during training inaddition to its implementation memory requirements. On one hand, the number of parameters can bethought as the number of tunable "knobs" that need to be set to represent a function, on the otherhand, it also constrains the size of the partial results that are propagated inside the network. 
In typicalfully connected networks, a layer acting on a n-dimensional state vector has O(n2)parameters storedin one or more matrices, but there can be many functions of practical interest that are simple enoughto be represented by a relatively small number of bits while still requiring some sizable amount ofmemory to be computed. Therefore, representing these functions on a fully connected neural networkWork partially done while affiliated with University of Pisa.1Under review as a conference paper at ICLR 2017can be wasteful in terms of number of parameters. The full parameterization implies that, at each step,all the information in each state component can affect all the information in any state component atthe next step. Classical physical systems, however, consist of spatially separated parts with primarilylocal interactions, long-distance interactions are possible but they tend to be limited by propagationdelays, bandwidth and noise. Therefore it may be beneficial to bias our model class towards modelsthat tend to adhere to these physical constraints by using a parametrization which reduces the numberof parameters required to represent them. This can be accomplished by imposing some constraintson thennmatrices that parametrize the state transitions. One way of doing this is to imposea convolutional structure on these matrices (LeCun et al., 2004; Krizhevsky et al., 2012), whichcorresponds to strict locality and periodicity constraints as in a cellular automaton. These constraintswork well in certain domains such as vision, but may be overly restrictive in other domains.In this work we observe that the state passthrough allows for a systematic decoupling of the networkstate size from the number of parameters: since by default the state vector passes mostly unalteredthrough the layers, each layer can be made simple enough to be described only by a small number ofparameters without affecting the overall memory capacity of the network, effectively spreading thecomputation over the depth or time dimension of the network, but without making the network "thin".This has been exploited by some convolutional passthrough architectures (Srivastava et al., 2015; Heet al., 2015; Kaiser & Sutskever, 2015), or architectures with addressable read-write memory (Graveset al., 2014; Danihelka et al., 2016).In this work we propose simple but effective low-dimensional parametrizations that exploit thisdecoupling based on low-rank or low-rank plus diagonal matrix decompositions. Our approachextends the LSTM architecture with a single projection layer by Sak et al. (2014) which has beenapplied to speech recognition, natural language modeling (Józefowicz et al., 2016), video analysis(Sun et al., 2015), etc. We provide experimental evaluation of our approach on GRU and LSTMarchitectures on various machine learning tasks, including a near state of the art result for the hardtask of sequential randomly-permuted MNIST image recognition (Le et al., 2015).2 M ODELA neural network can be described as a dynamical system that transforms an input uinto an output yover multiple time steps T. At each step tthe network has a n-dimensional state vector x(t)2Rndefined asx(t) =in(u;) ift= 0f(x(t1);t;u; )ift1(1)whereinis astate initialization function ,fis astate transition function and2Rkis vector oftrainable parameters. The output y=out(x(0 :T);)is generated by an output function out, wherex(0 :T)denotes the whole sequence of states visited during the execution. 
In a feed-forward neuralnetwork with constant hidden layer width n, the inputu2Rmand the output y2Rlare vectors offixed dimension mandlrespectively, Tis a model hyperparameter. In a recurrent neural networkthe inputuis typically a list of T m -dimensional vectors u(t)2Rmfort21;:::;T whereTis variable, the output yis either a single l-dimensional vector or a list of Tsuch vectors. Otherneural architectures, such as "seq2seq" transducers without attention (Cho et al., 2014a), can be alsodescribed within this framework.2.1 P ASSTHROUGH NETWORKSPassthrough networks can be defined as networks where the state transition function fhas a specialform such that, at each step tthe state vector x(t)(or a sub-vector ^x(t)) is propagated to the next stepmodified only by some (nearly) linear, element-wise transformation.Let the state vector x(t)(^x(t);~x(t))be the concatenation of ^x(t)2R^nand~x(t)2R~nwith^n+ ~n=n(where ~ncan be equal to zero). We define a network to have a state passthrough on^xif^xevolves as^x(t) =f(x(t1);t;u; )f(x(t1);t;u; ) + ^x(t1)f(x(t1);t;u; ) (2)wherefis the next state proposal function ,fis the transform function ,fis the carry function anddenotes element-wise vector multiplication. The rest of the state vector ~x(t), if present, evolves2Under review as a conference paper at ICLR 2017^x(t−1)fγfτfπ+^x(t)xWa)xRb)LxRc)L00D+Figure 1: Left: Generic state passthrough hidden layer, optional non-passthrough state ~x(t)and per-timestep input u(t)are not shown. Right: a) Full matrix parametrization. b) Low-rank parametrization.c) Low-rank plus diagonal parametrization.according to some other function ~f. In practice ~x(t)is only used in LSTM variants, while in otherpassthrough architectures ^x(t) =x(t).As concrete example, we can describe a fully connected Highway Network asf(x(t1);t;u; ) =g((W)tx(t1) +(b)t)f(x(t1);t;u; ) =((W)tx(t1) +(b)t)f(x(t1);t;u; ) = 1nf(x(t1);t;u; )(3)wheregis an element-wise activation function, usually the ReLU (Glorot et al., 2011) or thehyperbolic tangent, is the element-wise logistic sigmoid, and 8t21;:::;T , the parameters (W)tand(W)t are matrices inRnnand(b)tand(b)tare vectors inRn. Dependence on the input uoccurs only through the initialization function, which is model-specific and is omitted here, as is theoutput function.2.2 L OW-RANK PASSTHROUGH NETWORKSIn fully connected architectures there are nnmatrices that act on the state vector, such as the(W)t and(W)t matrices of the Highway Network of eq. 3. Each of these matrices has n2entries,thus for large n, the entries of these matrices can make up the majority of independently trainableparameters of the model. As discussed in the previous section, this parametrization can be wasteful.We impose a low-rank constraint on these matrices. This is easily accomplished by rewriting each ofthese matrices as the product of two matrices where the inner dimension dis a model hyperparameter.For instance, in the case of the Highway Network of eq. 3 we can redefine 8t21;:::;T(W)t =(L)t(R)t(W)t =(L)t(R)t(4)where(L)t;(L)t2Rndand(R)t;(R)t2Rdn. Whend<n= 2this result in a reduction ofthe number of trainable parameters of the model.Even whenn=2d<n , while the total number of parameter increases, the number of degrees offreedom of the model still decreases, because low-rank factorization are unique only up to arbitraryddinvertible matrices, thus the number of independent degrees of freedom of a low-rank layer is3Under review as a conference paper at ICLR 20172ndd2. 
This low-rank constraint can be thought of as a bandwidth constraint on the computation performed at each step: the $R$ matrices first project the state into a smaller subspace, extracting the information needed for that specific step, then the $L$ matrices project it back to the original state space, spreading the selected information to all the state components that need to be updated. A similar approach has been proposed for the LSTM architecture by Sak et al. (2014), although they force the $R$ matrices to be the same for all the functions of the state transition, while we allow each parameter matrix to be parametrized independently by a pair of $R$ and $L$ matrices.

Low-rank passthrough architectures are universal in that they retain the same representation classes of their parent architectures. This is trivially true if the inner dimension $d$ is allowed to be $O(n)$ in the worst case, and for some architectures even if $d$ is held constant. For instance, it is easily shown that for any Highway Network with state size $n$ and $T$ hidden layers, and for any $\epsilon > 0$, there exists a Low-rank Highway Network with $d = 1$, state size at most $2n$ and at most $nT$ layers that computes the same function within an $\epsilon$ margin of error.

2.3 LOW-RANK PLUS DIAGONAL PASSTHROUGH NETWORKS

As we show in the experimental section, on some tasks the low-rank constraint may prove to be excessively restrictive if the goal is to train a model with fewer parameters than one with arbitrary matrices. A simple extension is to add to each low-rank parameter matrix a diagonal parameter matrix, yielding a matrix that is full-rank but still parametrized in a low-dimensional space. For instance, for the Highway Network architecture we modify eq. 4 to

\[
W_\pi^{(t)} = L_\pi^{(t)} R_\pi^{(t)} + D_\pi^{(t)}, \qquad W_\tau^{(t)} = L_\tau^{(t)} R_\tau^{(t)} + D_\tau^{(t)} \tag{5}
\]

where $D_\pi^{(t)}, D_\tau^{(t)} \in \mathbb{R}^{n \times n}$ are trainable diagonal parameter matrices.

It may seem that adding diagonal parameter matrices is redundant in passthrough networks. After all, the state passthrough itself can be considered as a diagonal matrix applied to the state vector, which is then additively combined with the new proposed state computed by the $f_\pi$ function. However, since the state passthrough completely skips over all non-linear activation functions, these formulations are not equivalent. In particular, the low-rank plus diagonal parametrization may help in recurrent neural networks which receive input at each time step, since it allows each component of the state vector to directly control how much input signal is inserted into it at each step. We demonstrate the effectiveness of this model in the sequence copy and sequential MNIST tasks described in the experiments section.

3 EXPERIMENTS

The main content of this section reports several experiments on Low-rank and Low-rank plus diagonal GRUs, and an experiment using these parametrizations on an LSTM for language modeling. A preliminary experiment on Low-rank Highway Networks on the MNIST dataset is reported in appendix A.1.

We applied the Low-rank and Low-rank plus diagonal GRU architectures to a subset of the sequential benchmarks described in the Unitary Evolution Recurrent Neural Networks article by Arjovsky et al. (2015), specifically the memory task, the addition task and the sequential randomly permuted MNIST task. For the memory task, we also considered two different variants, proposed by Danihelka et al. (2016) and Henaff et al. (2016), which are hard for the uRNN architecture.
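Before turning to the individual tasks, a rough illustration of the per-matrix parameter counts for the three parametrizations of Figure 1 (the numbers below are my own worked example, not taken from the paper):

    def params_per_matrix(n, d):
        # parameters of one n-by-n state transition matrix under each scheme
        full = n * n                     # a) full parametrization
        low_rank = 2 * n * d             # b) W = L @ R, with L: n x d and R: d x n
        low_rank_diag = 2 * n * d + n    # c) W = L @ R + D, with D diagonal
        return full, low_rank, low_rank_diag

    print(params_per_matrix(128, 24))    # (16384, 6144, 6272)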
We chose to compare against the uRNN architecture because it set state of the art results in terms of both data complexity and accuracy, and because it is an architecture with similar design objectives as low-rank passthrough architectures, namely a low-dimensional parametrization and the mitigation of the vanishing gradient problem, but it is based on quite different principles.

The GRU architecture (Cho et al., 2014b) is a passthrough recurrent neural network defined as

\[
\begin{aligned}
\pi_{in}(u;\theta) &= \theta_{in} \\
f_\omega(x(t-1), t, u; \theta) &= \sigma(U_\omega u(t) + W_\omega x(t-1) + b_\omega) \\
f_\tau(x(t-1), t, u; \theta) &= \sigma(U_\tau u(t) + W_\tau x(t-1) + b_\tau) \\
f_\gamma(x(t-1), t, u; \theta) &= 1_n - f_\tau(x(t-1), t, u; \theta) \\
f_\pi(x(t-1), t, u; \theta) &= \tanh\!\big(U_\pi u(t) + W_\pi (x(t-1) \odot f_\omega(x(t-1), t, u; \theta)) + b_\pi\big)
\end{aligned} \tag{6}
\]

We turn this architecture into the Low-rank GRU architecture by redefining each of the $W$ matrices as the product of two matrices with inner dimension $d$. For the memory tasks, which turned out to be difficult for the low-rank parametrization, we also consider the low-rank plus diagonal parametrization. We also applied the low-rank plus diagonal parametrization in the sequential permuted MNIST task and a character-level language modeling task on the Penn Treebank corpus. For the language modeling task, we also experimented with Low-rank plus diagonal LSTMs. Refer to appendix A.2 for model details.

3.0.1 MEMORY TASK

The input of an instance of this task is a sequence of $T = N + 20$ discrete symbols in a ten-symbol alphabet $a_i : i \in 0, \dots, 9$, encoded as one-hot vectors. The first 10 symbols in the sequence are "data" symbols i.i.d. sampled from $a_0, \dots, a_7$, followed by $N - 1$ "blank" $a_8$ symbols, then a distinguished "run" symbol $a_9$, followed by 10 more "blank" $a_8$ symbols. The desired output sequence consists of $N + 10$ "blank" $a_8$ symbols followed by the 10 "data" symbols as they appeared in the input sequence. Therefore the model has to remember the 10 "data" symbol string over the temporal gap of size $N$, which is challenging for a recurrent neural network when $N$ is large. In our experiment we set $N = 500$, which is the hardest setting explored in the uRNN work. The training set consists of 100,000 training examples and 10,000 validation/test examples. The architecture is described by eq. (6), with an additional output layer with a dense $n \times 10$ matrix followed by a (biased) softmax. We train to minimize the cross-entropy loss.

We were able to solve this task using a GRU with full recurrent matrices with state size $n = 128$, learning rate $1 \times 10^{-3}$, mini-batch size 20, and initial bias of the carry functions (the "update" gates) equal to 4.0; however, this model has many more parameters, nearly 50,000 in the recurrent layer only, than the uRNN work which has about 6,500, and it converges much more slowly than the uRNN. We were not able to achieve convergence with a pure low-rank model without exceeding the number of parameters of the fully connected model, but we achieved fast convergence with a low-rank plus diagonal model with $d = 50$, with other hyperparameters set as above. This model still has more parameters (39,168 in the recurrent layer, 41,738 total) than the uRNN model and converges more slowly, but still reasonably fast, reaching test cross-entropy $< 1 \times 10^{-3}$ nats and almost perfect classification accuracy in less than 35,000 updates.

In order to obtain a fair comparison, we also train a uRNN model with state size $n = 721$, resulting in approximately the same number of parameters as the low-rank plus diagonal GRU models. This model very quickly reaches perfect accuracy on the training set in less than 2,000 updates, but overfits w.r.t. the test set.
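To make the low-rank plus diagonal GRU concrete, here is a minimal numpy sketch of one state update following eqs. (2), (5) and (6); it is my own illustrative reading, not the released code, and the parameter-dictionary names are assumptions (the diagonal matrices are stored as vectors of diagonal entries):

    import numpy as np

    def lrd_gru_step(x, u_t, p):
        # each recurrent matrix is parametrized as W = L @ R + diag(D)
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
        def W(name, v):
            return p['L_' + name] @ (p['R_' + name] @ v) + p['D_' + name] * v
        reset  = sigmoid(p['U_w'] @ u_t + W('w', x) + p['b_w'])          # f_omega
        update = sigmoid(p['U_t'] @ u_t + W('t', x) + p['b_t'])          # f_tau
        prop   = np.tanh(p['U_p'] @ u_t + W('p', x * reset) + p['b_p'])  # f_pi
        return prop * update + x * (1.0 - update)                        # eq. (2)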
We also consider two variants of this task which are difficult for the uRNN model. For both these tasks we used the same settings as above, except that the task size parameter is set at $N = 100$ for consistency with the works that introduced these variants. In the variant of Danihelka et al. (2016), the length of the sequence to be remembered is randomly sampled between 1 and 10 for each sequence. They manage to achieve fast convergence with their Associative LSTM architecture with 65,505 parameters, and slower convergence with standard LSTM models. Our low-rank plus diagonal GRU architecture, which has fewer parameters than their Associative LSTM, performs comparably or better, reaching test cross-entropy $< 1 \times 10^{-3}$ nats and almost perfect classification accuracy in less than 30,000 updates. In the variant of Henaff et al. (2016), the length of the sequence to be remembered is fixed at 10, but the model is expected to copy it after a variable number of time steps randomly chosen, for each sequence, between 1 and $N = 100$. The authors achieve slow convergence with a standard LSTM model, while our low-rank plus diagonal GRU architecture achieves fast convergence, reaching test cross-entropy $< 1 \times 10^{-3}$ nats and almost perfect classification accuracy in less than 38,000 updates, and perfect test accuracy in 87,000 updates.

[Figure 2. Top row and middle left: Low-rank plus diagonal GRU and uRNN on the sequence copy tasks, cross-entropy on the validation set. Middle right: Low-rank GRU on the addition task, mean squared error on the validation set. Bottom row: Low-rank GRU (left) and Low-rank plus diagonal GRU (right) on the permuted sequential MNIST task, accuracy on the validation set; the horizontal line indicates 90% accuracy.]

Table 1: Sequential permuted MNIST results

Architecture | state size | max rank | params | val. accuracy | test accuracy
Baseline GRU | 128 | - | 51.0k | 93.0% | 92.8%
Low-rank GRU | 128 | 24 | 20.2k | 93.4% | 91.8%
Low-rank GRU | 512 | 4 | 19.5k | 92.5% | 91.3%
Low-rank plus diag. GRU | 64 | 24 | 10.3k | 93.1% | 91.9%
Low-rank plus diag. GRU | 128 | 24 | 20.6k | 94.1% | 93.5%
Low-rank plus diag. GRU | 256 | 24 | 41.2k | 95.1% | 94.7%

We further train uRNN models with state size $n = 721$ on these variants of the memory task. We found that the uRNN learns faster than the low-rank plus diagonal GRU on the variable length, fixed lag task (Danihelka et al., 2016), but fails to converge within our training time limit on the fixed length, variable lag task (Henaff et al., 2016).
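For reference, one instance of the basic memory (copy) task described in this section can be generated along the following lines (an illustrative sketch, not the benchmark code):

    import numpy as np

    def copy_task_instance(N=500, rng=np.random):
        # input: 10 data symbols, N-1 blanks (a8), run marker (a9), 10 blanks
        # target: N+10 blanks followed by the 10 data symbols
        data = rng.randint(0, 8, size=10)          # symbols a0..a7
        blank, run = 8, 9
        x = np.concatenate([data, np.full(N - 1, blank), [run], np.full(10, blank)])
        y = np.concatenate([np.full(N + 10, blank), data])
        return x, y                                # both of length T = N + 20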
Training the low-rank plus diagonal GRU on these tasks sometimes incurs numerical stability problems, as discussed in appendix A.2. In order to systematically address these issues, we also trained models with weight normalization (Salimans & Kingma, 2016) and weight row max-norm constraints. These models turned out to be more stable and in fact converge faster, performing on par with the uRNN on the variable length, fixed lag task. Training curves are shown in figure 2 (top and middle left).

3.0.2 ADDITION TASK

For each instance of this task, the input sequence has length $T$ and consists of two real-valued components; at each step the first component is independently sampled from the interval $[0, 1]$ with uniform probability, while the second component is equal to zero everywhere except at two randomly chosen time steps, one in each half of the sequence, where it is equal to one. The result is a single real value computed from the final state, which we want to be equal to the sum of the two elements of the first component of the sequence at the positions where the second component was set to one. In our experiment we set $T = 750$.

The training set consists of 100,000 training examples and 10,000 validation/test examples. We use a Low-rank GRU with a $2 \times n$ input matrix, an $n \times 1$ output matrix and a (biased) identity output activation. We train to minimize the mean squared error loss. We use state size $n = 128$ and maximum rank $d = 24$. This results in approximately 6,140 parameters in the recurrent hidden layer. The learning rate was set at $1 \times 10^{-3}$, mini-batch size 20, and the initial bias of the carry functions (the "update" gates) was set to 4. We trained on 14,500 mini-batches, obtaining a mean squared error on the test set of 0.003, which is a better result than the one reported in the uRNN article, in terms of training time and final accuracy. The training curve is shown in figure 2 (middle right).

3.0.3 SEQUENTIAL MNIST TASK

This task consists of handwritten digit classification on the MNIST dataset, with the caveat that the input is presented to the model one pixel value at a time, over $T = 784$ time steps. To further increase the difficulty of the task, the inputs are reordered according to a random permutation (fixed for all the task instances).

We use Low-rank and Low-rank plus diagonal GRUs with a $1 \times n$ input matrix, an $n \times 10$ output matrix and a (biased) softmax output activation. The learning rate was set at $5 \times 10^{-4}$, mini-batch size 20, and the initial bias of the carry functions (the "update" gates) was set to 5.

Results are presented in table 1 and training curves are shown in figure 2 (bottom row). All these models except the one with the most extreme bottleneck ($n = 512$, $d = 4$) exceed the reported uRNN test accuracy of 91.4%, although they converge more slowly (hundreds of thousands of updates vs. tens of thousands for the uRNN). Also note that the low-rank plus diagonal GRU is more accurate than the full-rank GRU with the same state size, while the low-rank GRU is slightly less accurate (in terms of test accuracy), indicating the utility of the diagonal component of the parametrization for this task. These results are on par with more complex architectures with time-skip connections (Zhang et al., 2016) (reported test set accuracy 94.0%). To our knowledge, at the time of this writing, the best result on this task is the LSTM with recurrent batch normalization by Cooijmans et al. (2016) (reported test set accuracy 95.2%). The architectural innovations of these works are orthogonal to our own and in principle they can be combined with it.
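As an aside, the permuted sequential MNIST input described above can be constructed as follows (illustrative sketch only; the variable names are mine):

    import numpy as np

    perm = np.random.RandomState(0).permutation(784)   # one fixed permutation for all instances

    def to_pixel_sequence(image_28x28):
        # flatten the digit image and feed one permuted pixel value per time step
        seq = image_28x28.reshape(784)[perm]
        return seq[:, None]                             # shape (T=784, 1)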
3.0.4 CHARACTER-LEVEL LANGUAGE MODELING TASK

This standard benchmark task consists of predicting the probability of the next character in a sentence after having observed the previous characters. Similar to Zaremba et al. (2014), we use the Penn Treebank English corpus, with standard training, validation and test splits.

As a baseline we use a single-layer GRU, either with no regularization or regularized with Bayesian recurrent dropout (Gal, 2015). Refer to appendix A.2 for details.

In our experiments we consider the low-rank plus diagonal parametrization, both with tied and untied projection matrices. We set the state size and maximum rank to either reduce the total number of parameters compared to the baselines, or to keep the number of parameters approximately the same while increasing the memory capacity. Results are shown in table 2.

Table 2: Character-level language modeling results

Architecture | dropout | tied | state size | max rank | params | test per-char. perplexity
Baseline GRU | No | - | 1000 | - | 3.11M | 2.96
Baseline GRU | Yes | - | 1000 | - | 3.11M | 2.92
Baseline GRU | Yes | - | 3298 | - | 33.0M | 2.77
Baseline LSTM | Yes | - | 1000 | - | 4.25M | 2.92
Low-rank plus diag. GRU | No | No | 1000 | 64 | 0.49M | 2.92
Low-rank plus diag. GRU | No | No | 3298 | 128 | 2.89M | 2.95
Low-rank plus diag. GRU | Yes | No | 3298 | 128 | 2.89M | 2.86
Low-rank plus diag. GRU | Yes | No | 5459 | 64 | 2.69M | 2.82
Low-rank plus diag. GRU | Yes | Yes | 5459 | 64 | 1.99M | 2.81
Low-rank plus diag. GRU | No | Yes | 1000 | 64 | 0.46M | 2.90
Low-rank plus diag. GRU | Yes | Yes | 4480 | 128 | 2.78M | 2.86
Low-rank plus diag. GRU | Yes | Yes | 6985 | 64 | 2.54M | 2.76
Low-rank plus diag. LSTM | Yes | No | 1740 | 300 | 4.25M | 2.86

Our low-rank plus diagonal parametrization reduces the model per-character perplexity (the base-2 exponential of the bits-per-character entropy). Both the tied and untied versions perform equally when the state size is the same, but the tied version performs better when the number of parameters is kept the same, presumably due to the increased memory capacity of the state vector. Our best model has an extreme bottleneck, over a hundred times smaller than the state size, while the word-level language models trained by Józefowicz et al. (2016) use bottlenecks only four to eight times smaller than the state size. We conjecture that this difference is due to our usage of the "plus diagonal" parametrization. In terms of absolute perplexity, our results are worse than published ones (e.g. Graves (2013)), although they may not be directly comparable, since published results generally use different training and evaluation schemes, such as preserving the network state between different sentences.

In order to address these experimental differences, we ran additional experiments using LSTM architectures, trying to replicate the alphabet and sentence segmentation used in Graves (2013), although we could not obtain the same baseline performance even using the Adam optimizer (using SGD+momentum yields even worse results). In fact, we obtained approximately the same perplexity as our baseline GRU model with the same state size. We applied the Low-rank plus diagonal parametrization to our LSTM architecture while maintaining the same number of parameters as the baseline. We obtained notable perplexity improvements over the baseline. Refer to appendix A.3 for the experimental details.
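Since per-character perplexity is defined here as the base-2 exponential of the bits-per-character entropy, the conversion from an average negative log-likelihood in nats can be sketched as follows (my own helper, for clarity):

    import math

    def per_char_perplexity(total_nll_nats, num_chars):
        # bits per character = nats / ln(2); perplexity = 2 ** (bits per character)
        bits_per_char = total_nll_nats / (num_chars * math.log(2.0))
        return 2.0 ** bits_per_char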
We performed additional exploratory experiments on word-level language modeling and subword-level neural machine translation (Bahdanau et al., 2014; Sennrich et al., 2015) with GRU-based architectures, but we were not able to achieve significant accuracy improvements, which is not particularly surprising given that in these models most parameters are contained in the token embedding and output matrices, thus low-dimensional parametrizations of the recurrent matrices have little effect on the total number of parameters. We reserve experimentation on character-level neural machine translation (Ling et al., 2015; Chung et al., 2016; Lee et al., 2016) for future work.

4 CONCLUSIONS AND FUTURE WORK

We proposed low-dimensional parametrizations for passthrough neural networks based on low-rank or low-rank plus diagonal decompositions of the $n \times n$ matrices that occur in the hidden layers. We experimentally compared our models with state of the art models, obtaining competitive results, including a near state of the art result for the randomly-permuted sequential MNIST task. Our parametrizations are an alternative to the convolutional parametrizations explored by Srivastava et al. (2015); He et al. (2015); Kaiser & Sutskever (2015). Since our architectural innovations are orthogonal to these approaches, they can in principle be combined. Additionally, alternative parametrizations could include non-linear activation functions, similar to the network-in-network approach of Lin et al. (2013). We leave the exploration of these extensions to future work.

REFERENCES

Arjovsky, Martin, Shah, Amar, and Bengio, Yoshua. Unitary evolution recurrent neural networks. CoRR, abs/1511.06464, 2015. URL http://arxiv.org/abs/1511.06464.
Bahdanau, Dzmitry, Cho, Kyunghyun, and Bengio, Yoshua. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014. URL http://arxiv.org/abs/1409.0473.
Bengio, Yoshua, Simard, Patrice, and Frasconi, Paolo. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 5(2):157–166, 1994.
Cho, Kyunghyun, van Merriënboer, Bart, Bahdanau, Dzmitry, and Bengio, Yoshua. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014a.
Cho, Kyunghyun, van Merrienboer, Bart, Gulcehre, Caglar, Bougares, Fethi, Schwenk, Holger, and Bengio, Yoshua. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014b.
Chung, Junyoung, Cho, Kyunghyun, and Bengio, Yoshua. A character-level decoder without explicit segmentation for neural machine translation. arXiv preprint arXiv:1603.06147, 2016.
Cooijmans, T., Ballas, N., Laurent, C., Gülçehre, Ç., and Courville, A. Recurrent batch normalization. ArXiv e-prints, March 2016.
Danihelka, I., Wayne, G., Uria, B., Kalchbrenner, N., and Graves, A. Associative long short-term memory. ArXiv e-prints, February 2016.
Gal, Yarin. A theoretically grounded application of dropout in recurrent neural networks. arXiv preprint arXiv:1512.05287, 2015.
Glorot, Xavier, Bordes, Antoine, and Bengio, Yoshua. Deep sparse rectifier neural networks. In International Conference on Artificial Intelligence and Statistics, pp. 315–323, 2011.
Graves, Alex. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
Graves, Alex and Schmidhuber, Jürgen. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5):602–610, 2005.
Graves, Alex, Mohamed, Abdel-rahman, and Hinton, Geoffrey E. Speech recognition with deep recurrent neural networks. CoRR, abs/1303.5778, 2013. URL http://arxiv.org/abs/1303.5778.
Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
Henaff, M., Szlam, A., and LeCun, Y. Orthogonal RNNs and long-memory tasks. ArXiv e-prints, February 2016.
Hochreiter, Sepp. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Technische Universität München, 1991.
Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
Iyyer, Mohit, Boyd-Graber, Jordan, Claudino, Leonardo, Socher, Richard, and Daumé III, Hal. A neural network for factoid question answering over paragraphs. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 633–644, 2014.
Józefowicz, Rafal, Vinyals, Oriol, Schuster, Mike, Shazeer, Noam, and Wu, Yonghui. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.
Kaiser, Lukasz and Sutskever, Ilya. Neural GPUs learn algorithms. CoRR, abs/1511.08228, 2015. URL http://arxiv.org/abs/1511.08228.
Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
Le, Quoc V., Jaitly, Navdeep, and Hinton, Geoffrey E. A simple way to initialize recurrent networks of rectified linear units. arXiv preprint arXiv:1504.00941, 2015.
LeCun, Yann, Huang, Fu Jie, and Bottou, Leon. Learning methods for generic object recognition with invariance to pose and lighting. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), volume 2, pp. II–97. IEEE, 2004.
Lee, Jason, Cho, Kyunghyun, and Hofmann, Thomas. Fully character-level neural machine translation without explicit segmentation. arXiv preprint arXiv:1610.03017, 2016.
Lin, Min, Chen, Qiang, and Yan, Shuicheng. Network in network. arXiv preprint arXiv:1312.4400, 2013.
Ling, Wang, Trancoso, Isabel, Dyer, Chris, and Black, Alan W. Character-based neural machine translation. arXiv preprint arXiv:1511.04586, 2015.
Sak, Hasim, Senior, Andrew W., and Beaufays, Françoise. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In INTERSPEECH, pp. 338–342, 2014.
Salimans, Tim and Kingma, Diederik P. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv preprint arXiv:1602.07868, 2016.
Sennrich, Rico, Haddow, Barry, and Birch, Alexandra. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909, 2015.
Sennrich, Rico, Haddow, Barry, and Birch, Alexandra. Edinburgh neural machine translation systems for WMT16. arXiv preprint arXiv:1606.02891, 2016.
Srivastava, Nitish, Hinton, Geoffrey, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
Srivastava, Rupesh Kumar, Greff, Klaus, and Schmidhuber, Jürgen. Highway networks. arXiv preprint arXiv:1505.00387, 2015.
Sun, Chen, Shetty, Sanketh, Sukthankar, Rahul, and Nevatia, Ram. Temporal localization of fine-grained actions in videos by domain transfer from web images. In Proceedings of the 23rd Annual ACM Conference on Multimedia, pp. 371–380. ACM, 2015.
Tang, Yichuan. Deep learning using linear support vector machines. arXiv preprint arXiv:1306.0239, 2013.
Tieleman, Tijmen and Hinton, Geoffrey. Lecture 6.5 - RMSProp, 2012.
Vinyals, Oriol, Kaiser, Lukasz, Koo, Terry, Petrov, Slav, Sutskever, Ilya, and Hinton, Geoffrey. Grammar as a foreign language. arXiv preprint arXiv:1412.7449, 2014.
Zaremba, Wojciech, Sutskever, Ilya, and Vinyals, Oriol. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
Zhang, Saizheng, Wu, Yuhuai, Che, Tong, Lin, Zhouhan, Memisevic, Roland, Salakhutdinov, Ruslan, and Bengio, Yoshua. Architectural complexity measures of recurrent neural networks. arXiv preprint arXiv:1602.08210, 2016.

A APPENDIX: EXPERIMENTAL DETAILS

A.1 LOW-RANK HIGHWAY NETWORKS

As a preliminary exploratory experiment, we applied the low-rank and low-rank plus diagonal Highway Network architectures to the classic benchmark task of handwritten digit classification on the MNIST dataset, in its permutation-invariant (i.e. non-convolutional) variant.

We used the low-rank architecture described by equations 3 and 4, with $T = 5$ hidden layers, ReLU activation function, state dimension $n = 1024$ and maximum rank (internal dimension) $d = 256$. The input-to-state layer is a dense $784 \times 1024$ matrix followed by a (biased) ReLU activation, and the state-to-output layer is a dense $1024 \times 10$ matrix followed by a (biased) identity activation. We did not use any convolution layer, pooling layer or data augmentation technique. We used dropout (Srivastava et al., 2014) in order to achieve regularization. We further applied L2-regularization with coefficient $1 \times 10^{-3}$ per example on the hidden-to-output parameter matrix. We also used batch normalization (Ioffe & Szegedy, 2015) after the input-to-state matrix and after each parameter matrix in the hidden layers. Initial bias vectors are all initialized at zero, except for those of the transform functions in the hidden layers, which are initialized at 1.0. We trained to minimize the sum of the per-class L2-hinge loss plus the L2-regularization cost (Tang, 2013). Optimization was performed using Adam (Kingma & Ba, 2014) with standard hyperparameters, with a learning rate starting at $3 \times 10^{-3}$ and halving every three epochs without validation improvements. Mini-batch size was equal to 100. Code is available online (https://github.com/Avmb/lowrank-highwaynetwork).

We obtained perfect training accuracy and 98.83% test accuracy. While this result does not reach the state of the art for this task (99.13% test accuracy with unsupervised dimensionality reduction reported by Tang (2013)), it is still relatively close. We also tested the low-rank plus diagonal Highway Network architecture of eq. 5 with the same settings as above, obtaining a test accuracy of 98.64%. The inclusion of diagonal parameter matrices does not seem to help in this particular task.
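The per-class L2-hinge loss mentioned above can be sketched as follows; this is my own illustration of one common form of this loss (squared hinge with ±1 targets per class), not the repository code:

    import numpy as np

    def per_class_l2_hinge(scores, label, margin=1.0):
        # squared hinge summed over classes; target is +1 for the true class, -1 otherwise
        targets = -np.ones_like(scores)
        targets[label] = 1.0
        return np.sum(np.maximum(0.0, margin - targets * scores) ** 2)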
A.2 LOW-RANK GRUS

In our experiments (except language modeling) we optimized using RMSProp (Tieleman & Hinton, 2012) with gradient component clipping at 1. Code is available online (https://github.com/Avmb/lowrank-gru). Our code is based on the published uRNN code (https://github.com/amarshah/complex_RNN; specifically, on the LSTM implementation) by the original authors, for the sake of a fair comparison. In order to achieve convergence on the memory task, however, we had to slightly modify the optimization procedure: specifically, we replaced gradient component clipping with gradient norm clipping (with NaN detection and recovery), and we added a small $\epsilon = 1 \times 10^{-8}$ term in the parameter update formula. No modifications of the original optimizer implementation were required for the other tasks.

In order to address the numerical instability issues in the memory tasks, we also consider a variant of our Low-rank plus diagonal GRU where we apply weight normalization, as described by Salimans & Kingma (2016), to all the parameter matrices except the output one and the diagonal matrices. All these matrices have trainable scale parameters, except for the projection matrices. We further apply a hard constraint on the matrix row norms by clipping them at 10 after each update. We disable NaN detection and recovery during training. The rationale behind this approach, in addition to the general benefits of normalization, is that the low-rank parametrization potentially introduces stability issues, because the model is invariant to multiplying a row of an $R$-matrix by a scalar $s$ and dividing the corresponding column of the $L$-matrix by $s$, which in principle allows the parameters of either matrix to grow very large in magnitude, eventually resulting in overflows or other pathological behavior. The weight row max-norm constraint can counter this problem, but the constraint alone could make the optimization problem harder by reducing and distorting the parameter space. Fortunately we can counter this with weight normalization, which makes the model invariant to the row norms of the parameter matrices.

In the language modeling experiment, for consistency with existing code, we used a variant of the GRU where the reset gate is applied after the multiplication by the recurrent proposal matrix rather than before. Specifically:

\[
\begin{aligned}
\pi_{in}(u;\theta) &= \theta_{in} \\
f_\omega(x(t-1), t, u; \theta) &= \sigma(U_\omega u(t) + W_\omega x(t-1) + b_\omega) \\
f_\tau(x(t-1), t, u; \theta) &= \sigma(U_\tau u(t) + W_\tau x(t-1) + b_\tau) \\
f_\gamma(x(t-1), t, u; \theta) &= 1_n - f_\tau(x(t-1), t, u; \theta) \\
f_\pi(x(t-1), t, u; \theta) &= \tanh\!\big(U_\pi u(t) + (W_\pi x(t-1)) \odot f_\omega(x(t-1), t, u; \theta) + b_\pi\big)
\end{aligned} \tag{7}
\]

The character vocabulary size is 51; we use no character embeddings. Training is performed with Adam with learning rate $1 \times 10^{-3}$. Bayesian recurrent dropout was adapted from the original LSTM architecture of Gal (2015) to the GRU architecture as in Sennrich et al. (2016). Our implementation is based on the "dl4mt" tutorial (https://github.com/nyu-dl/dl4mt-tutorial) and the Nematus neural machine translation system (https://github.com/rsennrich/nematus). The code is available online (https://github.com/Avmb/dl4mt-lm/tree/master/lm).
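The row max-norm constraint described above can be sketched as a simple post-update clipping step (my own illustration, not the released training code):

    import numpy as np

    def clip_row_norms(W, max_norm=10.0):
        # rescale any row whose L2 norm exceeds max_norm; applied after each update
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        scale = np.minimum(1.0, max_norm / np.maximum(norms, 1e-12))
        return W * scale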
A.3 LOW-RANK LSTMS

For our LSTM experiments, we modified the implementation of the LSTM language model with Bayesian recurrent dropout by Gal (2015) (https://github.com/yaringal/BayesianRNN). In order to match the setup of Graves (2013) more closely, we used a vocabulary size of 49, no embedding layer and one LSTM layer. We found no difference on the baseline model between using peephole connections and not using them, therefore we did not use them on the Low-rank plus diagonal model. We use recurrent dropout and the Adam optimizer with learning rate $2 \times 10^{-4}$.

The baseline LSTM model is defined by the gates

\[
\begin{aligned}
\pi_{in}(u;\theta) &= 0_{\hat{n}} \\
f_\omega(x(t-1), t, u; \theta) &= \sigma(U_\omega u(t) + W_\omega \tilde{x}(t-1) + b_\omega) \\
f_\tau(x(t-1), t, u; \theta) &= \sigma(U_\tau u(t) + W_\tau \tilde{x}(t-1) + b_\tau) \\
f_\gamma(x(t-1), t, u; \theta) &= \sigma(U_\gamma u(t) + W_\gamma \tilde{x}(t-1) + b_\gamma) \\
f_\pi(x(t-1), t, u; \theta) &= \tanh(U_\pi u(t) + W_\pi \tilde{x}(t-1) + b_\pi)
\end{aligned} \tag{8}
\]

with the state components evolving as

\[
\begin{aligned}
\hat{x}(t) &= f_\pi(x(t-1), t, u; \theta) \odot f_\tau(x(t-1), t, u; \theta) + \hat{x}(t-1) \odot f_\gamma(x(t-1), t, u; \theta) \\
\tilde{x}(t) &= f_\omega(x(t-1), t, u; \theta) \odot \tanh(\hat{x}(t))
\end{aligned} \tag{9}
\]

The low-rank plus diagonal parametrization is applied to the recurrence matrices $W_*$ as in the GRU models. The code is available online (https://github.com/Avmb/lowrank-lstm).
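For completeness, one step of eqs. (8)-(9) with the low-rank plus diagonal recurrence can be sketched as follows (again an illustration under my own naming conventions, not the repository code; diagonal matrices are stored as vectors):

    import numpy as np

    def lrd_lstm_step(c, h, u_t, p):
        # c is the passthrough cell state (x hat), h is the output state (x tilde)
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
        def W(name, v):
            return p['L_' + name] @ (p['R_' + name] @ v) + p['D_' + name] * v
        out_gate    = sigmoid(p['U_o'] @ u_t + W('o', h) + p['b_o'])   # f_omega
        in_gate     = sigmoid(p['U_i'] @ u_t + W('i', h) + p['b_i'])   # f_tau
        forget_gate = sigmoid(p['U_f'] @ u_t + W('f', h) + p['b_f'])   # f_gamma
        candidate   = np.tanh(p['U_c'] @ u_t + W('c', h) + p['b_c'])   # f_pi
        c_new = candidate * in_gate + c * forget_gate                  # eq. (9), cell update
        h_new = out_gate * np.tanh(c_new)                              # eq. (9), output state
        return c_new, h_new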
Hkcx-LeVe
rkaRFYcgl
ICLR.cc/2017/conference/-/paper514/official/review
{"title": "my review", "rating": "5: Marginally below acceptance threshold", "review": "The paper proposes a low-rank version of pass-through networks to better control capacity, which can be useful in some cases, as shown in the experiments.\nThat said, I found the results not very convincing overall. Results are overall not as good as state-of-the-art on sequential MNIST or the memory task, but add one more hyper-parameter to tune. As I said, it would help to show in Tables and/or Figures competing approaches like uRNNs."}
review
2017
ICLR.cc/2017/conference
Low-rank passthrough neural networks
["Antonio Valerio Miceli Barone"]
Deep learning consists in training neural networks to perform computations that sequentially unfold in many steps over a time dimension or an intrinsic depth dimension. For large depths, this is usually accomplished by specialized network architectures that are designed to mitigate the vanishing gradient problem, e.g. LSTMs, GRUs, Highway Networks and Deep Residual Networks, which are based on a single structural principle: the state passthrough. We observe that these "Passthrough Networks" architectures enable the decoupling of the network state size from the number of parameters of the network, a possibility that is exploited in some recent works but not thoroughly explored. In this work we propose simple, yet effective, low-rank and low-rank plus diagonal matrix parametrizations for Passthrough Networks which exploit this decoupling property, reducing the data complexity and memory requirements of the network while preserving its memory capacity. We present competitive experimental results on several tasks, including a near state of the art result on sequential randomly-permuted MNIST classification, a hard task on natural data.
["Deep learning"]
https://openreview.net/forum?id=rkaRFYcgl
https://openreview.net/pdf?id=rkaRFYcgl
https://openreview.net/forum?id=rkaRFYcgl&noteId=Hkcx-LeVe
Under review as a conference paper at ICLR 2017

LOW-RANK PASSTHROUGH NEURAL NETWORKS

Antonio Valerio Miceli Barone
School of Informatics, The University of Edinburgh
amiceli@inf.ed.ac.uk
(Work partially done while affiliated with the University of Pisa.)

ABSTRACT

Deep learning consists in training neural networks to perform computations that sequentially unfold in many steps over a time dimension or an intrinsic depth dimension. For large depths, this is usually accomplished by specialized network architectures that are designed to mitigate the vanishing gradient problem, e.g. LSTMs, GRUs, Highway Networks and Deep Residual Networks, which are based on a single structural principle: the state passthrough. We observe that these "Passthrough Networks" architectures enable the decoupling of the network state size from the number of parameters of the network, a possibility that is exploited in some recent works but not thoroughly explored. In this work we propose simple, yet effective, low-rank and low-rank plus diagonal matrix parametrizations for Passthrough Networks which exploit this decoupling property, reducing the data complexity and memory requirements of the network while preserving its memory capacity. We present competitive experimental results on several tasks, including a near state of the art result on sequential randomly-permuted MNIST classification, a hard task on natural data.

1 OVERVIEW

Deep neural networks can perform non-trivial computations by the repeated application of parametric non-linear transformation layers to vectorial (or, more generally, tensorial) data. This staging of many computation steps can be done over a time dimension for tasks involving sequential inputs or outputs of varying length, yielding a recurrent neural network, or over an intrinsic circuit depth dimension, yielding a deep feed-forward neural network, or both. Training these deep models is complicated by the exploding and vanishing gradient problems (Hochreiter, 1991; Bengio et al., 1994).

Various network architectures have been proposed to ameliorate the vanishing gradient problem in the recurrent setting, such as the LSTM (Hochreiter & Schmidhuber, 1997; Graves & Schmidhuber, 2005), the GRU (Cho et al., 2014b), etc. These architectures led to a number of breakthroughs in different tasks in NLP, computer vision, etc. (Graves et al., 2013; Cho et al., 2014a; Bahdanau et al., 2014; Vinyals et al., 2014; Iyyer et al., 2014). Similar methods have also been applied in the feed-forward setting with architectures such as Highway Networks (Srivastava et al., 2015), Deep Residual Networks (He et al., 2015), and so on. All these architectures are based on a single structural principle which, in this work, we will refer to as the state passthrough. We will thus refer to these architectures as Passthrough Networks.

Another difficulty in training neural networks is the trade-off between the network representation power and its number of trainable parameters, which affects its data complexity during training in addition to its implementation memory requirements. On one hand, the number of parameters can be thought of as the number of tunable "knobs" that need to be set to represent a function; on the other hand, it also constrains the size of the partial results that are propagated inside the network.
In typicalfully connected networks, a layer acting on a n-dimensional state vector has O(n2)parameters storedin one or more matrices, but there can be many functions of practical interest that are simple enoughto be represented by a relatively small number of bits while still requiring some sizable amount ofmemory to be computed. Therefore, representing these functions on a fully connected neural networkWork partially done while affiliated with University of Pisa.1Under review as a conference paper at ICLR 2017can be wasteful in terms of number of parameters. The full parameterization implies that, at each step,all the information in each state component can affect all the information in any state component atthe next step. Classical physical systems, however, consist of spatially separated parts with primarilylocal interactions, long-distance interactions are possible but they tend to be limited by propagationdelays, bandwidth and noise. Therefore it may be beneficial to bias our model class towards modelsthat tend to adhere to these physical constraints by using a parametrization which reduces the numberof parameters required to represent them. This can be accomplished by imposing some constraintson thennmatrices that parametrize the state transitions. One way of doing this is to imposea convolutional structure on these matrices (LeCun et al., 2004; Krizhevsky et al., 2012), whichcorresponds to strict locality and periodicity constraints as in a cellular automaton. These constraintswork well in certain domains such as vision, but may be overly restrictive in other domains.In this work we observe that the state passthrough allows for a systematic decoupling of the networkstate size from the number of parameters: since by default the state vector passes mostly unalteredthrough the layers, each layer can be made simple enough to be described only by a small number ofparameters without affecting the overall memory capacity of the network, effectively spreading thecomputation over the depth or time dimension of the network, but without making the network "thin".This has been exploited by some convolutional passthrough architectures (Srivastava et al., 2015; Heet al., 2015; Kaiser & Sutskever, 2015), or architectures with addressable read-write memory (Graveset al., 2014; Danihelka et al., 2016).In this work we propose simple but effective low-dimensional parametrizations that exploit thisdecoupling based on low-rank or low-rank plus diagonal matrix decompositions. Our approachextends the LSTM architecture with a single projection layer by Sak et al. (2014) which has beenapplied to speech recognition, natural language modeling (Józefowicz et al., 2016), video analysis(Sun et al., 2015), etc. We provide experimental evaluation of our approach on GRU and LSTMarchitectures on various machine learning tasks, including a near state of the art result for the hardtask of sequential randomly-permuted MNIST image recognition (Le et al., 2015).2 M ODELA neural network can be described as a dynamical system that transforms an input uinto an output yover multiple time steps T. At each step tthe network has a n-dimensional state vector x(t)2Rndefined asx(t) =in(u;) ift= 0f(x(t1);t;u; )ift1(1)whereinis astate initialization function ,fis astate transition function and2Rkis vector oftrainable parameters. The output y=out(x(0 :T);)is generated by an output function out, wherex(0 :T)denotes the whole sequence of states visited during the execution. 
In a feed-forward neuralnetwork with constant hidden layer width n, the inputu2Rmand the output y2Rlare vectors offixed dimension mandlrespectively, Tis a model hyperparameter. In a recurrent neural networkthe inputuis typically a list of T m -dimensional vectors u(t)2Rmfort21;:::;T whereTis variable, the output yis either a single l-dimensional vector or a list of Tsuch vectors. Otherneural architectures, such as "seq2seq" transducers without attention (Cho et al., 2014a), can be alsodescribed within this framework.2.1 P ASSTHROUGH NETWORKSPassthrough networks can be defined as networks where the state transition function fhas a specialform such that, at each step tthe state vector x(t)(or a sub-vector ^x(t)) is propagated to the next stepmodified only by some (nearly) linear, element-wise transformation.Let the state vector x(t)(^x(t);~x(t))be the concatenation of ^x(t)2R^nand~x(t)2R~nwith^n+ ~n=n(where ~ncan be equal to zero). We define a network to have a state passthrough on^xif^xevolves as^x(t) =f(x(t1);t;u; )f(x(t1);t;u; ) + ^x(t1)f(x(t1);t;u; ) (2)wherefis the next state proposal function ,fis the transform function ,fis the carry function anddenotes element-wise vector multiplication. The rest of the state vector ~x(t), if present, evolves2Under review as a conference paper at ICLR 2017^x(t−1)fγfτfπ+^x(t)xWa)xRb)LxRc)L00D+Figure 1: Left: Generic state passthrough hidden layer, optional non-passthrough state ~x(t)and per-timestep input u(t)are not shown. Right: a) Full matrix parametrization. b) Low-rank parametrization.c) Low-rank plus diagonal parametrization.according to some other function ~f. In practice ~x(t)is only used in LSTM variants, while in otherpassthrough architectures ^x(t) =x(t).As concrete example, we can describe a fully connected Highway Network asf(x(t1);t;u; ) =g((W)tx(t1) +(b)t)f(x(t1);t;u; ) =((W)tx(t1) +(b)t)f(x(t1);t;u; ) = 1nf(x(t1);t;u; )(3)wheregis an element-wise activation function, usually the ReLU (Glorot et al., 2011) or thehyperbolic tangent, is the element-wise logistic sigmoid, and 8t21;:::;T , the parameters (W)tand(W)t are matrices inRnnand(b)tand(b)tare vectors inRn. Dependence on the input uoccurs only through the initialization function, which is model-specific and is omitted here, as is theoutput function.2.2 L OW-RANK PASSTHROUGH NETWORKSIn fully connected architectures there are nnmatrices that act on the state vector, such as the(W)t and(W)t matrices of the Highway Network of eq. 3. Each of these matrices has n2entries,thus for large n, the entries of these matrices can make up the majority of independently trainableparameters of the model. As discussed in the previous section, this parametrization can be wasteful.We impose a low-rank constraint on these matrices. This is easily accomplished by rewriting each ofthese matrices as the product of two matrices where the inner dimension dis a model hyperparameter.For instance, in the case of the Highway Network of eq. 3 we can redefine 8t21;:::;T(W)t =(L)t(R)t(W)t =(L)t(R)t(4)where(L)t;(L)t2Rndand(R)t;(R)t2Rdn. Whend<n= 2this result in a reduction ofthe number of trainable parameters of the model.Even whenn=2d<n , while the total number of parameter increases, the number of degrees offreedom of the model still decreases, because low-rank factorization are unique only up to arbitraryddinvertible matrices, thus the number of independent degrees of freedom of a low-rank layer is3Under review as a conference paper at ICLR 20172ndd2. 
However, we don’t know whether the training optimizers can exploit this kind of redundancy,thus in this work we restrict to low-rank parametrizations where the number of parameters is strictlyreduced.This low-rank constraint can be thought as a bandwidth constraint on the computation performed ateach step: the Rmatrices first project the state into a smaller subspace, extracting the informationneeded for that specific step, then the Lmatrices project it back to the original state space, spreadingthe selected information to all the state components that need to be updated. A similar approach hasbeen proposed for the LSTM architecture by Sak et al. (2014), although they force the Rmatrices tobe the same for all the functions of the state transition, while we allow each parameter matrix to beparametrized independently by a pair of RandLmatrices.Low-rank passthrough architectures are universal in that they retain the same representation classesof their parent architectures. This is trivially true if the inner dimension dis allowed to be O(n)inthe worst case, and for some architectures even if dis held constant. For instance, it is easily shownthat for any Highway Network with state size nandThidden layers and for any >0, there exist aLow-rank Highway Network with d= 1, state size at most 2nand at mostnTlayers that computesthe same function within an margin of error.2.3 L OW-RANK PLUS DIAGONAL PASSTHROUGH NETWORKSAs we show in the experimental section, on some tasks the low-rank constraint may prove to beexcessively restrictive if the goal is to train a model with fewer parameters than one with arbitrarymatrices. A simple extension is to add to each low-rank parameter matrix a diagonal parametermatrix, yielding a matrix that is full-rank but still parametrized in a low-dimensional space. Forinstance, for the Highway Network architecture we modify eq. 4 to(W)t =(L)t(R)t+(D)t(W)t =(L)t(R)t+(D)t(5)where(D)t;(D)t2Rnnare trainable diagonal parameter matrices.It may seem that adding diagonal parameter matrices is redundant in passthrough networks. After all,the state passthrough itself can be considered as a diagonal matrix applied to the state vector, whichis then additively combined to the new proposed state computed by the ffunction. However, sincethe state passthrough completely skips over all non-linear activation functions, these formulationsare not equivalent. In particular, the low-rank plus diagonal parametrization may help in recurrentneural networks which receive input at each time step, since they allow each component of the statevector to directly control how much input signal is inserted into it at each step. We demonstratethe effectiveness of this model in the sequence copy and sequential MNIST tasks described in theexperiments section.3 E XPERIMENTSThe main content of this section reports several experiments on Low-rank and Low-rank plus diagonalGRUs, and an experiment using these parametrizations on a LSTM for language modeling.A preliminary experiment on Low-rank Highway Networks on the MNIST dataset is reported inappendix A.1.We applied the Low-rank and Low-rank plus diagonal GRU architectures to a subset of sequentialbenchmarks described in the Unitary Evolution Recurrent Neural Networks article by Arjovsky et al.(2015), specifically the memory task, the addition task and the sequential randomly permuted MNISTtask. For the memory tasks, we also considered two different variants proposed by Danihelka et al.(2016) and Henaff et al. (2016) which are hard for the uRNN architecture. 
We chose to compareagainst the uRNN architecture because it set state of the art results in terms of both data complexityand accuracy and because it is an architecture with similar design objectives as low-rank passthrougharchitectures, namely a low-dimensional parametrization and the mitigation of the vanishing gradientproblem, but it is based on quite different principles.4Under review as a conference paper at ICLR 2017The GRU architecture (Cho et al., 2014b) is a passthrough recurrent neural network defined asin(u;) =inf!(x(t1);t;u; ) =(U!u(t) +(W!)x(t1) +(b!))f(x(t1);t;u; ) =(Uu(t) +(W)x(t1) +(b))f(x(t1);t;u; ) = 1nf(x(t1);t;u; )f(x(t1);t;u; ) =tanh(Uu(t) +(W)(x(t1)f!(x(t1);t;u; )) +(b))(6)We turn this architecture into the Low-rank GRU architecture by redefining each of the Wmatricesas the product of two matrices with inner dimension d. For the memory tasks, which turned out to bedifficult for the low-rank parametrization, we also consider the low-rank plus diagonal parametrization.We also applied the low-rank plus diagonal parametrization in the sequential permuted MNIST taskand a character-level language modeling task on the Penn Treebank corpus. For the languagemodeling task, we also experimented with Low-rank plus diagonal LSTMs. Refer to appendix A.2for model details.3.0.1 M EMORY TASKThe input of an instance of this task is a sequence of T=N+ 20 discrete symbols in a ten symbolalphabetai:i20;:::9, encoded as one-hot vectors. The first 10symbols in the sequence are "data"symbols i.i.d. sampled from a0;:::;a 7, followed by N1"blank"a8symbols, then a distinguished"run" symbol a9, followed by 10more "blank" a8symbols. The desired output sequence consistsofN+ 10 "blank"a8symbols followed by the 10"data" symbols as they appeared in the inputsequence. Therefore the model has to remember the 10"data" symbol string over the temporal gap ofsizeN, which is challenging for a recurrent neural network when Nis large. In our experiment wesetN= 500 , which is the hardest setting explored in the uRNN work. The training set consists of100;000training examples and 10;000validation/test examples. The architecture is described by eq.(6), with an additional output layer with a dense n10matrix followed a (biased) softmax. We trainto minimize the cross-entropy loss.We were able to solve this task using a GRU with full recurrent matrices with state size n= 128 ,learning rate 1103, mini-batch size 20, initial bias of the carry functions (the "update" gates)4:0, however this model has many more parameters, nearly 50;000in the recurrent layer only, thanthe uRNN work which has about 6;500, and it converges much more slowly than the uRNN. Wewere not able to achieve convergence with a pure low-rank model without exceeding the numberof parameters of the fully connected model, but we achieved fast convergence with a low-rank plusdiagonal model with d= 50 , with other hyperparameters set as above. This model has still moreparameters ( 39;168in the recurrent layer, 41;738total) than the uRNN model and converges moreslowly but still reasonably fast, reaching test cross-entropy <1103nats and almost perfectclassification accuracy in less than 35;000updates.In order to obtain a fair comparison, we also train a uRNN model with state size n= 721 , resultingin approximately the same number of parameters as the low-rank plus diagonal GRU models. Thismodel very quickly reaches perfect accuracy on the training set in less than 2;000updates, but overfitsw.r.t. 
the test set.We also consider two variants of this task which are difficult for the uRNN model. For both thesetasks we used the same settings as above except that the task size parameter is set at N= 100 forconsistency with the works that introduced these variants. In the variant of Danihelka et al. (2016), thelength of the sequence to be remembered is randomly sampled between 1and10for each sequence.They manage to achieve fast convergence with their Associative LSTM architecture with 65;505parameters, and slower convergence with standard LSTM models. Our low-rank plus diagonal GRUarchitecture, which has less parameters than their Associative LSTM, performs comparably or better,reaching test cross-entropy <1103nats and almost perfect classification accuracy in less than30;000updates. In the variant of Henaff et al. (2016), the length of the sequence to be rememberedis fixed at 10but the model is expected to copy it after a variable number of time steps randomlychosen, for each sequence, between 1andN= 100 . The authors achieve slow convergence with astandard LSTM model, while our low-rank plus diagonal GRU architecture achieves fast convergence,5Under review as a conference paper at ICLR 20170 100 200 300 400 500 600Minibatch number (hundreds)0.000.020.040.060.080.100.120.14Cross-entropy (nats)Sequence copy with fixed lag N=500LRD-GRUURNN0 100 200 300 400 500 600Minibatch number (hundreds)0.000.020.040.060.080.100.120.14Cross-entropy (nats)Variable-length sequence copy with fixed lag N=100LRD-GRULRD-GRU-WNURNN0 100 200 300 400 500 600 700 800 900Minibatch number (hundreds)0.000.050.100.150.20Cross-entropy (nats)Sequence copy with variable lag N=100LRD-GRULRD-GRU-WNURNN0 20 40 60 80 100 120 140 160Minibatch number (hundreds)0.00.20.40.60.81.0Mean squared errorAddition T=7500 1000 2000 3000 4000 5000Minibatch number (hundreds)102030405060708090100Accuracy %Permuted sequential MNISTn=128, d=24n=512, d=40 2000 4000 6000 8000 10000Minibatch number (hundreds)020406080100Accuracy %Permuted sequential MNIST (low-rank plus diagonal)n=64, d=24n=128, d=24n=256, d=24n=128 (baseline)Figure 2: Top row and middle left: Low-rank plus diagonal GRU and uRNN on the sequence copytasks, cross-entropy on validation set. Middle right: Low-rank GRU on the addition task, meansquared error on validation set. Bottom row: Low-rank GRU (left) and Low-rank plus diagonal GRU(right) on the permuted sequential MNIST task, accuracy on validation set, horizontal line indicates90% accuracy.6Under review as a conference paper at ICLR 2017Table 1: Sequential permuted MNIST resultsArchitecture state size max rank params val. accuracy test accuracyBaseline GRU 128 - 51:0k 93:0% 92 :8%Low-rank GRU 128 24 20:2k 93:4% 91 :8%Low-rank GRU 512 4 19:5k 92:5% 91 :3%Low-rank plus diag. GRU 64 24 10:3k 93:1% 91 :9%Low-rank plus diag. GRU 128 24 20:6k 94:1% 93 :5%Low-rank plus diag. GRU 256 24 41:2k 95:1% 94:7%reaching test cross-entropy <1103nats and almost perfect classification accuracy in less than38;000updates, and perfect test accuracy in 87;000updates.We further train uRNN models with state size n= 721 on these variants of the memory task. Wefound that the uRNN learns faster than the low-rank plus diagonal GRU on the variable length, fixedlag task (Danihelka et al., 2016) but fails to converge within our training time limit on the fixed length,variable lag task (Henaff et al., 2016).Training the low-rank plus diagonal GRU on these tasks incurs sometimes in numerical stabilityproblems as discussed in appendix A.2. 
In order to systemically address these issues, we also trainedmodels with weight normalization (Salimans & Kingma, 2016) and weight row max-norm constraints.These models turned out to be more stable and in fact converge faster, performing on par with theuRNN on the variable length, fixed lag task.Training curves are shown in figure 2 (top and middle left).3.0.2 A DDITION TASKFor each instance of this task, the input sequence has length Tand consists of two real-valuedcomponents, at each step the first component is independently sampled from the interval [0;1]withuniform probability, the second component is equal to zero everywhere except at two randomlychosen time step, one in each half of the sequence, where it is equal to one. The result is a single realvalue computed from the final state which we want to be equal to the sum of the two elements of thefirst component of the sequence at the positions where the second component was set at one. In ourexperiment we set T= 750 .The training set consists of 100;000training examples and 10;000validation/test examples. We usea Low-rank GRU with 2ninput matrix, n1output matrix and (biased) identity output activation.We train to minimize the mean squared error loss. We use state size n= 128 , maximum rank d= 24 .This results in approximately 6;140parameters in the recurrent hidden layer. Learning rate was set at1103, mini-batch size 20, initial bias of the carry functions (the "update" gates) was set to 4.We trained on 14;500mini-batches, obtaining a mean squared error on the test set of 0:003, which isa better result than the one reported in the uRNN article, in terms of training time and final accuracy.The training curve is shown in figure 2 (middle right).3.0.3 S EQUENTIAL MNIST TASKThis task consists of handwritten digit classification on the MNIST dataset with the caveat that theinput is presented to the model one pixel value at time, over T= 784 time steps. To further increasethe difficulty of the task, the inputs are reordered according to a random permutation (fixed for all thetask instances).We use Low-rank and Low-rank plus diagonal GRUs with 1ninput matrix, n10output matrixand (biased) softmax output activation. Learning rate was set at 5104, mini-batch size 20, initialbias of the carry functions (the "update" gates) was set to 5.Results are presented in table 1 and training curves are shown in figure 2 (bottom row). All thesemodels except the one with the most extreme bottleneck ( n= 512;d= 4) exceed the reported uRNNtest accuracy of 91:4%, although they converge more slowly (hundred of thousands updates vs. tensof thousands of the uRNN). Also note that the low-rank plus diagonal GRU is more accurate than the7Under review as a conference paper at ICLR 2017Table 2: Character-level language modeling resultsArchitecture dropout tied state size max rank params test per-char. perplexityBaseline GRU No - 1000 - 3:11M 2:96Baseline GRU Yes - 1000 - 3:11M 2:92Baseline GRU Yes - 3298 - 33:0M 2:77Baseline LSTM Yes - 1000 - 4:25M 2:92Low-rank plus diag. GRU No No 1000 64 0:49M 2:92Low-rank plus diag. GRU No No 3298 128 2:89M 2:95Low-rank plus diag. GRU Yes No 3298 128 2:89M 2:86Low-rank plus diag. GRU Yes No 5459 64 2:69M 2:82Low-rank plus diag. GRU Yes Yes 5459 64 1:99M 2:81Low-rank plus diag. GRU No Yes 1000 64 0:46M 2:90Low-rank plus diag. GRU Yes Yes 4480 128 2:78M 2:86Low-rank plus diag. GRU Yes Yes 6985 64 2:54M 2:76Low-rank plus diag. 
LSTM Yes No 1740 300 4:25M 2:86full rank GRU with the same state size, while the low-rank GRU is slightly less accurate (in terms oftest accuracy), indicating the utility of the diagonal component of the parametrization for this task.These are on par with more complex architectures with time-skip connections (Zhang et al., 2016)(reported test set accuracy 94:0%). To our knowledge, at the time of this writing, the best result onthis task is the LSTM with recurrent batch normalization by Cooijmans et al. (2016) (reported testset accuracy 95:2%). The architectural innovations of these works are orthogonal to our own and inprinciple they can be combined to it.3.0.4 C HARACTER -LEVEL LANGUAGE MODELING TASKThis standard benchmark task consist of predicting the probability of the next character in a sentenceafter having observed the previous charters. Similar to Zaremba et al. (2014), we use the PennTreebank English corpus, with standard training, validation and test splits.As a baseline we use a single layer GRU either with no regularization or regularized with Bayesianrecurrent dropout (Gal, 2015). Refer to appendix A.2 for details.In our experiments we consider the low-rank plus diagonal parametrization, both with tied and untiedprojection matrices. We set the state size and maximum rank to either reduce the total number ofparameters compared to the baselines or to keep the number of parameters approximately the samewhile increasing the memory capacity. Results are shown in table 2.Our low-rank plus diagonal parametrization reduces the model per-character perplexity (the base-2exponential of the bits-per-character entropy). Both the tied and untied versions perform equallywhen the state size is the same, but the tied version performs better when the number of parameters iskept the same, presumably due to the increased memory capacity of the state vector. Our best modelhas an extreme bottleneck, over a hundred of times smaller than the state size, while the word-levellanguage models trained by Józefowicz et al. (2016) use bottlenecks of four to eight times smallerthan the state size. We conjecture that this difference is due to our usage of the "plus diagonal"parametrization. In terms of absolute perplexity, our results are worse than published ones (e.g.Graves (2013)), although they may not be directly comparable since published results generally usedifferent training and evaluation schemes, such as preserving the network state between differentsentences.In order to address these experimental differences, we ran additional experiments using LSTMarchitectures, trying to replicate the alphabet and sentence segmentation used in Graves (2013),although we could not obtain the same baseline performance even using the Adam optimizer (usingSGD+momentum yields even worse results). In fact, we obtained approximately the same perplexityas our baseline GRU model with the same state size.8Under review as a conference paper at ICLR 2017We applied the Low-rank plus diagonal parametrizations to our LSTM architecture maintaining thesame number of parameters as the baseline. We obtained notable perplexity improvements over thebaseline. 
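As a small reference point for the metric reported in Table 2, here is a minimal snippet (ours) converting a per-character cross-entropy into the per-character perplexity defined above; it assumes the cross-entropy is measured in nats.

```python
import math

def per_char_perplexity(ce_nats_per_char):
    """Base-2 exponential of bits-per-character, i.e. the natural exponential
    of the per-character cross-entropy in nats."""
    bits_per_char = ce_nats_per_char / math.log(2.0)
    return 2.0 ** bits_per_char  # identical to math.exp(ce_nats_per_char)

print(round(per_char_perplexity(1.0), 2))  # a 1 nat/char model has perplexity ~2.72
```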
Refer to appendix A.3 for the experimental details.We performed additional exploratory experiments on word-level language modeling and subword-level neural machine translation (Bahdanau et al., 2014; Sennrich et al., 2015) with GRU-basedarchitectures but we were not able to achieve significant accuracy improvements, which is not particu-larly surprising given that in these models most parameters are contained in the token embedding andoutput matrices, thus low-dimensional parametrizations of the recurrent matrices have little effecton the total number of parameters. We reserve experimentation on character-level neural machinetranslation (Ling et al., 2015; Chung et al., 2016; Lee et al., 2016) to future work.4 C ONCLUSIONS AND FUTURE WORKWe proposed low-dimensional parametrizations for passthrough neural networks based on low-rankor low-rank plus diagonal decompositions of the nnmatrices that occur in the hidden layers.We experimentally compared our models with state of the art models, obtaining competitive resultsincluding a near state of the art for the randomly-permuted sequential MNIST task.Our parametrizations are alternative to convolutional parametrizations explored by Srivastava et al.(2015); He et al. (2015); Kaiser & Sutskever (2015). Since our architectural innovations are orthogonalto these approaches, they can be in principle combined. Additionally, alternative parametrizationscould include non-linear activation functions, similar to the network-in-network approach of Lin et al.(2013). We leave the exploration of these extensions to future work.REFERENCESArjovsky, Martin, Shah, Amar, and Bengio, Yoshua. Unitary evolution recurrent neural networks. CoRR ,abs/1511.06464, 2015. URL http://arxiv.org/abs/1511.06464 .Bahdanau, Dzmitry, Cho, Kyunghyun, and Bengio, Yoshua. Neural machine translation by jointly learning toalign and translate. CoRR , abs/1409.0473, 2014. URL http://arxiv.org/abs/1409.0473 .Bengio, Yoshua, Simard, Patrice, and Frasconi, Paolo. Learning long-term dependencies with gradient descentis difficult. Neural Networks, IEEE Transactions on , 5(2):157–166, 1994.Cho, Kyunghyun, van Merriënboer, Bart, Bahdanau, Dzmitry, and Bengio, Yoshua. On the properties of neuralmachine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259 , 2014a.Cho, Kyunghyun, van Merrienboer, Bart, Gulcehre, Caglar, Bougares, Fethi, Schwenk, Holger, and Bengio,Yoshua. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXivpreprint arXiv:1406.1078 , 2014b.Chung, Junyoung, Cho, Kyunghyun, and Bengio, Yoshua. A character-level decoder without explicit segmenta-tion for neural machine translation. arXiv preprint arXiv:1603.06147 , 2016.Cooijmans, T., Ballas, N., Laurent, C., Gülçehre, Ç., and Courville, A. Recurrent Batch Normalization. ArXive-prints , March 2016.Danihelka, I., Wayne, G., Uria, B., Kalchbrenner, N., and Graves, A. Associative Long Short-Term Memory.ArXiv e-prints , February 2016.Gal, Yarin. A theoretically grounded application of dropout in recurrent neural networks. arXiv preprintarXiv:1512.05287 , 2015.Glorot, Xavier, Bordes, Antoine, and Bengio, Yoshua. Deep sparse rectifier neural networks. In InternationalConference on Artificial Intelligence and Statistics , pp. 315–323, 2011.Graves, Alex. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850 , 2013.Graves, Alex and Schmidhuber, Jürgen. Framewise phoneme classification with bidirectional lstm and otherneural network architectures. 
Neural Networks , 18(5):602–610, 2005.Graves, Alex, Mohamed, Abdel-rahman, and Hinton, Geoffrey E. Speech recognition with deep recurrent neuralnetworks. CoRR , abs/1303.5778, 2013. URL http://arxiv.org/abs/1303.5778 .9Under review as a conference paper at ICLR 2017Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural turing machines. arXiv preprint arXiv:1410.5401 , 2014.He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition.arXiv preprint arXiv:1512.03385 , 2015.Henaff, M., Szlam, A., and LeCun, Y . Orthogonal RNNs and Long-Memory Tasks. ArXiv e-prints , February2016.Hochreiter, Sepp. Untersuchungen zu dynamischen neuronalen netzen. Diploma, Technische UniversitätMünchen , 1991.Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural computation , 9(8):1735–1780,1997.Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducinginternal covariate shift. arXiv preprint arXiv:1502.03167 , 2015.Iyyer, Mohit, Boyd-Graber, Jordan, Claudino, Leonardo, Socher, Richard, and Daumé III, Hal. A neural networkfor factoid question answering over paragraphs. In Proceedings of the 2014 Conference on Empirical Methodsin Natural Language Processing (EMNLP) , pp. 633–644, 2014.Józefowicz, Rafal, Vinyals, Oriol, Schuster, Mike, Shazeer, Noam, and Wu, Yonghui. Exploring the limits oflanguage modeling. arXiv preprint arXiv:1602.02410 , 2016.Kaiser, Lukasz and Sutskever, Ilya. Neural gpus learn algorithms. CoRR , abs/1511.08228, 2015. URLhttp://arxiv.org/abs/1511.08228 .Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 ,2014.Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutionalneural networks. In Advances in neural information processing systems , pp. 1097–1105, 2012.Le, Quoc V , Jaitly, Navdeep, and Hinton, Geoffrey E. A simple way to initialize recurrent networks of rectifiedlinear units. arXiv preprint arXiv:1504.00941 , 2015.LeCun, Yann, Huang, Fu Jie, and Bottou, Leon. Learning methods for generic object recognition with invarianceto pose and lighting. In Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the2004 IEEE Computer Society Conference on , volume 2, pp. II–97. IEEE, 2004.Lee, Jason, Cho, Kyunghyun, and Hofmann, Thomas. Fully character-level neural machine translation withoutexplicit segmentation. arXiv preprint arXiv:1610.03017 , 2016.Lin, Min, Chen, Qiang, and Yan, Shuicheng. Network in network. arXiv preprint arXiv:1312.4400 , 2013.Ling, Wang, Trancoso, Isabel, Dyer, Chris, and Black, Alan W. Character-based neural machine translation.arXiv preprint arXiv:1511.04586 , 2015.Sak, Hasim, Senior, Andrew W, and Beaufays, Françoise. Long short-term memory recurrent neural networkarchitectures for large scale acoustic modeling. In INTERSPEECH , pp. 338–342, 2014.Salimans, Tim and Kingma, Diederik P. Weight normalization: A simple reparameterization to acceleratetraining of deep neural networks. arXiv preprint arXiv:1602.07868 , 2016.Sennrich, Rico, Haddow, Barry, and Birch, Alexandra. Neural machine translation of rare words with subwordunits. arXiv preprint arXiv:1508.07909 , 2015.Sennrich, Rico, Haddow, Barry, and Birch, Alexandra. Edinburgh neural machine translation systems for wmt16.arXiv preprint arXiv:1606.02891 , 2016.Srivastava, Nitish, Hinton, Geoffrey, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. 
Dropout: Asimple way to prevent neural networks from overfitting. The Journal of Machine Learning Research , 15(1):1929–1958, 2014.Srivastava, Rupesh Kumar, Greff, Klaus, and Schmidhuber, Jürgen. Highway networks. arXiv preprintarXiv:1505.00387 , 2015.Sun, Chen, Shetty, Sanketh, Sukthankar, Rahul, and Nevatia, Ram. Temporal localization of fine-grained actionsin videos by domain transfer from web images. In Proceedings of the 23rd Annual ACM Conference onMultimedia Conference , pp. 371–380. ACM, 2015.10Under review as a conference paper at ICLR 2017Tang, Yichuan. Deep learning using linear support vector machines. arXiv preprint arXiv:1306.0239 , 2013.Tieleman, Tijmen and Hinton, Geoffrey. Lecture 6.5 - rmsprop„ 2012.Vinyals, Oriol, Kaiser, Lukasz, Koo, Terry, Petrov, Slav, Sutskever, Ilya, and Hinton, Geoffrey. Grammar as aforeign language. arXiv preprint arXiv:1412.7449 , 2014.Zaremba, Wojciech, Sutskever, Ilya, and Vinyals, Oriol. Recurrent neural network regularization. arXiv preprintarXiv:1409.2329 , 2014.Zhang, Saizheng, Wu, Yuhuai, Che, Tong, Lin, Zhouhan, Memisevic, Roland, Salakhutdinov, Ruslan, and Bengio,Yoshua. Architectural complexity measures of recurrent neural networks. arXiv preprint arXiv:1602.08210 ,2016.A A PPENDIX : EXPERIMENTAL DETAILSA.1 L OW-RANK HIGHWAY NETWORKSAs a preliminary exploratory experiment, we applied the low-rank and low-rank plus diagonalHighway Network architecture to the classic benchmark task of handwritten digit classification onthe MNIST dataset, in its permutation-invariant (i.e. non-convolutional) variant.We used the low-rank architecture described by equations 3 and 4, with T= 5hidden layers, ReLUactivation function, state dimension n= 1024 and maximum rank (internal dimension) d= 256 .The input-to-state layer is a dense 7841024 matrix followed by a (biased) ReLU activation andthe state-to-output layer is a dense 102410matrix followed by a (biased) identity activation. Wedid not use any convolution layer, pooling layer or data augmentation technique. We used dropout(Srivastava et al., 2014) in order to achieve regularization. We further applied L2-regularization withcoefficient= 1103per example on the hidden-to-output parameter matrix. We also used batchnormalization (Ioffe & Szegedy, 2015) after the input-to-state matrix and after each parameter matrixin the hidden layers. Initial bias vectors are all initialized at zero except for those of the transformfunctions in the hidden layers, which are initialized at 1:0. We trained to minimize the sum of theper-class L2-hinge loss plus the L2-regularization cost (Tang, 2013). Optimization was performedusing Adam (Kingma & Ba, 2014) with standard hyperparameters, learning rate starting at 3103halving every three epochs without validation improvements. Mini-batch size was equal to 100. Codeis available online1.We obtained perfect training accuracy and 98:83% test accuracy. While this result does not reachthe state of the art for this task ( 99:13% test accuracy with unsupervised dimensionality reductionreported by Tang (2013)), it is still relatively close. We also tested the low-rank plus diagonalHighway Network architecture of eq. 5 with the same settings as above, obtaining a test accuracy of98:64%. The inclusion of diagonal parameter matrices does not seem to help in this particular task.A.2 L OW-RANK GRU SIn our experiments (except language modeling) we optimized using RMSProp (Tieleman & Hinton,2012) with gradient component clipping at 1. Code is available online2. 
Our code is based on thepublished uRNN code3(specifically, on the LSTM implementation) by the original authors for thesake of a fair comparison. In order to achieve convergence on the memory task however, we had toslightly modify the optimization procedure, specifically we changed gradient component clippingwith gradient norm clipping (with NaN detection and recovery), and we added a small = 1108term in the parameter update formula. No modifications of the original optimizer implementationwere required for the other tasks.In order to address the numerical instability issues in the memory tasks, we also consider a variantof our Low-rank plus diagonal GRU where apply weight normalization as described by Salimans &Kingma (2016) to all the parameter matrices except the output one and the diagonal matrices. All1https://github.com/Avmb/lowrank-highwaynetwork2https://github.com/Avmb/lowrank-gru3https://github.com/amarshah/complex_RNN11Under review as a conference paper at ICLR 2017these matrices have trainable scale parameters, except for the projection matrices. We further apply anhard constraint on the matrices row norms by clipping them at 10after each update. We disable NaNdetection and recovery during training. The rationale behind this approach, in addition to the generalbenefits of normalization, is that the low-rank parametrization potentially introduces stability issuesbecause the model is invariant to multiplying a row of an R-matrix by a scalar sand dividing thecorresponding column of the L-matrix bys, which in principle allows the parameters of either matrixto grow very large in magnitude, eventually resulting in overflows or other pathological behavior.The weight row max-norm constraint can counter this problem. But the constraint alone could makethe optimization problem harder by reducing and distorting the parameter space. Fortunately wecould counter this by weight normalization which makes the model invariant to the row-norms of theparameter matrices.In the language modeling experiment, for consistency with existing code, we used a variant of theGRU where the reset gate is applied after the multiplication by the recurrent proposal matrix ratherthan before. Specifically:in(u;) =inf!(x(t1);t;u; ) =(U!u(t) +(W!)x(t1) +(b!))f(x(t1);t;u; ) =(Uu(t) +(W)x(t1) +(b))f(x(t1);t;u; ) = 1nf(x(t1);t;u; )f(x(t1);t;u; ) =tanh(Uu(t) + ((W)x(t1))f!(x(t1);t;u; ) +(b))(7)The character vocabulary size if 51, we use no character embeddings. Training is performed withAdam with learning rate 1103. Bayesian recurrent dropout was adapted from the original LSTMarchitecture of Gal (2015) to the GRU architecture as in Sennrich et al. (2016).Our implementation is based on the "dl4mt" tutorial4and the Nematus neural machine translationsystem5. The code is available online6.A.3 L OW-RANK LSTM SFor our LSTM experiments, we modified the implementation of LSTM language model with Bayesianrecurrent dropout by Gal (2015)7. In order to match the setup of Graves (2013) more closely, weused a vocabulary size of 49, no embedding layer and one LSTM layer. We found no difference onthe baseline model with using peephole connections and not using them, therefore we did not usethem on the Low-rank plus diagonal model. 
We use recurrent dropout and the Adam optimizer withlearning rate 2104.The baseline LSTM model is defined by the gates:in(u;) = 0^nf!(x(t1);t;u; ) =(U!u(t) +(W!)~x(t1) +(b!))f(x(t1);t;u; ) =(Uu(t) +(W)~x(t1) +(b))f(x(t1);t;u; ) =(Uu(t) +(W)~x(t1) +(b))f(x(t1);t;u; ) =tanh(Uu(t) +(W)~x(t1) +(b))(8)with the state components evolving as:^x(t) =f(x(t1);t;u; )f(x(t1);t;u; ) + ^x(t1)f(x(t1);t;u; )~x(t) =f!(x(t1);t;u; )tanh(^x(t))(9)The low-rank plus diagonal parametrization is applied on the recurrence matrices W?as in the GRUmodels.The code is available online8.4https://github.com/nyu-dl/dl4mt-tutorial5https://github.com/rsennrich/nematus6https://github.com/Avmb/dl4mt-lm/tree/master/lm7https://github.com/yaringal/BayesianRNN8https://github.com/Avmb/lowrank-lstm12
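Stepping back to the optimization detail mentioned in appendix A.2 (gradient norm clipping with NaN detection and recovery, plus a small epsilon term in the update), the following is a minimal numpy sketch of how such a safeguard can look; the skip-on-NaN recovery policy, the function names, and the RMSProp form are our assumptions rather than the released code.

```python
import numpy as np

def clip_by_global_norm(grads, max_norm=1.0):
    """Rescale the whole gradient if its global L2 norm exceeds max_norm."""
    total_norm = np.sqrt(sum(float(np.sum(g ** 2)) for g in grads))
    if not np.isfinite(total_norm):
        return None, total_norm  # signal NaN/inf to the caller
    scale = min(1.0, max_norm / (total_norm + 1e-12))
    return [g * scale for g in grads], total_norm

def rmsprop_step(params, grads, ms, lr=1e-3, rho=0.9, eps=1e-8):
    """One RMSProp update with a small epsilon term in the denominator."""
    for p, g, m in zip(params, grads, ms):
        m *= rho
        m += (1.0 - rho) * g ** 2
        p -= lr * g / (np.sqrt(m) + eps)

# Training-loop fragment: recover from a non-finite gradient by skipping the step.
# clipped, norm = clip_by_global_norm(grads, max_norm=1.0)
# if clipped is None:
#     print("non-finite gradient norm, skipping update")
# else:
#     rmsprop_step(params, clipped, ms)
```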
ByXsQisNe
B1akgy9xx
ICLR.cc/2017/conference/-/paper157/official/review
{"title": "The connection between different models is interesting, except for Bayesian net which is superficial and need to discuss more; MNIST results are interesting but more tasks need to be explored.", "rating": "5: Marginally below acceptance threshold", "review": "Strengths\n\n- interesting to explore the connection between ReLU DNN and simplified SFNN\n- small task (MNIST) is used to demonstrate the usefulness of the proposed training methods experimentally\n- the proposed, multi-stage training methods are simple to implement (despite lacking theoretical rigor)\n\n\nWeaknesses\n\n-no results are reported on real tasks with large training set\n\n-not clear exploration on the scalability of the learning methods when training data becomes larger\n\n-when the hidden layers become stochastic, the model shares uncertainty representation with deep Bayes networks or deep generative models (Deep Discriminative and Generative Models for Pattern Recognition , book chapter in \u201cPattern Recognition and Computer Vision\u201d, November 2015, Download PDF). Such connections should be discussed, especially wrt the use of uncertainty representation to benefit pattern recognition (i.e. supervised learning via Bayes rule) and to benefit the use of domain knowledge such as \u201cexplaining away\u201d.\n\n-would like to see connections with variational autoencoder models and training, which is also stochastic with hidden layers\n", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Making Stochastic Neural Networks from Deterministic Ones
["Kimin Lee", "Jaehyung Kim", "Song Chong", "Jinwoo Shin"]
It has been believed that stochastic feedforward neural networks (SFNN) have several advantages beyond deterministic deep neural networks (DNN): they have more expressive power allowing multi-modal mappings and regularize better due to their stochastic nature. However, training SFNN is notoriously harder. In this paper, we aim at developing efficient training methods for large-scale SFNN, in particular using known architectures and pre-trained parameters of DNN. To this end, we propose a new intermediate stochastic model, called Simplified-SFNN, which can be built upon any baseline DNN and approximates certain SFNN by simplifying its upper latent units above stochastic ones. The main novelty of our approach is in establishing the connection between three models, i.e., DNN -> Simplified-SFNN -> SFNN, which naturally leads to an efficient training procedure of the stochastic models utilizing pre-trained parameters of DNN. Using several popular DNNs, we show how they can be effectively transferred to the corresponding stochastic models for both multi-modal and classification tasks on MNIST, TFD, CIFAR-10, CIFAR-100 and SVHN datasets. In particular, our stochastic model built from the wide residual network has 28 layers and 36 million parameters, where the former consistently outperforms the latter for the classification tasks on CIFAR-10 and CIFAR-100 due to its stochastic regularizing effect.
["Deep learning", "Multi-modal learning", "Structured prediction"]
https://openreview.net/forum?id=B1akgy9xx
https://openreview.net/pdf?id=B1akgy9xx
https://openreview.net/forum?id=B1akgy9xx&noteId=ByXsQisNe
Under review as a conference paper at ICLR 2017MAKING STOCHASTIC NEURAL NETWORKSFROM DETERMINISTIC ONESKimin Lee, Jaehyung Kim, Song Chong, Jinwoo ShinSchool of Electrical EngineeringKorea Advanced Institute of Science Technology, Republic of Koreafkiminlee, jaehyungkim, jinwoos g@kaist.ac.kr, songchong@kaist.eduABSTRACTIt has been believed that stochastic feedforward neural networks (SFNN) haveseveral advantages beyond deterministic deep neural networks (DNN): they havemore expressive power allowing multi-modal mappings and regularize better dueto their stochastic nature. However, training SFNN is notoriously harder. In thispaper, we aim at developing efficient training methods for large-scale SFNN, inparticular using known architectures and pre-trained parameters of DNN. To thisend, we propose a new intermediate stochastic model, called Simplified-SFNN,which can be built upon any baseline DNN and approximates certain SFNN bysimplifying its upper latent units above stochastic ones. The main novelty of ourapproach is in establishing the connection between three models, i.e., DNN !Simplified-SFNN!SFNN, which naturally leads to an efficient training pro-cedure of the stochastic models utilizing pre-trained parameters of DNN. Us-ing several popular DNNs, we show how they can be effectively transferred tothe corresponding stochastic models for both multi-modal and classification taskson MNIST, TFD, CIFAR-10, CIFAR-100 and SVHN datasets. In particular, ourstochastic model built from the wide residual network has 28 layers and 36 millionparameters, where the former consistently outperforms the latter for the classifica-tion tasks on CIFAR-10 and CIFAR-100 due to its stochastic regularizing effect.1 I NTRODUCTIONRecently, deterministic deep neural networks (DNN) have demonstrated state-of-the-art perfor-mance on many supervised tasks, e.g., speech recognition (Hinton et al., 2012a) and object recog-nition (Krizhevsky et al., 2012). One of the main components underlying these successes is on theefficient training methods for deeper and wider DNNs, which include backpropagation (Rumelhartet al., 1988), stochastic gradient descent (Robbins & Monro, 1951), dropout/dropconnect (Hintonet al., 2012b; Wan et al., 2013), batch/weight normalization (Ioffe & Szegedy, 2015; Salimans &Kingma, 2016), and various activation functions (Nair & Hinton, 2010; Gulcehre et al., 2016). Onthe other hand, stochastic feedforward neural networks (SFNN) (Neal, 1990) having random latentunits are often necessary in order to model complex stochastic natures in many real-world tasks, e.g.,structured prediction (Tang & Salakhutdinov, 2013), image generation (Goodfellow et al., 2014) andmemory networks (Zaremba & Sutskever, 2015). Furthermore, it has been believed that SFNN hasseveral advantages beyond DNN (Raiko et al., 2014): it has more expressive power for multi-modallearning and regularizes better for large-scale learning.Training large-scale SFNN is notoriously hard since backpropagation is not directly applicable. Cer-tain stochastic neural networks using continuous random units are known to be trainable efficientlyusing backpropagation under the variational techniques and the reparameterization tricks (Kingma& Welling, 2013). On the other hand, training SFNN having discrete, i.e., binary or multi-modal,random units is more difficult since intractable probabilistic inference is involved requiring too manyrandom samples. 
There have been several efforts developing efficient training methods for SFNNhaving binary random latent units (Neal, 1990; Saul et al., 1996; Tang & Salakhutdinov, 2013; Ben-gio et al., 2013; Raiko et al., 2014; Gu et al., 2015) (see Section 2.1 for more details). However,training SFNN is still significantly slower than doing DNN of the same architecture, e.g., most prior1Under review as a conference paper at ICLR 2017works on this line have considered a small number (at most 5 or so) of layers in SFNN. We aim forthe same goal, but our direction is orthogonal to them.Instead of training SFNN directly, we study whether pre-trained parameters of DNN (or easier mod-els) can be transferred to it, possibly with further fine-tuning of light cost. This approach can beattractive since one can utilize recent advances in DNN on its design and training. For example,one can design the network structure of SFNN following known specialized ones of DNN and usetheir pre-trained parameters. To this end, we first try transferring pre-trained parameters of DNNusing sigmoid activation functions to those of the corresponding SFNN directly. In our experiments,the heuristic reasonably works well. For multi-modal learning, SFNN under such a simple trans-formation outperforms DNN. Even for the MNIST classification, the former performs similarly asthe latter (see Section 2 for more details). However, it is questionable whether a similar strategyworks in general, particularly for other unbounded activation functions like ReLU (Nair & Hinton,2010) since SFNN has binary, i.e., bounded, random latent units. Moreover, it lost the regularizationbenefit of SFNN: it is rather believed that transferring parameters of stochastic models to DNN helpsits regularization, but the opposite direction is unlikely possible.To address the issues, we propose a special form of stochastic neural networks, named Simplified-SFNN, which intermediates between SFNN and DNN, having the following properties. First,Simplified-SFNN can be built upon any baseline DNN, possibly having unbounded activation func-tions. The most significant part of our approach lies in providing rigorous network knowledge trans-ferring (Chen et al., 2015) between Simplified-SFNN and DNN. In particular, we prove that param-eters of DNN can be transformed to those of the corresponding Simplified-SFNN while preservingthe performance, i.e., both represent the same mapping and features. Second, Simplified-SFNN ap-proximates certain SFNN, better than DNN, by simplifying its upper latent units above stochasticones using two different non-linear activation functions. Simplified-SFNN is much easier to trainthan SFNN while utilizing its stochastic nature for regularization.The above connection DNN !Simplified-SFNN!SFNN naturally suggests the following trainingprocedure for both SFNN and Simplified-SFNN: train a baseline DNN first and then fine-tune itscorresponding Simplified-SFNN initialized by the transformed DNN parameters. The pre-trainingstage accelerates the training task since DNN is faster to train than Simplified-SFNN. In addition,one can also utilize known DNN training techniques such as dropout and batch normalization forfine-tuning Simplified-SFNN. In our experiments, we train SFNN and Simplified-SFNN under theproposed strategy. They consistently outperform the corresponding DNN for both multi-modal andclassification tasks, where the former and the latter are for measuring the model expressive powerand the regularization effect, respectively. 
To the best of our knowledge, we are the first to confirmthat SFNN indeed regularizes better than DNN. We also construct the stochastic models followingthe same network structure of popular DNNs including Lenet-5 (LeCun et al., 1998), NIN (Linet al., 2014) and WRN (Zagoruyko & Komodakis, 2016). In particular, WRN (wide residual net-work) of 28 layers and 36 million parameters has shown the state-of-art performances on CIFAR-10and CIFAR-100 classification datasets, and our stochastic models built upon WRN outperform thedeterministic WRN on the datasets.Organization. In Section 2, we focus on DNNs having sigmoid and ReLU activation functions andstudy simple transformations of their parameters to those of SFNN. In Section 3, we consider DNNshaving general activation functions and describe more advanced transformations via introducing anew model, named Simplified-SFNN.2 S IMPLE TRANSFORMATION FROM DNN TOSFNN2.1 P RELIMINARIES FOR SFNNStochastic feedforward neural network (SFNN) is a hybrid model, which has both stochastic binaryand deterministic hidden units. We first introduce SFNN with one stochastic hidden layer (andwithout deterministic hidden layers) for simplicity. Throughout this paper, we commonly denotethe bias for unit iand the weight matrix of the `-th hidden layer by b`iandW`, respectively. Then,the stochastic hidden layer in SFNN is defined as a binary random vector with N1units, i.e., h122Under review as a conference paper at ICLR 2017f0;1gN1, drawn under the following distribution:Ph1jx=N1Yi=1Ph1ijx; wherePh1i= 1jx=W1ix+b1i: (1)In the above, xis the input vector and (x) = 1=(1 +ex)is the sigmoid function. Our conditionaldistribution of the output yis defined as follows:P(yjx) =EP(h1jx)Pyjh1=EP(h1jx)NyjW2h1+b2; 2y;whereN()denotes the normal distribution with mean W2h1+b2and (fixed) variance 2y. There-fore,P(yjx)can express a very complex, multi-modal distribution since it is a mixture of expo-nentially many normal distributions. The multi-layer extension is straightforward via a combinationof stochastic and deterministic hidden layers, e.g., see Tang & Salakhutdinov (2013), Raiko et al.(2014). Furthermore, one can use any other output distributions as like DNN, e.g., softmax forclassification tasks.There are two computational issues for training SFNN: computing expectations with respect tostochastic units in forward pass and computing gradients in backward pass. One can notice that bothare computationally intractable since they require summations over exponentially many configura-tions of all stochastic units. First, in order to handle the issue in forward pass, one can use the follow-ing Monte Carlo approximation for estimating the expectation: P(yjx)w1MMPm=1P(yjh(m));where h(m)Ph1jxandMis the number of samples. This random estimator is unbiased andhas relatively low variance (Tang & Salakhutdinov, 2013) since its accuracy does not depend on thedimensionality of h1and one can draw samples from the exact distribution. Next, in order to handlethe issue in backward pass, Neal (1990) proposed a Gibbs sampling, but it is known that it oftenmixes poorly. Saul et al. (1996) proposed a variational learning based on the mean-field approxi-mation, but it has additional parameters making the variational lower bound looser. 
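For concreteness, the Monte Carlo estimator for the forward pass described above can be sketched as follows: a minimal numpy illustration (ours) of an SFNN with one stochastic sigmoid layer and a Gaussian output, where the weights and dimensions are placeholders rather than trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N1, Dy, M = 20, 50, 5, 500  # input dim, stochastic units, output dim, MC samples
W1, b1 = 0.1 * rng.normal(size=(N1, D)), np.zeros(N1)
W2, b2 = 0.1 * rng.normal(size=(Dy, N1)), np.zeros(Dy)
sigma_y = 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_gaussian(y, mean, sigma):
    return -0.5 * np.sum(((y - mean) / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2), axis=-1)

def sfnn_loglik(x, y):
    """Monte Carlo estimate of log P(y|x) = log E_{P(h|x)} N(y | W2 h + b2, sigma_y^2)."""
    p = sigmoid(W1 @ x + b1)                         # P(h_i = 1 | x), as in eq. (1)
    h = (rng.random((M, N1)) < p).astype(float)      # M samples of the binary hidden layer
    log_p = log_gaussian(y, h @ W2.T + b2, sigma_y)  # log N(y | W2 h^(m) + b2) per sample
    # log of the sample average, computed stably with log-sum-exp
    return np.logaddexp.reduce(log_p) - np.log(M)

x, y = rng.normal(size=D), rng.normal(size=Dy)
print(sfnn_loglik(x, y))
```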
More recently,several other techniques have been proposed including unbiased estimators of the variational boundusing importance sampling (Tang & Salakhutdinov, 2013; Raiko et al., 2014) and biased/unbiasedestimators of the gradient for approximating backpropagation (Bengio et al., 2013; Raiko et al.,2014; Gu et al., 2015).2.2 S IMPLE TRANSFORMATION FROM SIGMOID -DNN AND RELU-DNN TOSFNNDespite the recent advances, training SFNN is still very slow compared to DNN due to the samplingprocedures: in particular, it is notoriously hard to train SFNN when the network structure is deeperand wider. In order to handle these issues, we consider the following approximation:P(yjx) =EP(h1jx)NyjW2h1+b2; 2ywNyjEP(h1jx)W2h1+b2; 2y=NyjW2W1x+b1+b2; 2y:(2)Note that the above approximation corresponds to replacing stochastic units by deterministic onessuch that their hidden activation values are same as marginal distributions of stochastic units, i.e.,SFNN can be approximated by DNN using sigmoid activation functions, say sigmoid-DNN. Whenthere exist more latent layers above the stochastic one, one has to apply similar approximations toall of them, i.e., exchanging the orders of expectations and non-linear functions, for making DNNand SFNN are equivalent. Therefore, instead of training SFNN directly, one can try transferring pre-trained parameters of sigmoid-DNN to those of the corresponding SFNN directly: train sigmoid-DNN instead of SFNN, and replace deterministic units by stochastic ones for the inference purpose.Although such a strategy looks somewhat ‘rude’, it was often observed in the literature that it rea-sonably works well for SFNN (Raiko et al., 2014) and we also evaluate it as reported in Table 1. Wealso note that similar approximations appear in the context of dropout: it trains a stochastic modelaveraging exponentially many DNNs sharing parameters, but also approximates a single DNN well.Now we investigate a similar transformation in the case when DNN uses the unbounded ReLUactivation function, say ReLU-DNN. Many recent deep networks are of ReLU-DNN type due tothe gradient vanishing problem, and their pre-trained parameters are often available. Although itis straightforward to build SFNN from sigmoid-DNN, it is less clear in this case since ReLU is3Under review as a conference paper at ICLR 2017x0 0.2 0.4 0.6 0.8 1y00.511.5Training dataSamples from sigmoid-DNN(a)x0 0.2 0.4 0.6 0.8 1y00.511.5Training dataSamples from SFNN (sigmoid activation) (b)Figure 1: The generated samples from (a) sigmoid-DNN and (b) SFNN which uses same parameterstrained by sigmoid-DNN. One can note that SFNN can model the multiple modes in outupt space yaroundx= 0:4.Inference Model Network StructureMNIST Classification Multi-modal LearningTraining NLL Training Error ( %) Test Error ( %) Test NLLsigmoid-DNN 2 hidden layers 0 0 1.54 5.290SFNN 2 hidden layers 0 0 1.56 1.564sigmoid-DNN 3 hidden layers 0.002 0.03 1.84 4.880SFNN 3 hidden layers 0.022 0.04 1.81 0.575sigmoid-DNN 4 hidden layers 0 0.01 1.74 4.850SFNN 4 hidden layers 0.003 0.03 1.73 0.392ReLU-DNN 2 hidden layers 0.005 0.04 1.49 7.492SFNN 2 hidden layers 0.819 4.50 5.73 2.678ReLU-DNN 3 hidden layers 0 0 1.43 7.526SFNN 3 hidden layers 1.174 16.14 17.83 4.468ReLU-DNN 4 hidden layers 0 0 1.49 7.572SFNN 4 hidden layers 1.213 13.13 14.64 1.470Table 1: The performance of simple parameter transformations from DNN to SFNN on the MNISTand synthetic datasets, where each layer of neural networks contains 800 and 50 hidden units fortwo datasets, respectively. 
For all experiments, the only first hidden layer of DNN is replaced bystochastic one. We report negative log-likelihood (NLL) and classification error rates.unbounded. To handle this issue, we redefine the stochastic latent units of SFNN:Ph1jx=N1Yi=1Ph1ijx; wherePh1i= 1jx= minfW1ix+b1i;1:(3)In the above, f(x) = maxfx;0gis the ReLU activation function and is some hyper-parameter. Asimple transformation can be defined similarly as the case of sigmoid-DNN via replacing determin-istic units by stochastic ones. However, to preserve the parameter information of ReLU-DNN, onehas to choose such thatfW1ix+b1i1and rescale upper parameters W2as follows:1 maxi;xfcW1ix+bb1i;W1;b1 cW1;bb1;W2;b2 cW2=;bb2:(4)Then, applying similar approximations as in (2), i.e., exchanging the orders of expectations andnon-linear functions, one can observe that ReLU-DNN and SFNN are equivalent.We evaluate the performance of the simple transformations from DNN to SFNN on the MNISTdataset (LeCun et al., 1998) and the synthetic dataset (Bishop, 1994), where the former and the latterare popular datasets used for a classification task and a multi-modal (i.e., one-to-many mappings)prediction learning, respectively. In all experiments reported in this paper, we commonly use thesoftmax and Gaussian with standard deviation of y= 0:05are used for the output probabilityon classification and regression tasks, respectively. The only first hidden layer of DNN is replacedby stochastic one, and we use 500 samples for estimating the expectations in the SFNN inference.As reported in Table 1, we observe that the simple transformation often works well for both tasks:the SFNN and sigmoid-DNN inferences (using same parameters trained by sigmoid-DNN) performsimilarly for the classification task and the former significantly outperforms for the latter for the4Under review as a conference paper at ICLR 2017multi-modal task (also see Figure 1). It might suggest some possibilities that the expensive SFNNtraining might not be not necessary, depending on the targeted learning quality. However, in case ofReLU, SFNN performs much worse than ReLU-DNN for the MNIST classification task under theparameter transformation.3 T RANSFORMATION FROM DNN TOSFNN VIASIMPLIFIED -SFNNIn this section, we propose an advanced method to utilize the pre-trained parameters of DNN fortraining SFNN. As shown in the previous section, simple parameter transformations from DNN toSFNN are not clear to work in general, in particular for activation functions other than sigmoid.Moreover, training DNN does not utilize the stochastic regularizing effect, which is an importantbenefit of SFNN. To address the issues, we design an intermediate model, called Simplified-SFNN.The proposed model is a special form of stochastic neural networks, which approximates certainSFNN by simplifying its upper latent units above stochastic ones. Then, we establish more rigorousconnections between three models: DNN !Simplified-SFNN!SFNN, which leads to an effi-cient training procedure of the stochastic models utilizing pre-trained parameters of DNN. In ourexperiments, we evaluate the strategy for various tasks and popular DNN architectures.3.1 S IMPLIFIED -SFNN OF TWO HIDDEN LAYERS AND NON -NEGATIVE ACTIVATIONFUNCTIONSFor clarity of presentation, we first introduce Simplified-SFNN with two hidden layers and non-negative activation functions, where its extensions to multiple layers and general activation functionsare presented in Appendix B. 
We also remark that we primarily describe fully-connected Simplified-SFNNs, but their convolutional versions can also be naturally defined. In Simplified-SFNN of twohidden layers, we assume that the first and second hidden layers consist of stochastic binary hiddenunits and deterministic ones, respectively. As like (3), the first layer is defined as a binary randomvector withN1units, i.e., h12f0;1gN1, drawn under the following distribution:Ph1jx=N1Yi=1Ph1ijx; wherePh1i= 1jx= min1fW1ix+b1i;1:(5)where xis the input vector, 1>0is a hyper-parameter for the first layer, and f:R!R+is somenon-negative non-linear activation function with jf0(x)j1for allx2R, e.g., ReLU and sigmoidactivation functions. Now the second layer is defined as the following deterministic vector with N2units, i.e., h2(x)2RN2:h2(x) =f2EP(h1jx)sW2jh1+b2js(0):8j2N2; (6)where2>0is a hyper-parameter for the second layer and s:R!Ris a differentiable functionwithjs00(x)j1for allx2R, e.g., sigmoid and tanh functions. In our experiments, we use thesigmoid function for s(x). Here, one can note that the proposed model also has the same computa-tional issues with SFNN in forward and backward passes due to the complex expectation. One cantrain Simplified-SFNN similarly as SFNN: we use Monte Carlo approximation for estimating theexpectation and the (biased) estimator of the gradient for approximating backpropagation inspiredby Raiko et al. (2014) (more detailed explanation is presented in Appendix A).We are interested in transferring parameters of DNN to Simplified-SFNN to utilize the trainingbenefits of DNN since the former is much faster to train than the latter. To this end, we consider thefollowing DNN of which `-th hidden layer is deterministic and defined as follows:bh`(x) =hbh`i(x) =fcW`ibh`1(x) +bb`i:i2N`i; (7)wherebh0(x) =x. As stated in the following theorem, we establish a rigorous way how to initializeparameters of Simplified-SFNN in order to transfer the knowledge stored in DNN.Theorem 1 Assume that both DNN and Simplified-SFNN with two hidden layers have same networkstructure with non-negative activation function f. Given parameters fcW`;bb`:`= 1;2gof DNNand input dataset D, choose those of Simplified-SFNN as follows:1;W1;b1 11;cW1;bb1;2;W2;b2 21s0(0);12cW2;112bb2;(8)5Under review as a conference paper at ICLR 2017InputLayer 1OutputLayer 2InputLayer 1OutputLayer 2: Stochastic layer: Stochasticity: Deterministic layer(a)Epoch0 50 100 150 200 250Test Error [%]11.522.533.5Baseline ReLU-DNNReLU-DNN* trained by ReLU-DNN*ReLU-DNN* trained by Simplified-SFNN (b)The value of γ20 1 2 3 4 5 10 50 100Knowledge Transferring Loss05101520253035# of samples = 1000 (c)Figure 2: (a) Simplified-SFNN (top) and SFNN (bottom). (b) For first 200 epochs, we train abaseline ReLU-DNN. Then, we train simplified-SFNN initialized by the DNN parameters undertransformation (8) with 2= 50 . We observe that training ReLU-DNNdirectly does not reducethe test error even when network knowledge transferring still holds between the baseline ReLU-DNN and the corresponding ReLU-DNN. (c) As the value of 2increases, knowledge transferringloss measured as1jDj1N`PxPih`i(x)bh`i(x)is decreasing.where1= maxi;x2DfcW1ix+bb1iand2>0is any positive constant. Then, it follows thath2j(x)bh2j(x)1PicW2ij+bb2j1122s0(0)2;8j;x2D:The proof of the above theorem is presented in Appendix D.1. Our proof is built upon thefirst-order Taylor expansion of non-linear function s(x). 
Theorem 1 implies that one can makeSimplified-SFNN represent the function values of DNN with bounded errors using a linear trans-formation. Furthermore, the errors can be made arbitrarily small by choosing large 2, i.e.,lim2!1h2j(x)bh2j(x)= 0;8j;x2D:Figure 2(c) shows that knowledge transferring loss de-creases as2increases on MNIST classification. Based on this, we choose 2= 50 commonly forall experiments.3.2 W HYSIMPLIFIED -SFNN ?Given a Simplified-SFNN model, the corresponding SFNN can be naturally defined by taking out theexpectation in (6). As illustrated in Figure 2(a), the main difference between SFNN and Simplified-SFNN is that the randomness of the stochastic layer propagates only to its upper layer in the latter,i.e., the randomness of h1is averaged out at its upper units h2and does not propagate to h3or outputy. Hence, Simplified-SFNN is no longer a Bayesian network. This makes training Simplified-SFNNmuch easier than SFNN since random samples are not required at some layers1and consequentlythe quality of gradient estimations can also be improved, in particular for unbounded activationfunctions. Furthermore, one can use the same approximation procedure (2) to see that Simplified-SFNN approximates SFNN. However, since Simplified-SFNN still maintains binary random units,it uses approximation steps later, in comparison with DNN. In summary, Simplified-SFNN is anintermediate model between DNN and SFNN, i.e., DNN !Simplified-SFNN!SFNN.The above connection naturally suggests the following training procedure for both SFNN andSimplified-SFNN: train a baseline DNN first and then fine-tune its corresponding Simplified-SFNNinitialized by the transformed DNN parameters. Finally, the fine-tuned parameters can be used forSFNN as well. We evaluate the strategy for the MNIST classification, which is reported in Table 2(see Appendix C for more detailed experiment setups). 
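To make the model being fine-tuned here concrete, below is a minimal numpy sketch (ours) of the Simplified-SFNN forward pass of equations (5) and (6), with the expectation over the binary first layer estimated by Monte Carlo as in Appendix A; f is taken to be ReLU and s the sigmoid, as in the experiments, while the dimensions, alpha values, and weights are placeholders rather than parameters transferred from a trained DNN.

```python
import numpy as np

rng = np.random.default_rng(1)
D, N1, N2, M = 784, 800, 800, 20              # dims and M = 20 MC samples as in Appendix A
alpha1, alpha2 = 1.0, 50.0                    # layer hyper-parameters; placeholder values
W1, b1 = 0.01 * rng.normal(size=(N1, D)), np.zeros(N1)
W2, b2 = 0.01 * rng.normal(size=(N2, N1)), np.zeros(N2)

relu = lambda z: np.maximum(z, 0.0)           # f: non-negative activation
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))  # s: differentiable squashing function

def simplified_sfnn_h2(x):
    # Eq. (5): stochastic first layer, P(h1_i = 1 | x) = min(alpha1 * f(W1_i x + b1_i), 1)
    p = np.minimum(alpha1 * relu(W1 @ x + b1), 1.0)
    h1 = (rng.random((M, N1)) < p).astype(float)       # M binary samples of h1
    # Eq. (6): deterministic second layer built on the expectation over h1,
    #   h2_j = f( alpha2 * ( E[s(W2_j h1 + b2_j)] - s(0) ) ), expectation ~ MC average
    expect_s = sigmoid(h1 @ W2.T + b2).mean(axis=0)
    return relu(alpha2 * (expect_s - sigmoid(0.0)))

print(simplified_sfnn_h2(rng.normal(size=D)).shape)    # (800,)
```

Note that only the layer immediately above the stochastic units involves an expectation; the randomness of h1 does not propagate further, which is what makes this model easier to train than the full SFNN.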
We found that SFNN under the two-stagetraining always performs better than SFNN under a simple transformation (4) from ReLU-DNN.1For example, if one replaces the first feature maps in the fifth residual unit of Pre-ResNet having 164layers (He et al., 2016) by stochastic ones, then the corresponding DNN, Simplified-SFNN and SFNN took 1mins 35 secs, 2 mins 52 secs and 16 mins 26 secs per each training epoch, respectively, on our machine withone Intel CPU (Core i7-5820K 6-Core@3.3GHz) and one NVIDIA GPU (GTX Titan X, 3072 CUDA cores).Here, we trained both stochastic models using the biased estimator (Raiko et al., 2014) with 10 random sampleson CIFAR-10 dataset.6Under review as a conference paper at ICLR 2017Inference Model Training Model Network Structure without BN & DO with BN with DOsigmoid-DNN sigmoid-DNN 2 hidden layers 1.54 1.57 1.25SFNN sigmoid-DNN 2 hidden layers 1.56 2.23 1.27Simplified-SFNN fine-tuned by Simplified-SFNN 2 hidden layers 1.51 1.5 1.11sigmoid-DNNfine-tuned by Simplified-SFNN 2 hidden layers 1.48 (0.06) 1.48 (0.09) 1.14 (0.11)SFNN fine-tuned by Simplified-SFNN 2 hidden layers 1.51 1.57 1.11ReLU-DNN ReLU-DNN 2 hidden layers 1.49 1.25 1.12SFNN ReLU-DNN 2 hidden layers 5.73 3.47 1.74Simplified-SFNN fine-tuned by Simplified-SFNN 2 hidden layers 1.41 1.17 1.06ReLU-DNNfine-tuned by Simplified-SFNN 2 hidden layers 1.32 (0.17) 1.16 (0.09) 1.05 (0.07)SFNN fine-tuned by Simplified-SFNN 2 hidden layers 2.63 1.34 1.51ReLU-DNN ReLU-DNN 3 hidden layers 1.43 1.34 1.24SFNN ReLU-DNN 3 hidden layers 17.83 4.15 1.49Simplified-SFNN fine-tuned by Simplified-SFNN 3 hidden layers 1.28 1.25 1.04ReLU-DNNfine-tuned by Simplified-SFNN 3 hidden layers 1.27 (0.16) 1.24 (0.1) 1.03 (0.21)SFNN fine-tuned by Simplified-SFNN 3 hidden layers 1.56 1.82 1.16ReLU-DNN ReLU-DNN 4 hidden layers 1.49 1.34 1.30SFNN ReLU-DNN 4 hidden layers 14.64 3.85 2.17Simplified-SFNN fine-tuned by Simplified-SFNN 4 hidden layers 1.32 1.32 1.25ReLU-DNNfine-tuned by Simplified-SFNN 4 hidden layers 1.29 (0.2) 1.29 (0.05) 1.25 (0.05)SFNN fine-tuned by Simplified-SFNN 4 hidden layers 3.44 1.89 1.56Table 2: Classification test error rates [ %] on MNIST, where each layer of neural networks contains800 hidden units. All Simplified-SFNNs are constructed by replacing the first hidden layer of a base-line DNN with stochastic hidden layer. We also consider training DNN and fine-tuning Simplified-SFNN using batch normalization (BN) and dropout (DO). The performance improvements beyondbaseline DNN (due to fine-tuning DNN parameters under Simplified-SFNN) are calculated in thebracket.More interestingly, Simplified-SFNN consistently outperforms its baseline DNN due to the stochas-tic regularizing effect, even when we train both models using dropout (Hinton et al., 2012b) andbatch normalization (Ioffe & Szegedy, 2015). 
In order to confirm the regularization effects, one canagain approximate a trained Simplified-SFNN by a new deterministic DNN which we call DNNand is different from its baseline DNN under the following approximation at upper latent units abovebinary random units:EP(h`jx)sW`+1jh`wsEP(h`jx)W`+1jh`=s XiW`+1ijPh`i= 1jx!:(9)We found that DNNusing fined-tuned parameters of Simplified-SFNN also outperforms the base-line DNN as shown in Table 2 and Figure 2(b).3.3 E XPERIMENTAL RESULTS ON MULTI -MODAL LEARNING AND CONVOLUTIONALNETWORKSWe present several experimental results for both multi-modal and classification tasks on MNIST(LeCun et al., 1998), Toronto Face Database (TFD) (Susskind et al., 2010), CIFAR-10, CIFAR-100(Krizhevsky & Hinton, 2009) and SVHN (Netzer et al., 2011). Here, we present some key resultsdue to the space constraints and more detailed explanations for our experiment setups are presentedin Appendix C.We first verify that it is possible to learn one-to-many mapping via Simplified-SFNN on the TFDand MNIST datasets, where the former and the latter are used to predict multiple facial expressionsfrom the mean of face images per individual and the lower half of the MNIST digit given the upperhalf, respectively. We remark that both tasks are commonly performed in recent other works totest the multi-modal learning using SFNN (Raiko et al., 2014; Gu et al., 2015). In all experiments,we first train a baseline DNN, and the trained parameters of DNN are used for further fine-tuningthose of Simplified-SFNN. As shown in Table 3 and Figure 3, stochastic models outperform theirbaseline DNN, and generate different digits for the case of ambiguous inputs (while DNN doesnot). We also evaluate the regularization effect of Simplified-SFNN for the classification tasks onCIFAR-10, CIFAR-100 and SVHN. Table 4 reports the classification error rates using convolutionalneural networks such as Lenet-5 (LeCun et al., 1998), NIN (Lin et al., 2014) and WRN (Zagoruyko& Komodakis, 2016). Due to the regularization effects, Simplified-SFNNs consistently outperform7Under review as a conference paper at ICLR 2017Inference Model Training ModelMNIST-half TFD2 hidden layers 3 hidden layers 2 hidden layers 3 hidden layerssigmoid-DNN sigmoid-DNN 1.409 1.720 -0.064 0.005SFNN sigmoid-DNN 0.644 1.076 -0.461 -0.401Simplified-SFNN fine-tuned by Simplified-SFNN 1.474 1.757 -0.071 -0.028SFNN fine-tuned by Simplified-SFNN 0.619 0.991 -0.509 -0.423ReLU-DNN ReLU-DNN 1.747 1.741 1.271 1.232SFNN ReLU-DNN -1.019 -1.021 0.823 1.121Simplified-SFNN fine-tuned by Simplified-SFNN 2.122 2.226 0.175 0.343SFNN fine-tuned by Simplified-SFNN -1.290 -1.061 -0.380 -0.193Table 3: Test negative log-likelihood (NLL) on MNIST-half and TFD datasets, where each layer ofneural networks contains 200 hidden units. All Simplified-SFNNs are constructed by replacing thefirst hidden layer of a baseline DNN with stochastic hidden layer.Figure 3: Generated samples for predicting the lower half of the MNIST digit given the upper half.The original digits and the corresponding inputs (first). The generated samples from sigmoid-DNN(second), SFNN under the simple transformation (third), and SFNN fine-tuned by Simplified-SFNN(forth). 
We observed that SFNN fine-tuned by Simplified-SFNN can generate more various samplesfrom same inputs, e.g., 3 and 8, better than SFNN under the simple transformation.InferencemodelTraining Model CIFAR-10 CIFAR-100 SVHNLenet-5 Lenet-5 37.67 77.26 11.18Lenet-5Simplified-SFNN 33.58 73.02 9.88NIN NIN 9.51 32.66 3.21NINSimplified-SFNN 9.33 30.81 3.01WRN WRN 4.22 (4.39)y20.30 (20.04)y 3.25yWRN Simplified-SFNN(one stochastic layer)4.21y 19.98y 3.09yWRN Simplified-SFNN(two stochastic layers)4.14y 19.72y 3.06yTable 4: Test error rates [ %] on CIFAR-10, CIFAR-100 andSVHN. The error rates for WRN are from our experiments,where original ones reported in (Zagoruyko & Komodakis,2016) are in the brackets. Results with yare obtained usingthe horizontal flipping and random cropping augmentation.Epoch0 50 100 150 200Test Error [%]19.52020.52121.522WRN* trained by Simplified-SFNN (one stochastic layer)WRN* trained by Simplified-SFNN (two stochastic layers)Bseline WRNFigure 4: Test errors of WRNper eachtraining epoch on CIFAR-100.their baseline DNNs. For example, WRNoutperforms WRN by 0.08 %on CIFAR-10 and 0.58 %onCIFAR-100. We expect that if one introduces more stochastic layers, the error would be decreasedmore (see Figure 4), but it increases the fine-tuning time-complexity of Simplified-SFNN.4 C ONCLUSIONIn order to develop an efficient training method for large-scale SFNN, this paper proposes a newintermediate stochastic model, called Simplified-SFNN. We establish the connection between threemodels, i.e., DNN !Simplified-SFNN!SFNN, which naturally leads to an efficient trainingprocedure of the stochastic models utilizing pre-trained parameters of DNN. This connection natu-rally leads an efficient training procedure of the stochastic models utilizing pre-trained parametersand architectures of DNN. We believe that our work brings a new important direction for trainingstochastic neural networks, which should be of broader interest in many related applications.8Under review as a conference paper at ICLR 2017REFERENCESYoshua Bengio, Nicholas L ́eonard, and Aaron Courville. Estimating or propagating gradients through stochas-tic neurons for conditional computation. arXiv preprint arXiv:1308.3432 , 2013.Christopher M Bishop. Mixture density networks. 1994.Tianqi Chen, Ian Goodfellow, and Jonathon Shlens. Net2net: Accelerating learning via knowledge transfer.arXiv preprint arXiv:1511.05641 , 2015.Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, AaronCourville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Process-ing Systems (NIPS) , 2014.Shixiang Gu, Sergey Levine, Ilya Sutskever, and Andriy Mnih. Muprop: Unbiased backpropagation for stochas-tic neural networks. arXiv preprint arXiv:1511.05176 , 2015.Caglar Gulcehre, Marcin Moczulski, Misha Denil, and Yoshua Bengio. Noisy activation functions. arXivpreprint arXiv:1603.00391 , 2016.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXivpreprint arXiv:1603.05027 , 2016.Geoffrey E Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Se-nior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modelingin speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine , 2012a.Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 
Improvingneural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580 , 2012b.Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducinginternal covariate shift. International Conference on Machine Learning (ICML) , 2015.Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980 , 2014.Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 , 2013.Alex Krizhevsky and Geoffrey E Hinton. Learning multiple layers of features from tiny images. Master’sthesis, Department of Computer Science, University of Toronto , 2009.Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neuralnetworks. In Advances in Neural Information Processing Systems (NIPS) , 2012.Yann LeCun, L ́eon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to documentrecognition. Proceedings of the IEEE , 1998.Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. International Conference on Learning Repre-sentations (ICLR) , 2014.Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In Interna-tional Conference on Machine Learning (ICML) , 2010.Radford M Neal. Learning stochastic feedforward networks. Department of Computer Science, University ofToronto , 1990.Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits innatural images with unsupervised feature learning. NIPS Workshop on Deep Learning and UnsupervisedFeature Learning , 2011.Tapani Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh. Techniques for learning binary stochasticfeedforward neural networks. arXiv preprint arXiv:1406.2989 , 2014.Herbert Robbins and Sutton Monro. A stochastic approximation method. The annals of mathematical statistics ,1951.David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning internal representations by errorpropagation. Technical report, MIT Press, 1988.9Under review as a conference paper at ICLR 2017Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to accelerate train-ing of deep neural networks. arXiv preprint arXiv:1602.07868 , 2016.Lawrence K Saul, Tommi Jaakkola, and Michael I Jordan. Mean field theory for sigmoid belief networks.Journal of artificial intelligence research , 1996.Josh M Susskind, Adam K Anderson, and Geoffrey E Hinton. The toronto face database. Department ofComputer Science, University of Toronto, Toronto, ON, Canada, Tech. Rep , 2010.Yichuan Tang and Ruslan R Salakhutdinov. Learning stochastic feedforward neural networks. In Advances inNeural Information Processing Systems (NIPS) , 2013.Li Wan, Matthew Zeiler, Sixin Zhang, Yann L Cun, and Rob Fergus. Regularization of neural networks usingdropconnect. In International Conference on Machine Learning (ICML) , 2013.Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146 , 2016.Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. arXiv preprintarXiv:1505.00521 , 2015.A T RAINING SIMPLIFIED -SFNNThe parameters of Simplified-SFNN can be learned using a variant of the backpropagation algorithm(Rumelhart et al., 1988) in a similar manner to DNN. 
However, in contrast to DNN, there are twocomputational issues for simplified-SFNN: computing expectations with respect to stochastic unitsin forward pass and computing gradients in back pass. One can notice that both are intractable sincethey require summations over all possible configurations of all stochastic units. First, in order tohandle the issue in forward pass, we use the following Monte Carlo approximation for estimatingthe expectation:EP(h1jx)sW2jh1+b2jw1MMXm=1sW2jh(m)+b2j; h(m)Ph1jx;whereMis the number of samples. This random estimator is unbiased and has relatively lowvariance (Tang & Salakhutdinov, 2013) since its accuracy does not depend on the dimensionality ofh1and one can draw samples from the exact distribution. Next, in order to handle the issue in backpass, we use the following approximation inspired by (Raiko et al., 2014):@@W2jEP(h1jx)sW2jh1+b2jw1MXm@@W2jsW2jh(m)+b2j;@@W1iEP(h1jx)sW2jh1+b2jwW2ijMXms0W2jh(m)+b2j@@W1iPh1i= 1jx;where h(m)Ph1jxandMis the number of samples. In our experiments, we commonlychooseM= 20 .B E XTENSIONS OF SIMPLIFIED -SFNNIn this section, we describe how the network knowledge transferring between Simplified-SFNN andDNN, i.e., Theorem 1, generalizes to multiple layers and general activation functions.B.1 E XTENSION TO MULTIPLE LAYERSA deeper Simplified-SFNN with Lhidden layers can be defined similarly as the case of L= 2. Wealso establish network knowledge transferring between Simplified-SFNN and DNN with Lhiddenlayers as stated in the following theorem. Here, we assume that stochastic layers are not consecutivefor simpler presentation, but the theorem is generalizable for consecutive stochastic layers.10Under review as a conference paper at ICLR 2017Theorem 2 Assume that both DNN and Simplified-SFNN with Lhidden layers have same networkstructure with non-negative activation function f. Given parameters fcW`;bb`:`= 1;:::;LgofDNN and input dataset D, choose the same ones for Simplified-SFNN initially and modify them foreach`-th stochastic layer and its upper layer as follows:` 1`;`+1;W`+1;b`+1 ``+1s0(0);cW`+1`+1;bb`+1``+1!;where`= maxi;x2DfcW`ih`1(x) +bb`iand`+1is any positive constant. Then, it follows thatlim`+1!18stochastic hidden layer `hLj(x)bhLj(x)= 0;8j;x2D:The above theorem again implies that it is possible to transfer knowledge from DNN to Simplified-SFNN by choosing large l+1. The proof of Theorem 2 is similar to that of Theorem 1 and given inAppendix D.2.B.2 E XTENSION TO GENERAL ACTIVATION FUNCTIONSIn this section, we describe an extended version of Simplified-SFNN which can utilize any activationfunction. To this end, we modify the definitions of stochastic layers and their upper layers byintroducing certain additional terms. If the `-th hidden layer is stochastic, then we slightly modifythe original definition (5) as follows:Ph`jx=N`Yi=1Ph`ijxwithPh`i= 1jx= min`fW1ix+b1i+12;1;wheref:R!Ris a non-linear (possibly, negative) activation function with jf0(x)j1for allx2R. In addition, we re-define its upper layer as follows:h`+1(x) ="f `+1 EP(h`jx)sW`+1jh`+b`+1js(0)s0(0)2XiW`+1ij!!:8j#;where h0(x) =xands:R!Ris a differentiable function with js00(x)j1for allx2R.Under this general Simplified-SFNN model, we also show that transferring network knowledge fromDNN to Simplified-SFNN is possible as stated in the following theorem. 
Here, we again assumethat stochastic layers are not consecutive for simpler presentation.Theorem 3 Assume that both DNN and Simplified-SFNN with Lhidden layers have same networkstructure with non-linear activation function f. Given parameters fcW`;bb`:`= 1;:::;Lgof DNNand input dataset D, choose the same ones for Simplified-SFNN initially and modify them for each`-th stochastic layer and its upper layer as follows:` 12`;`+1;W`+1;b`+1 2``+1s0(0);cW`+1`+1;bb`+12``+1!;where`= maxi;x2DfcW`ih`1(x) +bb`i, and`+1is any positive constant. Then, it follows thatlim`+1!18stochastic hidden layer `hLj(x)bhLj(x)= 0;8j;x2D:We omit the proof of the above theorem since it is somewhat direct adaptation of that of Theorem 2.C E XPERIMENTAL SETUPSIn this section, we describe detailed explanation about all the experiments described in Section 3.In all experiments, the softmax and Gaussian with the standard deviation of 0.05 are used as theoutput probability for the classification task and the multi-modal prediction, respectively. The losswas minimized using ADAM learning rule (Kingma & Ba, 2014) with a mini-batch size of 128. Weused an exponentially decaying learning rate.11Under review as a conference paper at ICLR 2017C.1 C LASSIFICATION ON MNISTThe MNIST dataset consists of 2828pixel greyscale images, each containing a digit 0 to 9 with60,000 training and 10,000 test images. For this experiment, we do not use any data augmentationor pre-processing. Hyper-parameters are tuned on the validation set consisting of the last 10,000training images. All Simplified-SFNNs are constructed by replacing the first hidden layer of abaseline DNN with stochastic hidden layer. As described in Section 3.2, we train Simplified-SFNNsunder the two-stage procedure: first train a baseline DNN for first 200 epochs, and the trainedparameters of DNN are used for initializing those of Simplified-SFNN. For 50 epochs, we trainsimplified-SFNN. We choose the hyper-parameter 2= 50 in the parameter transformation. AllSimplified-SFNNs are trained with M= 20 samples at each epoch, and in the test, we use 500samples.C.2 M ULTI -MODAL REGRESSION ON TFD AND MNISTThe Toronto Face Database (TFD) (Susskind et al., 2010) dataset consists of 4848pixel greyscaleimages, each containing a face image of 900 individuals with 7 different expressions. Similar to(Raiko et al., 2014), we use 124 individuals with at least 10 facial expressions as data. We randomlychoose 100 individuals with 1403 images for training and the remaining 24 individuals with 326images for the test. We take the mean of face images per individual as the input and set the outputas the different expressions of the same individual. The MNIST dataset consists of 2828pixelgreyscale images, each containing a digit 0 to 9 with 60,000 training and 10,000 test images. Forthis experiments, each pixel of every digit images is binarized using its grey-scale value. We take theupper half of the MNIST digit as the input and set the output as the lower half of it. All Simplified-SFNNs are constructed by replacing the first hidden layer of a baseline DNN with stochastic hiddenlayer. We train Simplified-SFNNs with M= 20 samples at each epoch, and in the test, we use 500samples. We use 200 hidden units for each layer of neural networks in two experiments. Learningrate is chosen from f0.005 , 0.002, 0.001, ... , 0.0001 g, and the best result is reported for both tasks.C.3 C LASSIFICATION ON CIFAR-10, CIFAR-100 AND SVHNThe CIFAR-10 and CIFAR-100 datasets consist of 50,000 training and 10,000 test images. 
TheSVHN dataset consists of 73,257 training and 26,032 test images.2We pre-process the data usingglobal contrast normalization and ZCA whitening. For these datasets, we design a convolutionalversion of Simplified-SFNN. In a similar manner to the case of fully-connected networks, one candefine a stochastic convolution layer, which considers the input feature map as a binary random ma-trix and generates the output feature map as defined in (6). All Simplified-SFNNs are constructed byreplacing a hidden feature map of a baseline models, i.e., Lenet-5, NIN and WRN, with stochasticone as shown in Figure 5(d). We use WRN with 16 and 28 layers for SVHN and CIFAR datasets, re-spectively, since they showed state-of-the-art performance as reported by Zagoruyko & Komodakis(2016). In case of WRN, we introduce up to two stochastic convolution layers.For 100 epochs, wefirst train baseline models, i.e., Lenet-5, NIN and WRN, and trained parameters are used for ini-tializing those of Simplified-SFNNs. All Simplified-SFNNs are trained with M= 5 samples andthe test error is only measured by the approximation (9). The test errors of baseline models aremeasured after training them for 200 epochs similar to Zagoruyko & Komodakis (2016).D P ROOFS OF THEOREMSD.1 P ROOF OF THEOREM 1First consider the first hidden layer, i.e., stochastic layer. Let 1= maxi;x2DfcW1ix+bb1ibethe maximum value of hidden units in DNN. If we initialize the parameters1;W1;b1 11;cW1;bb1, then the marginal distribution of each hidden unit ibecomesPh1i= 1jx;W1;b1=min1fcW1ix+bb1i;1=11fcW1ix+bb1i;8i;x2D:(10)2We do not use the extra SVHN dataset for training.12Under review as a conference paper at ICLR 2017[Convolution (Conv.)] [Fully -connected] [Fully -connected] [Fully -connected] [Max pool] [Stochastic (Stoc .) Conv. ] [Max pool]6 feature maps (f. maps)Input Output6 Stochastic (Stoc .) f. maps84 units16 f. maps16 f. maps120 unitsA(a)[Conv.] [Conv.] [Conv.] [Max pool] [Conv.] [Conv.] [Conv.] [Stoc. Conv. ][Avg pool] [Avg pool]160f. maps96f. maps192f. maps192f. maps192Stoc. f. maps10f. mapsOutput Input192f. maps96f. maps192f. maps192f. mapsA[Conv.] [Conv.]192f. maps(b)InputA16f. maps[Conv.]64∗2uu−1f. maps[Conv.]64∗2uu−1,f. maps64∗2uu−1f. maps64∗2uu−1256f. mapsOutputStoc. f. maps[Conv.] [Conv.][Stoc. Conv. ][Avg pool] [Fully -connected][Conv.]eeeeeeee×3(uu=1,2,3)iiii(vv′≤3&uu=3)64∗2uu−1Stoc. f. maps[Conv.]eeeeeeeeiiii(vv′≤2&uu=3)[Stoc. Conv. ](c)InputA16f. maps[Conv.]160∗2uu−1f. maps160∗2uu−1f. maps[Conv.]160∗2uu−1,f. maps160∗2uu−1f. maps160∗2uu−1640f. mapsOutputStoc. f. maps[Conv.] [Conv.] [Conv.][Stoc. Conv. ][Avg pool] [Fully -connected][Conv.]eeeeeeee×3(vv=1,2,3)×3(uu=1,2,3)iiii(vv≥vv′&uu=3)(d)Figure 5: The overall structures of (a) Lenet-5, (b) NIN, (c) WRN with 16 layers, and (d) WRN with28 layers. The red feature maps correspond to the stochastic ones. In case of WRN, we introduceone (v0= 3) and two (v0= 2) stochastic feature maps.Next consider the second hidden layer. From Taylor’s theorem, there exists a value zbetween 0andxsuch thats(x) =s(0) +s0(0)x+R(x), whereR(x) =s00(z)x22!. Since we consider a binaryrandom vector, i.e., h12f0;1gN1, one can writeEP(h1jx)sjh1=Xh1s(0) +s0(0)jh1+Rjh1Ph1jx=s(0) +s0(0) XiW2ijP(h1i= 1jx) +b2j!+EP(h1jx)R(j(h1));wherejh1:=W2jh1+b2jis the incoming signal. 
From (6) and (10), for every hidden unit j, itfollows thath2jx;W2;b2=f 2 s0(0) 11XiW2ijbh1i(x) +b2j!+EP(h1jx)Rjh1!!:13Under review as a conference paper at ICLR 2017Since we assume that jf0(x)j1, the following inequality holds:h2j(x;W2;b2)f 2s0(0) 11XiW2ijbh1i(x) +b2j!!2EP(h1jx)R(j(h1))22EP(h1jx)hW2jh1+b2j2i;where we usejs00(z)j<1for the last inequality. Therefore, it follows thath2jx;W2;b2bh2jx;cW2;bb21PicW2ij+bb2j1122s0(0)2;8j;since we set2;W2;b2 21s0(0);cW22;112bb2. This completes the proof of Theorem 1.D.2 P ROOF OF THEOREM 2For the proof of Theorem 2, we first state the two key lemmas on error propagation in Simplified-SFNN.Lemma 4 Assume that there exists some positive constant Bsuch thath`1i(x)bh`1i(x)B;8i;x2D;and the`-th hidden layer of NCSFNN is standard deterministic layer as defined in (7). Given pa-rametersfcW`;bb`gof DNN, choose same ones for NCSFNN. Then, the following inequality holds:h`j(x)bh`j(x)BN`1cW`max;8j;x2D:wherecW`max= maxijcW`ij.Proof. See Appendix D.3. Lemma 5 Assume that there exists some positive constant Bsuch thath`1i(x)bh`1i(x)B;8i;x2D;and the`-th hidden layer of simplified-SFNN is stochastic layer. Given parametersfcW`;cW`+1;bb`;bb`+1gof DNN, choose those of Simplified-SFNN as follows:` 1`;`+1;W`+1;b`+1 ``+1s0(0);cW`+1`+1;bb`+1``+1!;where`= maxj;x2DfcW`jh`1(x) +bb`jand`+1is any positive constant. Then, it follows thath`+1k(x)bh`+1k(x)BN`1N`cW`maxcW`+1max+`N`cW`+1max+bb`+1max1`22s0(0)`+1;8k;x2D;wherebb`max= maxjbb`jandcW`max= maxijcW`ij.Proof. See Appendix D.4. Assume that `-th layer is first stochastic hidden layer in Simplified-SFNN. Then, from Theorem 1,we haveh`+1j(x)bh`+1j(x)`N`cW`+1max+bb`+1max1`22s0(0)`+1;8j;x2D: (11)14Under review as a conference paper at ICLR 2017According to Lemma 4 and 5, the final error generated by the right hand side of (11) is bounded by``N`cW`+1max+bb`+1max1`22s0(0)`+1; (12)where`=LQ`0=l+2N`01cW`0max:One can note that every error generated by each stochastic layeris bounded by (12). Therefore, it follows thathLj(x)bhLj(x)X`:stochastic hidden layer0B@``N`cW`+1max+bb`+1max1`22s0(0)`+11CA;8j;x2D:From above inequality, we can conclude thatlim`+1!18stochastic hidden layer `hLj(x)bhLj(x)= 0;8j;x2D:This completes the proof of Theorem 2.D.3 P ROOF OF LEMMA 4From assumption, there exists some constant isuch thatjij<B andh`1i(x) =bh`1i(x) +i;8i;x:By definition of standard deterministic layer, it follows thath`j(x) =f XicW`ijh`1i(x) +bb`1j!=f XicW`ijbh`1i(x) +XicW`iji+bb`j!:Since we assume that jf0(x)j1, one can conclude thath`j(x)f XicW`ijbh`1i(x) +bb`j!XicW`ijiBXicW`ijBN`1cW`max:This completes the proof of Lemma 4.D.4 P ROOF OF LEMMA 5From assumption, there exists some constant `1isuch that`1i<B andh`1i(x) =bh`1i(x) +`1i;8i;x: (13)Let`= maxj;x2DfcW`jh`1(x) +bb`jbe the maximum value of hidden units. If we initialize theparameters`;W`;b` 1`;cW`;bb`, then the marginal distribution becomesPh`j= 1jx;W`;b`= min`fcW`jh`1(x) +bb`j;1=1`fcW`jh`1(x) +bb`j;8j;x:From (13), it follows thatPh`j= 1jx;W`;b`=1`f cW`jbh`1(x) +XicW`ij`1i+bb`j!;8j;x:Similar to Lemma 4, there exists some constant `jsuch that`j<BN`1cW`maxandPh`j= 1jx;W`;b`=1`bh`j(x) +`j;8j;x: (14)15Under review as a conference paper at ICLR 2017Next, consider the upper hidden layer of stochastic layer. From Taylor’s theorem, there exists avaluezbetween 0andtsuch thats(x) =s(0) +s0(0)x+R(x), whereR(x) =s00(z)x22!. 
Since weconsider a binary random vector, i.e., h`2f0;1gN`, one can writeEP(h`jx)[s(k(h`))] =Xh`s(0) +s0(0)k(h`) +Rk(h`)P(h`jx)=s(0) +s0(0)0@XjW`+1jkP(h`j= 1jx) +b`+1k1A+Xh`R(k(h`))P(h`jx);wherek(h`) =W`+1kh`+b`+1kis the incoming signal. From (14) and above equation, for everyhidden unitk, we haveh`+1k(x;W`+1;b`+1)=f0@`+10@s0(0)0@1`0@XjW`+1jkbh`j(x) +XjW`+1jk`j1A+b`+1k1A+EP(h`jx)R(k(h`))1A1A:Since we assume that jf0(x)j<1, the following inequality holds:h`+1k(x;W`+1;b`+1)f0@`+1s0(0)0@1`XjW`+1ijbh`j(x) +b`+1j1A1A`+1s0(0)`XjW`+1jk`j+`+1EP(h`jx)R(k(h`))`+1s0(0)`XjW`+1jk`j+`+12EP(h`jx)hW`+1kh`+b`+1k2i; (15)where we usejs00(z)j<1for the last inequality. Therefore, it follows thath`+1k(x)bh`+1k(x)BN`1N`cW`maxcW`+1max+`N`cW`+1max+bb`+1max1`22s0(0)`+1;since we set`+1;W`+1;b`+1 `+1`s0(0);cW`+1`+1;1`bb`+1`+1. This completes the proof ofLemma 5.16
S1h7tgZEx
B1akgy9xx
ICLR.cc/2017/conference/-/paper157/official/review
{"title": "interesting connection between DNN and simplified SFNN but its practical significance is unknown", "rating": "6: Marginally above acceptance threshold", "review": "This paper builds connections between DNN, simplified stochastic neural network (SFNN) and SFNN and proposes to use DNN as the initialization model for simplified SFNN. The authors evaluated their model on several small tasks with positive results.\n\nThe connection between different models is interesting. I think the connection between sigmoid DNN and Simplified SFNN is the same as mean-field approximation that has been known for decades. However, the connection between ReLU DNN and simplified SFNN is novel.\n\nMy main concern is whether the proposed approach is useful when attacking real tasks with large training set. For tasks with small training set I can see that stochastic units would help generalize well.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Making Stochastic Neural Networks from Deterministic Ones
["Kimin Lee", "Jaehyung Kim", "Song Chong", "Jinwoo Shin"]
It has been believed that stochastic feedforward neural networks (SFNN) have several advantages beyond deterministic deep neural networks (DNN): they have more expressive power allowing multi-modal mappings and regularize better due to their stochastic nature. However, training SFNN is notoriously harder. In this paper, we aim at developing efficient training methods for large-scale SFNN, in particular using known architectures and pre-trained parameters of DNN. To this end, we propose a new intermediate stochastic model, called Simplified-SFNN, which can be built upon any baseline DNN and approximates certain SFNN by simplifying its upper latent units above stochastic ones. The main novelty of our approach is in establishing the connection between three models, i.e., DNN -> Simplified-SFNN -> SFNN, which naturally leads to an efficient training procedure of the stochastic models utilizing pre-trained parameters of DNN. Using several popular DNNs, we show how they can be effectively transferred to the corresponding stochastic models for both multi-modal and classification tasks on MNIST, TFD, CIFAR-10, CIFAR-100 and SVHN datasets. In particular, our stochastic model built from the wide residual network has 28 layers and 36 million parameters, where the former consistently outperforms the latter for the classification tasks on CIFAR-10 and CIFAR-100 due to its stochastic regularizing effect.
["Deep learning", "Multi-modal learning", "Structured prediction"]
https://openreview.net/forum?id=B1akgy9xx
https://openreview.net/pdf?id=B1akgy9xx
https://openreview.net/forum?id=B1akgy9xx&noteId=S1h7tgZEx
Under review as a conference paper at ICLR 2017MAKING STOCHASTIC NEURAL NETWORKSFROM DETERMINISTIC ONESKimin Lee, Jaehyung Kim, Song Chong, Jinwoo ShinSchool of Electrical EngineeringKorea Advanced Institute of Science Technology, Republic of Koreafkiminlee, jaehyungkim, jinwoos g@kaist.ac.kr, songchong@kaist.eduABSTRACTIt has been believed that stochastic feedforward neural networks (SFNN) haveseveral advantages beyond deterministic deep neural networks (DNN): they havemore expressive power allowing multi-modal mappings and regularize better dueto their stochastic nature. However, training SFNN is notoriously harder. In thispaper, we aim at developing efficient training methods for large-scale SFNN, inparticular using known architectures and pre-trained parameters of DNN. To thisend, we propose a new intermediate stochastic model, called Simplified-SFNN,which can be built upon any baseline DNN and approximates certain SFNN bysimplifying its upper latent units above stochastic ones. The main novelty of ourapproach is in establishing the connection between three models, i.e., DNN !Simplified-SFNN!SFNN, which naturally leads to an efficient training pro-cedure of the stochastic models utilizing pre-trained parameters of DNN. Us-ing several popular DNNs, we show how they can be effectively transferred tothe corresponding stochastic models for both multi-modal and classification taskson MNIST, TFD, CIFAR-10, CIFAR-100 and SVHN datasets. In particular, ourstochastic model built from the wide residual network has 28 layers and 36 millionparameters, where the former consistently outperforms the latter for the classifica-tion tasks on CIFAR-10 and CIFAR-100 due to its stochastic regularizing effect.1 I NTRODUCTIONRecently, deterministic deep neural networks (DNN) have demonstrated state-of-the-art perfor-mance on many supervised tasks, e.g., speech recognition (Hinton et al., 2012a) and object recog-nition (Krizhevsky et al., 2012). One of the main components underlying these successes is on theefficient training methods for deeper and wider DNNs, which include backpropagation (Rumelhartet al., 1988), stochastic gradient descent (Robbins & Monro, 1951), dropout/dropconnect (Hintonet al., 2012b; Wan et al., 2013), batch/weight normalization (Ioffe & Szegedy, 2015; Salimans &Kingma, 2016), and various activation functions (Nair & Hinton, 2010; Gulcehre et al., 2016). Onthe other hand, stochastic feedforward neural networks (SFNN) (Neal, 1990) having random latentunits are often necessary in order to model complex stochastic natures in many real-world tasks, e.g.,structured prediction (Tang & Salakhutdinov, 2013), image generation (Goodfellow et al., 2014) andmemory networks (Zaremba & Sutskever, 2015). Furthermore, it has been believed that SFNN hasseveral advantages beyond DNN (Raiko et al., 2014): it has more expressive power for multi-modallearning and regularizes better for large-scale learning.Training large-scale SFNN is notoriously hard since backpropagation is not directly applicable. Cer-tain stochastic neural networks using continuous random units are known to be trainable efficientlyusing backpropagation under the variational techniques and the reparameterization tricks (Kingma& Welling, 2013). On the other hand, training SFNN having discrete, i.e., binary or multi-modal,random units is more difficult since intractable probabilistic inference is involved requiring too manyrandom samples. 
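To make the contrast concrete, here is a minimal PyTorch sketch with toy tensors (an added illustration, not code from the paper): a continuous latent unit can be rewritten, via the reparameterization trick, as a differentiable function of its parameters and an external noise source, whereas a binary sample drawn from a Bernoulli distribution offers no gradient path of its own.

    import torch

    # Continuous stochastic unit: the reparameterization trick expresses the sample as a
    # deterministic, differentiable function of (mu, sigma) and external noise eps.
    mu = torch.zeros(5, requires_grad=True)
    log_sigma = torch.zeros(5, requires_grad=True)
    eps = torch.randn(5)
    z = mu + torch.exp(log_sigma) * eps
    z.sum().backward()                  # gradients reach mu and log_sigma

    # Binary stochastic unit: sampling from Bernoulli(p) is not differentiable in p,
    # so backpropagation cannot pass through the draw itself.
    logits = torch.zeros(5, requires_grad=True)
    p = torch.sigmoid(logits)
    h = torch.bernoulli(p)              # hard 0/1 samples; no useful gradient flows back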
There have been several efforts developing efficient training methods for SFNNhaving binary random latent units (Neal, 1990; Saul et al., 1996; Tang & Salakhutdinov, 2013; Ben-gio et al., 2013; Raiko et al., 2014; Gu et al., 2015) (see Section 2.1 for more details). However,training SFNN is still significantly slower than doing DNN of the same architecture, e.g., most prior1Under review as a conference paper at ICLR 2017works on this line have considered a small number (at most 5 or so) of layers in SFNN. We aim forthe same goal, but our direction is orthogonal to them.Instead of training SFNN directly, we study whether pre-trained parameters of DNN (or easier mod-els) can be transferred to it, possibly with further fine-tuning of light cost. This approach can beattractive since one can utilize recent advances in DNN on its design and training. For example,one can design the network structure of SFNN following known specialized ones of DNN and usetheir pre-trained parameters. To this end, we first try transferring pre-trained parameters of DNNusing sigmoid activation functions to those of the corresponding SFNN directly. In our experiments,the heuristic reasonably works well. For multi-modal learning, SFNN under such a simple trans-formation outperforms DNN. Even for the MNIST classification, the former performs similarly asthe latter (see Section 2 for more details). However, it is questionable whether a similar strategyworks in general, particularly for other unbounded activation functions like ReLU (Nair & Hinton,2010) since SFNN has binary, i.e., bounded, random latent units. Moreover, it lost the regularizationbenefit of SFNN: it is rather believed that transferring parameters of stochastic models to DNN helpsits regularization, but the opposite direction is unlikely possible.To address the issues, we propose a special form of stochastic neural networks, named Simplified-SFNN, which intermediates between SFNN and DNN, having the following properties. First,Simplified-SFNN can be built upon any baseline DNN, possibly having unbounded activation func-tions. The most significant part of our approach lies in providing rigorous network knowledge trans-ferring (Chen et al., 2015) between Simplified-SFNN and DNN. In particular, we prove that param-eters of DNN can be transformed to those of the corresponding Simplified-SFNN while preservingthe performance, i.e., both represent the same mapping and features. Second, Simplified-SFNN ap-proximates certain SFNN, better than DNN, by simplifying its upper latent units above stochasticones using two different non-linear activation functions. Simplified-SFNN is much easier to trainthan SFNN while utilizing its stochastic nature for regularization.The above connection DNN !Simplified-SFNN!SFNN naturally suggests the following trainingprocedure for both SFNN and Simplified-SFNN: train a baseline DNN first and then fine-tune itscorresponding Simplified-SFNN initialized by the transformed DNN parameters. The pre-trainingstage accelerates the training task since DNN is faster to train than Simplified-SFNN. In addition,one can also utilize known DNN training techniques such as dropout and batch normalization forfine-tuning Simplified-SFNN. In our experiments, we train SFNN and Simplified-SFNN under theproposed strategy. They consistently outperform the corresponding DNN for both multi-modal andclassification tasks, where the former and the latter are for measuring the model expressive powerand the regularization effect, respectively. 
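As a concrete picture of the simplest transfer mentioned above, reading the sigmoid activations of a trained DNN as Bernoulli probabilities and averaging over sampled hidden states at inference (detailed in Section 2.2), consider the following NumPy sketch; the weights are random placeholders standing in for pre-trained parameters, and the snippet is an illustration of the idea rather than the authors' code.

    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    # Placeholder parameters standing in for a pre-trained sigmoid-DNN.
    W1, b1 = rng.normal(size=(50, 10)), np.zeros(50)
    W2, b2 = rng.normal(size=(1, 50)), np.zeros(1)
    x = rng.normal(size=10)

    # Deterministic sigmoid-DNN hidden layer and output.
    h1_dnn = sigmoid(W1 @ x + b1)
    y_dnn = W2 @ h1_dnn + b2

    # Same parameters reused as an SFNN: activations become Bernoulli probabilities and
    # the output is a Monte Carlo average over sampled binary hidden states.
    h1 = (rng.random((500, 50)) < h1_dnn).astype(float)
    y_sfnn = (h1 @ W2.T + b2).mean(axis=0)

In expectation the sampled output coincides with the deterministic one, which is why the heuristic can preserve classification accuracy, while the collection of per-sample outputs forms a mixture that can capture several modes.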
To the best of our knowledge, we are the first to confirmthat SFNN indeed regularizes better than DNN. We also construct the stochastic models followingthe same network structure of popular DNNs including Lenet-5 (LeCun et al., 1998), NIN (Linet al., 2014) and WRN (Zagoruyko & Komodakis, 2016). In particular, WRN (wide residual net-work) of 28 layers and 36 million parameters has shown the state-of-art performances on CIFAR-10and CIFAR-100 classification datasets, and our stochastic models built upon WRN outperform thedeterministic WRN on the datasets.Organization. In Section 2, we focus on DNNs having sigmoid and ReLU activation functions andstudy simple transformations of their parameters to those of SFNN. In Section 3, we consider DNNshaving general activation functions and describe more advanced transformations via introducing anew model, named Simplified-SFNN.2 S IMPLE TRANSFORMATION FROM DNN TOSFNN2.1 P RELIMINARIES FOR SFNNStochastic feedforward neural network (SFNN) is a hybrid model, which has both stochastic binaryand deterministic hidden units. We first introduce SFNN with one stochastic hidden layer (andwithout deterministic hidden layers) for simplicity. Throughout this paper, we commonly denotethe bias for unit iand the weight matrix of the `-th hidden layer by b`iandW`, respectively. Then,the stochastic hidden layer in SFNN is defined as a binary random vector with N1units, i.e., h122Under review as a conference paper at ICLR 2017f0;1gN1, drawn under the following distribution:Ph1jx=N1Yi=1Ph1ijx; wherePh1i= 1jx=W1ix+b1i: (1)In the above, xis the input vector and (x) = 1=(1 +ex)is the sigmoid function. Our conditionaldistribution of the output yis defined as follows:P(yjx) =EP(h1jx)Pyjh1=EP(h1jx)NyjW2h1+b2; 2y;whereN()denotes the normal distribution with mean W2h1+b2and (fixed) variance 2y. There-fore,P(yjx)can express a very complex, multi-modal distribution since it is a mixture of expo-nentially many normal distributions. The multi-layer extension is straightforward via a combinationof stochastic and deterministic hidden layers, e.g., see Tang & Salakhutdinov (2013), Raiko et al.(2014). Furthermore, one can use any other output distributions as like DNN, e.g., softmax forclassification tasks.There are two computational issues for training SFNN: computing expectations with respect tostochastic units in forward pass and computing gradients in backward pass. One can notice that bothare computationally intractable since they require summations over exponentially many configura-tions of all stochastic units. First, in order to handle the issue in forward pass, one can use the follow-ing Monte Carlo approximation for estimating the expectation: P(yjx)w1MMPm=1P(yjh(m));where h(m)Ph1jxandMis the number of samples. This random estimator is unbiased andhas relatively low variance (Tang & Salakhutdinov, 2013) since its accuracy does not depend on thedimensionality of h1and one can draw samples from the exact distribution. Next, in order to handlethe issue in backward pass, Neal (1990) proposed a Gibbs sampling, but it is known that it oftenmixes poorly. Saul et al. (1996) proposed a variational learning based on the mean-field approxi-mation, but it has additional parameters making the variational lower bound looser. 
More recently,several other techniques have been proposed including unbiased estimators of the variational boundusing importance sampling (Tang & Salakhutdinov, 2013; Raiko et al., 2014) and biased/unbiasedestimators of the gradient for approximating backpropagation (Bengio et al., 2013; Raiko et al.,2014; Gu et al., 2015).2.2 S IMPLE TRANSFORMATION FROM SIGMOID -DNN AND RELU-DNN TOSFNNDespite the recent advances, training SFNN is still very slow compared to DNN due to the samplingprocedures: in particular, it is notoriously hard to train SFNN when the network structure is deeperand wider. In order to handle these issues, we consider the following approximation:P(yjx) =EP(h1jx)NyjW2h1+b2; 2ywNyjEP(h1jx)W2h1+b2; 2y=NyjW2W1x+b1+b2; 2y:(2)Note that the above approximation corresponds to replacing stochastic units by deterministic onessuch that their hidden activation values are same as marginal distributions of stochastic units, i.e.,SFNN can be approximated by DNN using sigmoid activation functions, say sigmoid-DNN. Whenthere exist more latent layers above the stochastic one, one has to apply similar approximations toall of them, i.e., exchanging the orders of expectations and non-linear functions, for making DNNand SFNN are equivalent. Therefore, instead of training SFNN directly, one can try transferring pre-trained parameters of sigmoid-DNN to those of the corresponding SFNN directly: train sigmoid-DNN instead of SFNN, and replace deterministic units by stochastic ones for the inference purpose.Although such a strategy looks somewhat ‘rude’, it was often observed in the literature that it rea-sonably works well for SFNN (Raiko et al., 2014) and we also evaluate it as reported in Table 1. Wealso note that similar approximations appear in the context of dropout: it trains a stochastic modelaveraging exponentially many DNNs sharing parameters, but also approximates a single DNN well.Now we investigate a similar transformation in the case when DNN uses the unbounded ReLUactivation function, say ReLU-DNN. Many recent deep networks are of ReLU-DNN type due tothe gradient vanishing problem, and their pre-trained parameters are often available. Although itis straightforward to build SFNN from sigmoid-DNN, it is less clear in this case since ReLU is3Under review as a conference paper at ICLR 2017x0 0.2 0.4 0.6 0.8 1y00.511.5Training dataSamples from sigmoid-DNN(a)x0 0.2 0.4 0.6 0.8 1y00.511.5Training dataSamples from SFNN (sigmoid activation) (b)Figure 1: The generated samples from (a) sigmoid-DNN and (b) SFNN which uses same parameterstrained by sigmoid-DNN. One can note that SFNN can model the multiple modes in outupt space yaroundx= 0:4.Inference Model Network StructureMNIST Classification Multi-modal LearningTraining NLL Training Error ( %) Test Error ( %) Test NLLsigmoid-DNN 2 hidden layers 0 0 1.54 5.290SFNN 2 hidden layers 0 0 1.56 1.564sigmoid-DNN 3 hidden layers 0.002 0.03 1.84 4.880SFNN 3 hidden layers 0.022 0.04 1.81 0.575sigmoid-DNN 4 hidden layers 0 0.01 1.74 4.850SFNN 4 hidden layers 0.003 0.03 1.73 0.392ReLU-DNN 2 hidden layers 0.005 0.04 1.49 7.492SFNN 2 hidden layers 0.819 4.50 5.73 2.678ReLU-DNN 3 hidden layers 0 0 1.43 7.526SFNN 3 hidden layers 1.174 16.14 17.83 4.468ReLU-DNN 4 hidden layers 0 0 1.49 7.572SFNN 4 hidden layers 1.213 13.13 14.64 1.470Table 1: The performance of simple parameter transformations from DNN to SFNN on the MNISTand synthetic datasets, where each layer of neural networks contains 800 and 50 hidden units fortwo datasets, respectively. 
For all experiments, the only first hidden layer of DNN is replaced bystochastic one. We report negative log-likelihood (NLL) and classification error rates.unbounded. To handle this issue, we redefine the stochastic latent units of SFNN:Ph1jx=N1Yi=1Ph1ijx; wherePh1i= 1jx= minfW1ix+b1i;1:(3)In the above, f(x) = maxfx;0gis the ReLU activation function and is some hyper-parameter. Asimple transformation can be defined similarly as the case of sigmoid-DNN via replacing determin-istic units by stochastic ones. However, to preserve the parameter information of ReLU-DNN, onehas to choose such thatfW1ix+b1i1and rescale upper parameters W2as follows:1 maxi;xfcW1ix+bb1i;W1;b1 cW1;bb1;W2;b2 cW2=;bb2:(4)Then, applying similar approximations as in (2), i.e., exchanging the orders of expectations andnon-linear functions, one can observe that ReLU-DNN and SFNN are equivalent.We evaluate the performance of the simple transformations from DNN to SFNN on the MNISTdataset (LeCun et al., 1998) and the synthetic dataset (Bishop, 1994), where the former and the latterare popular datasets used for a classification task and a multi-modal (i.e., one-to-many mappings)prediction learning, respectively. In all experiments reported in this paper, we commonly use thesoftmax and Gaussian with standard deviation of y= 0:05are used for the output probabilityon classification and regression tasks, respectively. The only first hidden layer of DNN is replacedby stochastic one, and we use 500 samples for estimating the expectations in the SFNN inference.As reported in Table 1, we observe that the simple transformation often works well for both tasks:the SFNN and sigmoid-DNN inferences (using same parameters trained by sigmoid-DNN) performsimilarly for the classification task and the former significantly outperforms for the latter for the4Under review as a conference paper at ICLR 2017multi-modal task (also see Figure 1). It might suggest some possibilities that the expensive SFNNtraining might not be not necessary, depending on the targeted learning quality. However, in case ofReLU, SFNN performs much worse than ReLU-DNN for the MNIST classification task under theparameter transformation.3 T RANSFORMATION FROM DNN TOSFNN VIASIMPLIFIED -SFNNIn this section, we propose an advanced method to utilize the pre-trained parameters of DNN fortraining SFNN. As shown in the previous section, simple parameter transformations from DNN toSFNN are not clear to work in general, in particular for activation functions other than sigmoid.Moreover, training DNN does not utilize the stochastic regularizing effect, which is an importantbenefit of SFNN. To address the issues, we design an intermediate model, called Simplified-SFNN.The proposed model is a special form of stochastic neural networks, which approximates certainSFNN by simplifying its upper latent units above stochastic ones. Then, we establish more rigorousconnections between three models: DNN !Simplified-SFNN!SFNN, which leads to an effi-cient training procedure of the stochastic models utilizing pre-trained parameters of DNN. In ourexperiments, we evaluate the strategy for various tasks and popular DNN architectures.3.1 S IMPLIFIED -SFNN OF TWO HIDDEN LAYERS AND NON -NEGATIVE ACTIVATIONFUNCTIONSFor clarity of presentation, we first introduce Simplified-SFNN with two hidden layers and non-negative activation functions, where its extensions to multiple layers and general activation functionsare presented in Appendix B. 
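Before moving on, a compact sketch of the simple ReLU transformation (3)-(4) evaluated above may be useful; the weights below are random placeholders, the normalizing constant is computed over a toy input batch, and the rescaling follows our reading of (4), so this is an illustration rather than the authors' code.

    import numpy as np

    rng = np.random.default_rng(0)
    relu = lambda z: np.maximum(z, 0.0)

    # Placeholder ReLU-DNN parameters and a toy "dataset" used to compute alpha.
    W1_hat, b1_hat = rng.normal(size=(50, 10)), np.zeros(50)
    W2_hat, b2_hat = rng.normal(size=(1, 50)), np.zeros(1)
    X = rng.normal(size=(100, 10))

    alpha = relu(X @ W1_hat.T + b1_hat).max()      # ensures f(W1 x + b1) / alpha <= 1 on the data
    x = X[0]
    p = np.minimum(relu(W1_hat @ x + b1_hat) / alpha, 1.0)   # Bernoulli probabilities, eq. (3)
    h1 = (rng.random((500, 50)) < p).astype(float)           # samples from P(h1 | x)
    W2, b2 = alpha * W2_hat, b2_hat                # rescale upper weights to keep the DNN mapping
    y_sfnn = (h1 @ W2.T + b2).mean(axis=0)         # approximates W2_hat @ relu(W1_hat @ x + b1_hat) + b2_hat

Even though the expectation is preserved, the binary units are bounded while ReLU activations are not, which is consistent with the degradation observed for the ReLU case in Table 1 and motivates the construction described in this section.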
We also remark that we primarily describe fully-connected Simplified-SFNNs, but their convolutional versions can also be naturally defined. In Simplified-SFNN of twohidden layers, we assume that the first and second hidden layers consist of stochastic binary hiddenunits and deterministic ones, respectively. As like (3), the first layer is defined as a binary randomvector withN1units, i.e., h12f0;1gN1, drawn under the following distribution:Ph1jx=N1Yi=1Ph1ijx; wherePh1i= 1jx= min1fW1ix+b1i;1:(5)where xis the input vector, 1>0is a hyper-parameter for the first layer, and f:R!R+is somenon-negative non-linear activation function with jf0(x)j1for allx2R, e.g., ReLU and sigmoidactivation functions. Now the second layer is defined as the following deterministic vector with N2units, i.e., h2(x)2RN2:h2(x) =f2EP(h1jx)sW2jh1+b2js(0):8j2N2; (6)where2>0is a hyper-parameter for the second layer and s:R!Ris a differentiable functionwithjs00(x)j1for allx2R, e.g., sigmoid and tanh functions. In our experiments, we use thesigmoid function for s(x). Here, one can note that the proposed model also has the same computa-tional issues with SFNN in forward and backward passes due to the complex expectation. One cantrain Simplified-SFNN similarly as SFNN: we use Monte Carlo approximation for estimating theexpectation and the (biased) estimator of the gradient for approximating backpropagation inspiredby Raiko et al. (2014) (more detailed explanation is presented in Appendix A).We are interested in transferring parameters of DNN to Simplified-SFNN to utilize the trainingbenefits of DNN since the former is much faster to train than the latter. To this end, we consider thefollowing DNN of which `-th hidden layer is deterministic and defined as follows:bh`(x) =hbh`i(x) =fcW`ibh`1(x) +bb`i:i2N`i; (7)wherebh0(x) =x. As stated in the following theorem, we establish a rigorous way how to initializeparameters of Simplified-SFNN in order to transfer the knowledge stored in DNN.Theorem 1 Assume that both DNN and Simplified-SFNN with two hidden layers have same networkstructure with non-negative activation function f. Given parameters fcW`;bb`:`= 1;2gof DNNand input dataset D, choose those of Simplified-SFNN as follows:1;W1;b1 11;cW1;bb1;2;W2;b2 21s0(0);12cW2;112bb2;(8)5Under review as a conference paper at ICLR 2017InputLayer 1OutputLayer 2InputLayer 1OutputLayer 2: Stochastic layer: Stochasticity: Deterministic layer(a)Epoch0 50 100 150 200 250Test Error [%]11.522.533.5Baseline ReLU-DNNReLU-DNN* trained by ReLU-DNN*ReLU-DNN* trained by Simplified-SFNN (b)The value of γ20 1 2 3 4 5 10 50 100Knowledge Transferring Loss05101520253035# of samples = 1000 (c)Figure 2: (a) Simplified-SFNN (top) and SFNN (bottom). (b) For first 200 epochs, we train abaseline ReLU-DNN. Then, we train simplified-SFNN initialized by the DNN parameters undertransformation (8) with 2= 50 . We observe that training ReLU-DNNdirectly does not reducethe test error even when network knowledge transferring still holds between the baseline ReLU-DNN and the corresponding ReLU-DNN. (c) As the value of 2increases, knowledge transferringloss measured as1jDj1N`PxPih`i(x)bh`i(x)is decreasing.where1= maxi;x2DfcW1ix+bb1iand2>0is any positive constant. Then, it follows thath2j(x)bh2j(x)1PicW2ij+bb2j1122s0(0)2;8j;x2D:The proof of the above theorem is presented in Appendix D.1. Our proof is built upon thefirst-order Taylor expansion of non-linear function s(x). 
Theorem 1 implies that one can makeSimplified-SFNN represent the function values of DNN with bounded errors using a linear trans-formation. Furthermore, the errors can be made arbitrarily small by choosing large 2, i.e.,lim2!1h2j(x)bh2j(x)= 0;8j;x2D:Figure 2(c) shows that knowledge transferring loss de-creases as2increases on MNIST classification. Based on this, we choose 2= 50 commonly forall experiments.3.2 W HYSIMPLIFIED -SFNN ?Given a Simplified-SFNN model, the corresponding SFNN can be naturally defined by taking out theexpectation in (6). As illustrated in Figure 2(a), the main difference between SFNN and Simplified-SFNN is that the randomness of the stochastic layer propagates only to its upper layer in the latter,i.e., the randomness of h1is averaged out at its upper units h2and does not propagate to h3or outputy. Hence, Simplified-SFNN is no longer a Bayesian network. This makes training Simplified-SFNNmuch easier than SFNN since random samples are not required at some layers1and consequentlythe quality of gradient estimations can also be improved, in particular for unbounded activationfunctions. Furthermore, one can use the same approximation procedure (2) to see that Simplified-SFNN approximates SFNN. However, since Simplified-SFNN still maintains binary random units,it uses approximation steps later, in comparison with DNN. In summary, Simplified-SFNN is anintermediate model between DNN and SFNN, i.e., DNN !Simplified-SFNN!SFNN.The above connection naturally suggests the following training procedure for both SFNN andSimplified-SFNN: train a baseline DNN first and then fine-tune its corresponding Simplified-SFNNinitialized by the transformed DNN parameters. Finally, the fine-tuned parameters can be used forSFNN as well. We evaluate the strategy for the MNIST classification, which is reported in Table 2(see Appendix C for more detailed experiment setups). 
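To make the construction concrete, the following NumPy sketch evaluates the stochastic first layer (5) and its deterministic upper layer (6) with a small Monte Carlo estimate of the inner expectation; the weights and the hyper-parameters gamma1 and gamma2 are placeholders (gamma2 = 50 mirrors the choice above), and the snippet is an illustration under our reading of (5)-(6), not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    relu = lambda z: np.maximum(z, 0.0)
    sigm = lambda z: 1.0 / (1.0 + np.exp(-z))      # plays the role of s(.); s(0) = 0.5

    W1, b1 = rng.normal(size=(50, 10)), np.zeros(50)
    W2, b2 = rng.normal(size=(20, 50)), np.zeros(20)
    gamma1, gamma2, M = 0.1, 50.0, 20
    x = rng.normal(size=10)

    p = np.minimum(gamma1 * relu(W1 @ x + b1), 1.0)        # stochastic first layer, eq. (5)
    h1 = (rng.random((M, 50)) < p).astype(float)           # M samples of the binary hidden units
    expect = sigm(h1 @ W2.T + b2).mean(axis=0)             # Monte Carlo estimate of E[s(W2_j h1 + b2_j)]
    h2 = relu(gamma2 * (expect - sigm(0.0)))               # deterministic second layer, eq. (6)

Only the expectation, not the individual samples, is passed upward, which is what makes the gradient estimates for this model better behaved than for a full SFNN.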
We found that SFNN under the two-stagetraining always performs better than SFNN under a simple transformation (4) from ReLU-DNN.1For example, if one replaces the first feature maps in the fifth residual unit of Pre-ResNet having 164layers (He et al., 2016) by stochastic ones, then the corresponding DNN, Simplified-SFNN and SFNN took 1mins 35 secs, 2 mins 52 secs and 16 mins 26 secs per each training epoch, respectively, on our machine withone Intel CPU (Core i7-5820K 6-Core@3.3GHz) and one NVIDIA GPU (GTX Titan X, 3072 CUDA cores).Here, we trained both stochastic models using the biased estimator (Raiko et al., 2014) with 10 random sampleson CIFAR-10 dataset.6Under review as a conference paper at ICLR 2017Inference Model Training Model Network Structure without BN & DO with BN with DOsigmoid-DNN sigmoid-DNN 2 hidden layers 1.54 1.57 1.25SFNN sigmoid-DNN 2 hidden layers 1.56 2.23 1.27Simplified-SFNN fine-tuned by Simplified-SFNN 2 hidden layers 1.51 1.5 1.11sigmoid-DNNfine-tuned by Simplified-SFNN 2 hidden layers 1.48 (0.06) 1.48 (0.09) 1.14 (0.11)SFNN fine-tuned by Simplified-SFNN 2 hidden layers 1.51 1.57 1.11ReLU-DNN ReLU-DNN 2 hidden layers 1.49 1.25 1.12SFNN ReLU-DNN 2 hidden layers 5.73 3.47 1.74Simplified-SFNN fine-tuned by Simplified-SFNN 2 hidden layers 1.41 1.17 1.06ReLU-DNNfine-tuned by Simplified-SFNN 2 hidden layers 1.32 (0.17) 1.16 (0.09) 1.05 (0.07)SFNN fine-tuned by Simplified-SFNN 2 hidden layers 2.63 1.34 1.51ReLU-DNN ReLU-DNN 3 hidden layers 1.43 1.34 1.24SFNN ReLU-DNN 3 hidden layers 17.83 4.15 1.49Simplified-SFNN fine-tuned by Simplified-SFNN 3 hidden layers 1.28 1.25 1.04ReLU-DNNfine-tuned by Simplified-SFNN 3 hidden layers 1.27 (0.16) 1.24 (0.1) 1.03 (0.21)SFNN fine-tuned by Simplified-SFNN 3 hidden layers 1.56 1.82 1.16ReLU-DNN ReLU-DNN 4 hidden layers 1.49 1.34 1.30SFNN ReLU-DNN 4 hidden layers 14.64 3.85 2.17Simplified-SFNN fine-tuned by Simplified-SFNN 4 hidden layers 1.32 1.32 1.25ReLU-DNNfine-tuned by Simplified-SFNN 4 hidden layers 1.29 (0.2) 1.29 (0.05) 1.25 (0.05)SFNN fine-tuned by Simplified-SFNN 4 hidden layers 3.44 1.89 1.56Table 2: Classification test error rates [ %] on MNIST, where each layer of neural networks contains800 hidden units. All Simplified-SFNNs are constructed by replacing the first hidden layer of a base-line DNN with stochastic hidden layer. We also consider training DNN and fine-tuning Simplified-SFNN using batch normalization (BN) and dropout (DO). The performance improvements beyondbaseline DNN (due to fine-tuning DNN parameters under Simplified-SFNN) are calculated in thebracket.More interestingly, Simplified-SFNN consistently outperforms its baseline DNN due to the stochas-tic regularizing effect, even when we train both models using dropout (Hinton et al., 2012b) andbatch normalization (Ioffe & Szegedy, 2015). 
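For reference, the SFNN rows of Table 2 are obtained with the Monte Carlo inference rule of Section 2.1, averaging the output distribution over sampled binary hidden states; the following sketch with placeholder weights illustrates such an evaluation (it is not the code used for the table).

    import numpy as np

    rng = np.random.default_rng(0)
    sigm = lambda z: 1.0 / (1.0 + np.exp(-z))

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    W1, b1 = rng.normal(size=(50, 10)), np.zeros(50)
    W2, b2 = rng.normal(size=(3, 50)), np.zeros(3)
    x, M = rng.normal(size=10), 500                 # up to 500 samples are used at test time

    p = sigm(W1 @ x + b1)                           # marginals of the stochastic hidden layer
    probs = np.zeros(3)
    for _ in range(M):
        h1 = (rng.random(50) < p).astype(float)     # h1 ~ P(h1 | x)
        probs += softmax(W2 @ h1 + b2) / M          # running average of P(y | h1)
    prediction = probs.argmax()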
In order to confirm the regularization effects, one canagain approximate a trained Simplified-SFNN by a new deterministic DNN which we call DNNand is different from its baseline DNN under the following approximation at upper latent units abovebinary random units:EP(h`jx)sW`+1jh`wsEP(h`jx)W`+1jh`=s XiW`+1ijPh`i= 1jx!:(9)We found that DNNusing fined-tuned parameters of Simplified-SFNN also outperforms the base-line DNN as shown in Table 2 and Figure 2(b).3.3 E XPERIMENTAL RESULTS ON MULTI -MODAL LEARNING AND CONVOLUTIONALNETWORKSWe present several experimental results for both multi-modal and classification tasks on MNIST(LeCun et al., 1998), Toronto Face Database (TFD) (Susskind et al., 2010), CIFAR-10, CIFAR-100(Krizhevsky & Hinton, 2009) and SVHN (Netzer et al., 2011). Here, we present some key resultsdue to the space constraints and more detailed explanations for our experiment setups are presentedin Appendix C.We first verify that it is possible to learn one-to-many mapping via Simplified-SFNN on the TFDand MNIST datasets, where the former and the latter are used to predict multiple facial expressionsfrom the mean of face images per individual and the lower half of the MNIST digit given the upperhalf, respectively. We remark that both tasks are commonly performed in recent other works totest the multi-modal learning using SFNN (Raiko et al., 2014; Gu et al., 2015). In all experiments,we first train a baseline DNN, and the trained parameters of DNN are used for further fine-tuningthose of Simplified-SFNN. As shown in Table 3 and Figure 3, stochastic models outperform theirbaseline DNN, and generate different digits for the case of ambiguous inputs (while DNN doesnot). We also evaluate the regularization effect of Simplified-SFNN for the classification tasks onCIFAR-10, CIFAR-100 and SVHN. Table 4 reports the classification error rates using convolutionalneural networks such as Lenet-5 (LeCun et al., 1998), NIN (Lin et al., 2014) and WRN (Zagoruyko& Komodakis, 2016). Due to the regularization effects, Simplified-SFNNs consistently outperform7Under review as a conference paper at ICLR 2017Inference Model Training ModelMNIST-half TFD2 hidden layers 3 hidden layers 2 hidden layers 3 hidden layerssigmoid-DNN sigmoid-DNN 1.409 1.720 -0.064 0.005SFNN sigmoid-DNN 0.644 1.076 -0.461 -0.401Simplified-SFNN fine-tuned by Simplified-SFNN 1.474 1.757 -0.071 -0.028SFNN fine-tuned by Simplified-SFNN 0.619 0.991 -0.509 -0.423ReLU-DNN ReLU-DNN 1.747 1.741 1.271 1.232SFNN ReLU-DNN -1.019 -1.021 0.823 1.121Simplified-SFNN fine-tuned by Simplified-SFNN 2.122 2.226 0.175 0.343SFNN fine-tuned by Simplified-SFNN -1.290 -1.061 -0.380 -0.193Table 3: Test negative log-likelihood (NLL) on MNIST-half and TFD datasets, where each layer ofneural networks contains 200 hidden units. All Simplified-SFNNs are constructed by replacing thefirst hidden layer of a baseline DNN with stochastic hidden layer.Figure 3: Generated samples for predicting the lower half of the MNIST digit given the upper half.The original digits and the corresponding inputs (first). The generated samples from sigmoid-DNN(second), SFNN under the simple transformation (third), and SFNN fine-tuned by Simplified-SFNN(forth). 
We observed that SFNN fine-tuned by Simplified-SFNN can generate more various samplesfrom same inputs, e.g., 3 and 8, better than SFNN under the simple transformation.InferencemodelTraining Model CIFAR-10 CIFAR-100 SVHNLenet-5 Lenet-5 37.67 77.26 11.18Lenet-5Simplified-SFNN 33.58 73.02 9.88NIN NIN 9.51 32.66 3.21NINSimplified-SFNN 9.33 30.81 3.01WRN WRN 4.22 (4.39)y20.30 (20.04)y 3.25yWRN Simplified-SFNN(one stochastic layer)4.21y 19.98y 3.09yWRN Simplified-SFNN(two stochastic layers)4.14y 19.72y 3.06yTable 4: Test error rates [ %] on CIFAR-10, CIFAR-100 andSVHN. The error rates for WRN are from our experiments,where original ones reported in (Zagoruyko & Komodakis,2016) are in the brackets. Results with yare obtained usingthe horizontal flipping and random cropping augmentation.Epoch0 50 100 150 200Test Error [%]19.52020.52121.522WRN* trained by Simplified-SFNN (one stochastic layer)WRN* trained by Simplified-SFNN (two stochastic layers)Bseline WRNFigure 4: Test errors of WRNper eachtraining epoch on CIFAR-100.their baseline DNNs. For example, WRNoutperforms WRN by 0.08 %on CIFAR-10 and 0.58 %onCIFAR-100. We expect that if one introduces more stochastic layers, the error would be decreasedmore (see Figure 4), but it increases the fine-tuning time-complexity of Simplified-SFNN.4 C ONCLUSIONIn order to develop an efficient training method for large-scale SFNN, this paper proposes a newintermediate stochastic model, called Simplified-SFNN. We establish the connection between threemodels, i.e., DNN !Simplified-SFNN!SFNN, which naturally leads to an efficient trainingprocedure of the stochastic models utilizing pre-trained parameters of DNN. This connection natu-rally leads an efficient training procedure of the stochastic models utilizing pre-trained parametersand architectures of DNN. We believe that our work brings a new important direction for trainingstochastic neural networks, which should be of broader interest in many related applications.8Under review as a conference paper at ICLR 2017REFERENCESYoshua Bengio, Nicholas L ́eonard, and Aaron Courville. Estimating or propagating gradients through stochas-tic neurons for conditional computation. arXiv preprint arXiv:1308.3432 , 2013.Christopher M Bishop. Mixture density networks. 1994.Tianqi Chen, Ian Goodfellow, and Jonathon Shlens. Net2net: Accelerating learning via knowledge transfer.arXiv preprint arXiv:1511.05641 , 2015.Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, AaronCourville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Process-ing Systems (NIPS) , 2014.Shixiang Gu, Sergey Levine, Ilya Sutskever, and Andriy Mnih. Muprop: Unbiased backpropagation for stochas-tic neural networks. arXiv preprint arXiv:1511.05176 , 2015.Caglar Gulcehre, Marcin Moczulski, Misha Denil, and Yoshua Bengio. Noisy activation functions. arXivpreprint arXiv:1603.00391 , 2016.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXivpreprint arXiv:1603.05027 , 2016.Geoffrey E Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Se-nior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modelingin speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine , 2012a.Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 
Improvingneural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580 , 2012b.Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducinginternal covariate shift. International Conference on Machine Learning (ICML) , 2015.Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980 , 2014.Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 , 2013.Alex Krizhevsky and Geoffrey E Hinton. Learning multiple layers of features from tiny images. Master’sthesis, Department of Computer Science, University of Toronto , 2009.Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neuralnetworks. In Advances in Neural Information Processing Systems (NIPS) , 2012.Yann LeCun, L ́eon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to documentrecognition. Proceedings of the IEEE , 1998.Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. International Conference on Learning Repre-sentations (ICLR) , 2014.Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In Interna-tional Conference on Machine Learning (ICML) , 2010.Radford M Neal. Learning stochastic feedforward networks. Department of Computer Science, University ofToronto , 1990.Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits innatural images with unsupervised feature learning. NIPS Workshop on Deep Learning and UnsupervisedFeature Learning , 2011.Tapani Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh. Techniques for learning binary stochasticfeedforward neural networks. arXiv preprint arXiv:1406.2989 , 2014.Herbert Robbins and Sutton Monro. A stochastic approximation method. The annals of mathematical statistics ,1951.David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning internal representations by errorpropagation. Technical report, MIT Press, 1988.9Under review as a conference paper at ICLR 2017Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to accelerate train-ing of deep neural networks. arXiv preprint arXiv:1602.07868 , 2016.Lawrence K Saul, Tommi Jaakkola, and Michael I Jordan. Mean field theory for sigmoid belief networks.Journal of artificial intelligence research , 1996.Josh M Susskind, Adam K Anderson, and Geoffrey E Hinton. The toronto face database. Department ofComputer Science, University of Toronto, Toronto, ON, Canada, Tech. Rep , 2010.Yichuan Tang and Ruslan R Salakhutdinov. Learning stochastic feedforward neural networks. In Advances inNeural Information Processing Systems (NIPS) , 2013.Li Wan, Matthew Zeiler, Sixin Zhang, Yann L Cun, and Rob Fergus. Regularization of neural networks usingdropconnect. In International Conference on Machine Learning (ICML) , 2013.Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146 , 2016.Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. arXiv preprintarXiv:1505.00521 , 2015.A T RAINING SIMPLIFIED -SFNNThe parameters of Simplified-SFNN can be learned using a variant of the backpropagation algorithm(Rumelhart et al., 1988) in a similar manner to DNN. 
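Before the details, here is a brief sketch of how the two approximations described next (sampling to estimate the expectation in the forward pass, and a biased gradient that flows only through the Bernoulli probabilities in the backward pass, in the spirit of Raiko et al. (2014)) can be realized with automatic differentiation; the tensors are toy placeholders and this reflects our reading of the procedure, not the authors' implementation.

    import torch

    torch.manual_seed(0)
    W1 = torch.randn(50, 10, requires_grad=True)
    b1 = torch.zeros(50, requires_grad=True)
    W2 = torch.randn(20, 50, requires_grad=True)
    b2 = torch.zeros(20, requires_grad=True)
    x, gamma1, gamma2, M = torch.randn(10), 0.1, 50.0, 20

    p = torch.clamp(gamma1 * torch.relu(W1 @ x + b1), max=1.0)   # P(h1_i = 1 | x)
    samples = torch.bernoulli(p.expand(M, -1))                   # M binary draws (non-differentiable)
    h1 = p + (samples - p).detach()                              # backward pass sees only p
    expect = torch.sigmoid(h1 @ W2.t() + b2).mean(dim=0)         # Monte Carlo estimate of E[s(.)]
    h2 = torch.relu(gamma2 * (expect - 0.5))                     # 0.5 = s(0) for the sigmoid s
    h2.sum().backward()                                          # approximate gradients for W1, b1, W2, b2

Differentiating through h1 defined this way reproduces the backward-pass formulas below: the gradient with respect to W2 is the sample average of the per-sample gradient, and the gradient with respect to W1 passes through the derivative of P(h1_i = 1 | x).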
However, in contrast to DNN, there are twocomputational issues for simplified-SFNN: computing expectations with respect to stochastic unitsin forward pass and computing gradients in back pass. One can notice that both are intractable sincethey require summations over all possible configurations of all stochastic units. First, in order tohandle the issue in forward pass, we use the following Monte Carlo approximation for estimatingthe expectation:EP(h1jx)sW2jh1+b2jw1MMXm=1sW2jh(m)+b2j; h(m)Ph1jx;whereMis the number of samples. This random estimator is unbiased and has relatively lowvariance (Tang & Salakhutdinov, 2013) since its accuracy does not depend on the dimensionality ofh1and one can draw samples from the exact distribution. Next, in order to handle the issue in backpass, we use the following approximation inspired by (Raiko et al., 2014):@@W2jEP(h1jx)sW2jh1+b2jw1MXm@@W2jsW2jh(m)+b2j;@@W1iEP(h1jx)sW2jh1+b2jwW2ijMXms0W2jh(m)+b2j@@W1iPh1i= 1jx;where h(m)Ph1jxandMis the number of samples. In our experiments, we commonlychooseM= 20 .B E XTENSIONS OF SIMPLIFIED -SFNNIn this section, we describe how the network knowledge transferring between Simplified-SFNN andDNN, i.e., Theorem 1, generalizes to multiple layers and general activation functions.B.1 E XTENSION TO MULTIPLE LAYERSA deeper Simplified-SFNN with Lhidden layers can be defined similarly as the case of L= 2. Wealso establish network knowledge transferring between Simplified-SFNN and DNN with Lhiddenlayers as stated in the following theorem. Here, we assume that stochastic layers are not consecutivefor simpler presentation, but the theorem is generalizable for consecutive stochastic layers.10Under review as a conference paper at ICLR 2017Theorem 2 Assume that both DNN and Simplified-SFNN with Lhidden layers have same networkstructure with non-negative activation function f. Given parameters fcW`;bb`:`= 1;:::;LgofDNN and input dataset D, choose the same ones for Simplified-SFNN initially and modify them foreach`-th stochastic layer and its upper layer as follows:` 1`;`+1;W`+1;b`+1 ``+1s0(0);cW`+1`+1;bb`+1``+1!;where`= maxi;x2DfcW`ih`1(x) +bb`iand`+1is any positive constant. Then, it follows thatlim`+1!18stochastic hidden layer `hLj(x)bhLj(x)= 0;8j;x2D:The above theorem again implies that it is possible to transfer knowledge from DNN to Simplified-SFNN by choosing large l+1. The proof of Theorem 2 is similar to that of Theorem 1 and given inAppendix D.2.B.2 E XTENSION TO GENERAL ACTIVATION FUNCTIONSIn this section, we describe an extended version of Simplified-SFNN which can utilize any activationfunction. To this end, we modify the definitions of stochastic layers and their upper layers byintroducing certain additional terms. If the `-th hidden layer is stochastic, then we slightly modifythe original definition (5) as follows:Ph`jx=N`Yi=1Ph`ijxwithPh`i= 1jx= min`fW1ix+b1i+12;1;wheref:R!Ris a non-linear (possibly, negative) activation function with jf0(x)j1for allx2R. In addition, we re-define its upper layer as follows:h`+1(x) ="f `+1 EP(h`jx)sW`+1jh`+b`+1js(0)s0(0)2XiW`+1ij!!:8j#;where h0(x) =xands:R!Ris a differentiable function with js00(x)j1for allx2R.Under this general Simplified-SFNN model, we also show that transferring network knowledge fromDNN to Simplified-SFNN is possible as stated in the following theorem. 
Here, we again assumethat stochastic layers are not consecutive for simpler presentation.Theorem 3 Assume that both DNN and Simplified-SFNN with Lhidden layers have same networkstructure with non-linear activation function f. Given parameters fcW`;bb`:`= 1;:::;Lgof DNNand input dataset D, choose the same ones for Simplified-SFNN initially and modify them for each`-th stochastic layer and its upper layer as follows:` 12`;`+1;W`+1;b`+1 2``+1s0(0);cW`+1`+1;bb`+12``+1!;where`= maxi;x2DfcW`ih`1(x) +bb`i, and`+1is any positive constant. Then, it follows thatlim`+1!18stochastic hidden layer `hLj(x)bhLj(x)= 0;8j;x2D:We omit the proof of the above theorem since it is somewhat direct adaptation of that of Theorem 2.C E XPERIMENTAL SETUPSIn this section, we describe detailed explanation about all the experiments described in Section 3.In all experiments, the softmax and Gaussian with the standard deviation of 0.05 are used as theoutput probability for the classification task and the multi-modal prediction, respectively. The losswas minimized using ADAM learning rule (Kingma & Ba, 2014) with a mini-batch size of 128. Weused an exponentially decaying learning rate.11Under review as a conference paper at ICLR 2017C.1 C LASSIFICATION ON MNISTThe MNIST dataset consists of 2828pixel greyscale images, each containing a digit 0 to 9 with60,000 training and 10,000 test images. For this experiment, we do not use any data augmentationor pre-processing. Hyper-parameters are tuned on the validation set consisting of the last 10,000training images. All Simplified-SFNNs are constructed by replacing the first hidden layer of abaseline DNN with stochastic hidden layer. As described in Section 3.2, we train Simplified-SFNNsunder the two-stage procedure: first train a baseline DNN for first 200 epochs, and the trainedparameters of DNN are used for initializing those of Simplified-SFNN. For 50 epochs, we trainsimplified-SFNN. We choose the hyper-parameter 2= 50 in the parameter transformation. AllSimplified-SFNNs are trained with M= 20 samples at each epoch, and in the test, we use 500samples.C.2 M ULTI -MODAL REGRESSION ON TFD AND MNISTThe Toronto Face Database (TFD) (Susskind et al., 2010) dataset consists of 4848pixel greyscaleimages, each containing a face image of 900 individuals with 7 different expressions. Similar to(Raiko et al., 2014), we use 124 individuals with at least 10 facial expressions as data. We randomlychoose 100 individuals with 1403 images for training and the remaining 24 individuals with 326images for the test. We take the mean of face images per individual as the input and set the outputas the different expressions of the same individual. The MNIST dataset consists of 2828pixelgreyscale images, each containing a digit 0 to 9 with 60,000 training and 10,000 test images. Forthis experiments, each pixel of every digit images is binarized using its grey-scale value. We take theupper half of the MNIST digit as the input and set the output as the lower half of it. All Simplified-SFNNs are constructed by replacing the first hidden layer of a baseline DNN with stochastic hiddenlayer. We train Simplified-SFNNs with M= 20 samples at each epoch, and in the test, we use 500samples. We use 200 hidden units for each layer of neural networks in two experiments. Learningrate is chosen from f0.005 , 0.002, 0.001, ... , 0.0001 g, and the best result is reported for both tasks.C.3 C LASSIFICATION ON CIFAR-10, CIFAR-100 AND SVHNThe CIFAR-10 and CIFAR-100 datasets consist of 50,000 training and 10,000 test images. 
TheSVHN dataset consists of 73,257 training and 26,032 test images.2We pre-process the data usingglobal contrast normalization and ZCA whitening. For these datasets, we design a convolutionalversion of Simplified-SFNN. In a similar manner to the case of fully-connected networks, one candefine a stochastic convolution layer, which considers the input feature map as a binary random ma-trix and generates the output feature map as defined in (6). All Simplified-SFNNs are constructed byreplacing a hidden feature map of a baseline models, i.e., Lenet-5, NIN and WRN, with stochasticone as shown in Figure 5(d). We use WRN with 16 and 28 layers for SVHN and CIFAR datasets, re-spectively, since they showed state-of-the-art performance as reported by Zagoruyko & Komodakis(2016). In case of WRN, we introduce up to two stochastic convolution layers.For 100 epochs, wefirst train baseline models, i.e., Lenet-5, NIN and WRN, and trained parameters are used for ini-tializing those of Simplified-SFNNs. All Simplified-SFNNs are trained with M= 5 samples andthe test error is only measured by the approximation (9). The test errors of baseline models aremeasured after training them for 200 epochs similar to Zagoruyko & Komodakis (2016).D P ROOFS OF THEOREMSD.1 P ROOF OF THEOREM 1First consider the first hidden layer, i.e., stochastic layer. Let 1= maxi;x2DfcW1ix+bb1ibethe maximum value of hidden units in DNN. If we initialize the parameters1;W1;b1 11;cW1;bb1, then the marginal distribution of each hidden unit ibecomesPh1i= 1jx;W1;b1=min1fcW1ix+bb1i;1=11fcW1ix+bb1i;8i;x2D:(10)2We do not use the extra SVHN dataset for training.12Under review as a conference paper at ICLR 2017[Convolution (Conv.)] [Fully -connected] [Fully -connected] [Fully -connected] [Max pool] [Stochastic (Stoc .) Conv. ] [Max pool]6 feature maps (f. maps)Input Output6 Stochastic (Stoc .) f. maps84 units16 f. maps16 f. maps120 unitsA(a)[Conv.] [Conv.] [Conv.] [Max pool] [Conv.] [Conv.] [Conv.] [Stoc. Conv. ][Avg pool] [Avg pool]160f. maps96f. maps192f. maps192f. maps192Stoc. f. maps10f. mapsOutput Input192f. maps96f. maps192f. maps192f. mapsA[Conv.] [Conv.]192f. maps(b)InputA16f. maps[Conv.]64∗2uu−1f. maps[Conv.]64∗2uu−1,f. maps64∗2uu−1f. maps64∗2uu−1256f. mapsOutputStoc. f. maps[Conv.] [Conv.][Stoc. Conv. ][Avg pool] [Fully -connected][Conv.]eeeeeeee×3(uu=1,2,3)iiii(vv′≤3&uu=3)64∗2uu−1Stoc. f. maps[Conv.]eeeeeeeeiiii(vv′≤2&uu=3)[Stoc. Conv. ](c)InputA16f. maps[Conv.]160∗2uu−1f. maps160∗2uu−1f. maps[Conv.]160∗2uu−1,f. maps160∗2uu−1f. maps160∗2uu−1640f. mapsOutputStoc. f. maps[Conv.] [Conv.] [Conv.][Stoc. Conv. ][Avg pool] [Fully -connected][Conv.]eeeeeeee×3(vv=1,2,3)×3(uu=1,2,3)iiii(vv≥vv′&uu=3)(d)Figure 5: The overall structures of (a) Lenet-5, (b) NIN, (c) WRN with 16 layers, and (d) WRN with28 layers. The red feature maps correspond to the stochastic ones. In case of WRN, we introduceone (v0= 3) and two (v0= 2) stochastic feature maps.Next consider the second hidden layer. From Taylor’s theorem, there exists a value zbetween 0andxsuch thats(x) =s(0) +s0(0)x+R(x), whereR(x) =s00(z)x22!. Since we consider a binaryrandom vector, i.e., h12f0;1gN1, one can writeEP(h1jx)sjh1=Xh1s(0) +s0(0)jh1+Rjh1Ph1jx=s(0) +s0(0) XiW2ijP(h1i= 1jx) +b2j!+EP(h1jx)R(j(h1));wherejh1:=W2jh1+b2jis the incoming signal. 
From (6) and (10), for every hidden unit j, itfollows thath2jx;W2;b2=f 2 s0(0) 11XiW2ijbh1i(x) +b2j!+EP(h1jx)Rjh1!!:13Under review as a conference paper at ICLR 2017Since we assume that jf0(x)j1, the following inequality holds:h2j(x;W2;b2)f 2s0(0) 11XiW2ijbh1i(x) +b2j!!2EP(h1jx)R(j(h1))22EP(h1jx)hW2jh1+b2j2i;where we usejs00(z)j<1for the last inequality. Therefore, it follows thath2jx;W2;b2bh2jx;cW2;bb21PicW2ij+bb2j1122s0(0)2;8j;since we set2;W2;b2 21s0(0);cW22;112bb2. This completes the proof of Theorem 1.D.2 P ROOF OF THEOREM 2For the proof of Theorem 2, we first state the two key lemmas on error propagation in Simplified-SFNN.Lemma 4 Assume that there exists some positive constant Bsuch thath`1i(x)bh`1i(x)B;8i;x2D;and the`-th hidden layer of NCSFNN is standard deterministic layer as defined in (7). Given pa-rametersfcW`;bb`gof DNN, choose same ones for NCSFNN. Then, the following inequality holds:h`j(x)bh`j(x)BN`1cW`max;8j;x2D:wherecW`max= maxijcW`ij.Proof. See Appendix D.3. Lemma 5 Assume that there exists some positive constant Bsuch thath`1i(x)bh`1i(x)B;8i;x2D;and the`-th hidden layer of simplified-SFNN is stochastic layer. Given parametersfcW`;cW`+1;bb`;bb`+1gof DNN, choose those of Simplified-SFNN as follows:` 1`;`+1;W`+1;b`+1 ``+1s0(0);cW`+1`+1;bb`+1``+1!;where`= maxj;x2DfcW`jh`1(x) +bb`jand`+1is any positive constant. Then, it follows thath`+1k(x)bh`+1k(x)BN`1N`cW`maxcW`+1max+`N`cW`+1max+bb`+1max1`22s0(0)`+1;8k;x2D;wherebb`max= maxjbb`jandcW`max= maxijcW`ij.Proof. See Appendix D.4. Assume that `-th layer is first stochastic hidden layer in Simplified-SFNN. Then, from Theorem 1,we haveh`+1j(x)bh`+1j(x)`N`cW`+1max+bb`+1max1`22s0(0)`+1;8j;x2D: (11)14Under review as a conference paper at ICLR 2017According to Lemma 4 and 5, the final error generated by the right hand side of (11) is bounded by``N`cW`+1max+bb`+1max1`22s0(0)`+1; (12)where`=LQ`0=l+2N`01cW`0max:One can note that every error generated by each stochastic layeris bounded by (12). Therefore, it follows thathLj(x)bhLj(x)X`:stochastic hidden layer0B@``N`cW`+1max+bb`+1max1`22s0(0)`+11CA;8j;x2D:From above inequality, we can conclude thatlim`+1!18stochastic hidden layer `hLj(x)bhLj(x)= 0;8j;x2D:This completes the proof of Theorem 2.D.3 P ROOF OF LEMMA 4From assumption, there exists some constant isuch thatjij<B andh`1i(x) =bh`1i(x) +i;8i;x:By definition of standard deterministic layer, it follows thath`j(x) =f XicW`ijh`1i(x) +bb`1j!=f XicW`ijbh`1i(x) +XicW`iji+bb`j!:Since we assume that jf0(x)j1, one can conclude thath`j(x)f XicW`ijbh`1i(x) +bb`j!XicW`ijiBXicW`ijBN`1cW`max:This completes the proof of Lemma 4.D.4 P ROOF OF LEMMA 5From assumption, there exists some constant `1isuch that`1i<B andh`1i(x) =bh`1i(x) +`1i;8i;x: (13)Let`= maxj;x2DfcW`jh`1(x) +bb`jbe the maximum value of hidden units. If we initialize theparameters`;W`;b` 1`;cW`;bb`, then the marginal distribution becomesPh`j= 1jx;W`;b`= min`fcW`jh`1(x) +bb`j;1=1`fcW`jh`1(x) +bb`j;8j;x:From (13), it follows thatPh`j= 1jx;W`;b`=1`f cW`jbh`1(x) +XicW`ij`1i+bb`j!;8j;x:Similar to Lemma 4, there exists some constant `jsuch that`j<BN`1cW`maxandPh`j= 1jx;W`;b`=1`bh`j(x) +`j;8j;x: (14)15Under review as a conference paper at ICLR 2017Next, consider the upper hidden layer of stochastic layer. From Taylor’s theorem, there exists avaluezbetween 0andtsuch thats(x) =s(0) +s0(0)x+R(x), whereR(x) =s00(z)x22!. 
Since weconsider a binary random vector, i.e., h`2f0;1gN`, one can writeEP(h`jx)[s(k(h`))] =Xh`s(0) +s0(0)k(h`) +Rk(h`)P(h`jx)=s(0) +s0(0)0@XjW`+1jkP(h`j= 1jx) +b`+1k1A+Xh`R(k(h`))P(h`jx);wherek(h`) =W`+1kh`+b`+1kis the incoming signal. From (14) and above equation, for everyhidden unitk, we haveh`+1k(x;W`+1;b`+1)=f0@`+10@s0(0)0@1`0@XjW`+1jkbh`j(x) +XjW`+1jk`j1A+b`+1k1A+EP(h`jx)R(k(h`))1A1A:Since we assume that jf0(x)j<1, the following inequality holds:h`+1k(x;W`+1;b`+1)f0@`+1s0(0)0@1`XjW`+1ijbh`j(x) +b`+1j1A1A`+1s0(0)`XjW`+1jk`j+`+1EP(h`jx)R(k(h`))`+1s0(0)`XjW`+1jk`j+`+12EP(h`jx)hW`+1kh`+b`+1k2i; (15)where we usejs00(z)j<1for the last inequality. Therefore, it follows thath`+1k(x)bh`+1k(x)BN`1N`cW`maxcW`+1max+`N`cW`+1max+bb`+1max1`22s0(0)`+1;since we set`+1;W`+1;b`+1 `+1`s0(0);cW`+1`+1;1`bb`+1`+1. This completes the proof ofLemma 5.16
ByL7eIWEx
H1acq85gx
ICLR.cc/2017/conference/-/paper317/official/review
{"title": "OK method, but lack of strong evaluation on real-world problems and lack of significant methodological contributions", "rating": "6: Marginally above acceptance threshold", "review": "The authors propose a new approach for estimating maximum entropy distributions\nsubject to expectation constraints. Their approach is based on using\nnormalizing flow networks to non-linearly transform samples from a tractable\ndensity function using invertible transformations. This allows access to the\ndensity of the resulting distribution. The parameters of the normalizing flow\nnetwork are learned by maximizing a stochastic estimate of the entropy\nobtained by sampling and evaluating the log-density on the obtained samples.\nThis stochastic optimization problem includes constraints on expectations with\nrespect to samples from the normalizing flow network. These constraints are\napproximated in practice by sampling and are therefore stochastic. The\noptimization problem is solved by using the augmented Lagrangian method. The\nproposed method is validated on a toy problem with a Dirichlet distribution and\non a financial problem involving the estimation of price changes from option\nprice data.\n\nQuality:\n\nThe paper seems to be technically sound. My only concern would the the approach\nfollowed to apply the augmented Lagrangian method when the objective and the\nconstraints are stochastic. The authors propose their own solution to this\nproblem, based on a hypothesis test, but I think it is likely that this has\nalready been addressed before in the literature. It would be good if the\nauthors could comment on this.\n\nThe experiments performed show that the proposed approach can outperform Gibbs\nsampling from the exact optimal distribution or at least be equivalent, with\nthe advantage of having a closed form solution for the density.\n\nI am concern about the difficulty of he problems considered.\nThe Dirichlet distributions are relatively smooth and the distribution in the\nfinancial problem is one-dimensional (in this case you can use numerical\nmethods to compute the normalization constant and plot the exact density).\nThey seem to be very easy and do not show how the method would perform in more\nchallenging settings: high-dimensions, more complicated non-linear constraints,\netc...\n\nClarity:\n\nThe paper is clearly written and easy to follow.\n\nOriginality:\n\nThe proposed method is not very original since it is based on applying an\nexisting technique (normalizing flow networks) to a specific problem: that of\nfinding a maximum entropy distribution. The methodological contributions are\nalmost non-existing. One could only mention the combination of the normalizing\nflow networks with the augmented Lagrangian method. \n\nSignificance:\n\nThe results seem to be significant in the sense that the authors are able to\nfind densities of maximum entropy distributions, something which did not seem\nto be possible before. However, it is not clearly how useful this can be in\npractice. The problem that they address with real-world data (financial data)\ncould have been solved as well by using 1-dimensional quadrature. The authors\nshould consider more challenging problems which have a clear practical\ninterest.\n\nMinor comments:\n\nMore details should be given about how the plot in the bottom right of Figure 2 has been obtained.\n\n\"a Dirichlet whose KL to the true p\u2217 is small\": what do you mean by this? 
Can you give more details on how you choose that Dirichlet?\n\nI updated my review score after having a look at the last version of the paper submitted by the authors, which includes new experiments.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Maximum Entropy Flow Networks
["Gabriel Loaiza-Ganem *", "Yuanjun Gao *", "John P. Cunningham"]
Maximum entropy modeling is a flexible and popular framework for formulating statistical models given partial knowledge. In this paper, rather than the traditional method of optimizing over the continuous density directly, we learn a smooth and invertible transformation that maps a simple distribution to the desired maximum entropy distribution. Doing so is nontrivial in that the objective being maximized (entropy) is a function of the density itself. By exploiting recent developments in normalizing flow networks, we cast the maximum entropy problem into a finite-dimensional constrained optimization, and solve the problem by combining stochastic optimization with the augmented Lagrangian method. Simulation results demonstrate the effectiveness of our method, and applications to finance and computer vision show the flexibility and accuracy of using maximum entropy flow networks.
["flexible", "popular framework", "statistical models", "partial knowledge", "traditional", "continuous density", "smooth", "invertible transformation", "simple distribution"]
https://openreview.net/forum?id=H1acq85gx
https://openreview.net/pdf?id=H1acq85gx
https://openreview.net/forum?id=H1acq85gx&noteId=ByL7eIWEx
Published as a conference paper at ICLR 2017MAXIMUM ENTROPY FLOW NETWORKSGabriel Loaiza-Ganem, Yuanjun Gao& John P. CunninghamDepartment of StatisticsColumbia UniversityNew York, NY 10027, USAfgl2480,yg2312,jpc2181 g@columbia.eduABSTRACTMaximum entropy modeling is a flexible and popular framework for formulat-ing statistical models given partial knowledge. In this paper, rather than the tra-ditional method of optimizing over the continuous density directly, we learn asmooth and invertible transformation that maps a simple distribution to the de-sired maximum entropy distribution. Doing so is nontrivial in that the objectivebeing maximized (entropy) is a function of the density itself. By exploiting recentdevelopments in normalizing flow networks, we cast the maximum entropy prob-lem into a finite-dimensional constrained optimization, and solve the problem bycombining stochastic optimization with the augmented Lagrangian method. Sim-ulation results demonstrate the effectiveness of our method, and applications tofinance and computer vision show the flexibility and accuracy of using maximumentropy flow networks.1 I NTRODUCTIONThe maximum entropy (ME) principle (Jaynes, 1957) states that subject to some given prior knowl-edge, typically some given list of moment constraints, the distribution that makes minimal additionalassumptions – and is therefore appropriate for a range of applications from hypothesis testing to priceforecasting to texture synthesis – is that which has the largest entropy of any distribution obeyingthose constraints. First introduced in statistical mechanics by Jaynes (1957), and considered bothcelebrated and controversial, ME has been extensively applied in areas including natural languageprocessing (Berger et al., 1996), ecology (Phillips et al., 2006), finance (Buchen & Kelly, 1996),computer vision (Zhu et al., 1998), and many more.Continuous ME modeling problems typically include certain expectation constraints, and are usuallysolved by introducing Lagrange multipliers, which under typical assumptions yields an exponentialfamily distribution (also called Gibbs distribution) with natural parameters such that the expectationconstraints are obeyed. Unfortunately, fitting ME distributions in even modest dimensions posessignificant challenges. First, optimizing the Lagrangian for a Gibbs distribution requires evaluatingthe normalizing constant, which is in general computationally very costly and error prone. Secondly,in all but the rarest cases, there is no way to draw samples independently and identically from thisGibbs distribution, even if one could derive it. Third, unlike in the discrete case where a number ofrecent and exciting works have addressed the problem of estimating entropy from discrete-valueddata (Jiao et al., 2015; Valiant & Valiant, 2013), estimating differential entropy from data samplesremains inefficient and typically biased. These shortcomings are critical and costly, given the com-mon use of ME distributions for generating reference data samples for a null distribution of a teststatistic. There is thus ample need for a method that can both solve the ME problem and produce asolution that is easy and fast to sample.In this paper we develop maximum entropy flow networks (MEFN), a stochastic-optimization-basedframework and algorithm for fitting continuous maximum entropy models. Two key steps are re-quired. 
First, conceptually, we replace the idea of maximizing entropy over a density directly withmaximizing, over the parameter space of an indexed function family, the entropy of the densityinduced by mapping a simple distribution (a Gaussian) through that optimized function. ModernThese authors contributed equally.1Published as a conference paper at ICLR 2017neural networks, particularly in variational inference (Kingma & Welling, 2013; Rezende & Mo-hamed, 2015), have successfully employed this same idea to generate complex distributions, andwe look to similar technologies. Secondly, unlike most other objectives in this network literature,the entropy objective itself requires evaluation of the target density directly, which is unavailablein most traditional architectures. We overcome this potential issue by learning a smooth, invertibletransformation that maps a simple distribution to an (approximate) ME distribution. Recent develop-ments in normalizing flows (Rezende & Mohamed, 2015; Dinh et al., 2016) allow us to avoid biasedand computationally inefficient estimators of differential entropy (such as the nearest-neighbor classof estimators like that of Kozachenko-Leonenko; see Berrett et al. (2016)). Our approach avoidscalculation of normalizing constants by learning a map with an easy-to-compute Jacobian, yieldingtractable probability density computation. The resulting transformation also allows us to reliablygenerate iid samples from the learned ME distribution. We demonstrate MEFN in detail in ex-amples where we can access ground truth, and then we demonstrate further the ability of MEFNnetworks in equity option prices fitting and texture synthesis.Primary contributions of this work include: (i)addressing the substantial need for methods to sampleME distributions; (ii)introducing ME problems, and the value of including entropy in a range ofgenerative modeling problems, to the deep learning community; (iii)the novel use of constrainedoptimization for a deep learning application; and (iv)the application of MEFN to option pricingand texture synthesis, where in the latter we show significant increase in the diversity of synthesizedtextures (over current state of the art) by using MEFN.2 B ACKGROUND2.1 M AXIMUM ENTROPY MODELING AND GIBBS DISTRIBUTIONWe consider a continuous random variable Z2Z Rdwith density p, wherephas differentialentropyH(p) =Rp(z) logp(z)dzand support supp(p). The goal of ME modeling is to find, andthen be able to easily sample from, the maximum entropy distribution given a set of moment andsupport constraints, namely the solution to:p=maximizeH(p) (1)subject toEZp[T(Z)] = 0supp(p) =Z;whereT(z) = (T1(z);:::;Tm(z)) :Z!Rmis the vector of known (assumed sufficient) statistics,andZis the given support of the distribution. Under standard regularity conditions, the optimizationproblem can be solved by Lagrange multipliers, yielding an exponential family pof the form:p(z)/e>T(z)1(z2Z) (2)where2Rmis the choice of natural parameters of psuch thatEp[T(Z)] = 0 . Despite thissimple form, these distributions are only in rare cases tractable from the standpoint of calculating, calculating the normalizing constant of p, and sampling from the resulting distribution. Thereis extensive literature on finding numerically (Darroch & Ratcliff, 1972; Salakhutdinov et al.,2002; Della Pietra et al., 1997; Dudik et al., 2004; Malouf, 2002; Collins et al., 2002), but doing sorequires computing normalizing constants, which poses a challenge even for problems with modestdimensions. 
Also, even if is correctly found, it is still not trivial to sample from p. Problem-specific sampling methods (such as importance sampling, MCMC, etc.) have to be designed andused, which is in general challenging (burn-in, mixing time, etc.) and computationally burdensome.2.2 N ORMALIZING FLOWSFollowing Rezende & Mohamed (2015), we define a normalizing flow as the transformation ofa probability density through a sequence of invertible mappings. Normalizing flows provide anelegant way of generating a complicated distribution while maintaining tractable density evaluation.Starting with a simple distribution Z02Rdp0(usually taken to be a standard multivariate2Published as a conference paper at ICLR 2017Gaussian), and by applying kinvertible and smooth functions fi:Rd!Rd(i= 1;:::;k ), theresulting variable Zk=fkfk1f1(Z0)has density:pk(zk) =p0(f11f12f1k(zk))kYi=1jdet(Ji(zi1))j1; (3)whereJiis the Jacobian of fi. If the determinant of Jican be easily computed, pkcan be computedefficiently.Rezende & Mohamed (2015) proposed two specific families of transformations for variational in-ference, namely planar flows and radial flows, respectively:fi(z) =z+uih(wTiz+bi) andfi(z) =z+ih(i;ri)(zz0i); (4)wherebi2R,ui;wi2Rdandhis an activation function in the planar case, and where i2R,i>0,z0i2Rd,h(;r) = 1=(+r)andri=jjzz0ijjin the radial. Recently Dinhet al. (2016) proposed a normalizing flow with convolutional, multiscale structure that is suitable forimage modeling and has shown promise in density estimation for natural images.3 M AXIMUM ENTROPY FLOW NETWORK (MEFN) ALGORITHM3.1 F ORMULATIONInstead of solving Equation 2, we propose solving Equation 1 directly by optimizing a trans-formation that maps a random variable Z0, with simple distribution p0, to the ME distribution.Given a parametric family of normalizing flows F=ff;2Rqg, we denote p(z) =p0(f1(z))jdet(J(z))j1as the distribution of the variable f(Z0), whereJis the Jacobianoff. We then rewrite the ME problem as:=maximizeH(p) (5)subject toEZ0p0[T(f(Z0))] = 0supp(p) =Z:Whenp0is continuous andFis suitably general, the program in Equation 5 recovers the ME dis-tributionpexactly. With a flexible transformation family, the ME distribution can be well approx-imated. In experiments we found that taking p0to be a standard multivariate normal distributionachieves good empirical performance. Taking p0to be a bounded distribution (e.g. uniform distri-bution) is problematic for learning transformations near the boundary, and heavy tailed distributions(e.g. Cauchy distribution) caused similar trouble due to large numbers of outliers.3.2 A LGORITHMWe solved Equation 5 using the augmented Lagrangian method. Denote R() =E(T(f(Z0))),the augmented Lagrangian method uses the following objective:L(;;c) =H(p) +>R() +c2jjR()jj2(6)where2Rmis the Lagrange multiplier and c>0is the penalty coefficient. We minimize Equa-tion6for a non-decreasing sequence of cand well-chosen . As a technical note, the augmentedLagrangian method is guaranteed to converge under some regularity conditions (Bertsekas, 2014).As is usual in neural networks, a proof of these conditions is challenging and not yet available,though intuitive arguments (see Appendix xA) suggest that most of them should hold. Due to thenon rigorous nature of these arguments, we rely on the empirical results of the algorithm to claimthat it is indeed solving the optimization problem.For a fixed (;c)pair, we optimize Lwith stochastic gradient descent. 
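A minimal NumPy sketch of this density bookkeeping, assuming tanh planar layers with arbitrary illustrative parameter values rather than the trained networks used in the experiments; it pushes Gaussian samples through a stack of planar flows (Equations 3-4) and forms the Monte Carlo entropy estimate that reappears as Equation (7) below:

```python
import numpy as np

def planar_layer(z, u, w, b):
    """One planar flow z -> z + u * tanh(w.z + b), plus per-sample log|det J|.

    z: (n, d) samples; u, w: (d,) parameters; b: scalar.
    log|det J| = log|1 + u.psi(z)| with psi(z) = tanh'(w.z + b) * w.
    Invertibility requires w.u >= -1 (Rezende & Mohamed reparametrize u to enforce it);
    here the small random u below is only an illustration.
    """
    a = z @ w + b                                   # (n,)
    z_new = z + np.outer(np.tanh(a), u)             # (n, d)
    psi = (1.0 - np.tanh(a) ** 2)[:, None] * w      # (n, d)
    logdet = np.log(np.abs(1.0 + psi @ u) + 1e-12)  # (n,)
    return z_new, logdet

def flow_sample_and_logdensity(n, d, layers, rng):
    """Draw z0 ~ N(0, I), push through planar layers, track log p_phi(z_k) via Eq. (3)."""
    z = rng.standard_normal((n, d))
    log_p = -0.5 * np.sum(z ** 2, axis=1) - 0.5 * d * np.log(2.0 * np.pi)
    for (u, w, b) in layers:
        z, logdet = planar_layer(z, u, w, b)
        log_p = log_p - logdet                      # change of variables
    return z, log_p

rng = np.random.default_rng(0)
d = 2
layers = [(0.1 * rng.standard_normal(d),            # arbitrary, untrained parameters
           rng.standard_normal(d),
           float(rng.standard_normal())) for _ in range(10)]
z, log_p = flow_sample_and_logdensity(1000, d, layers, rng)
entropy_estimate = -np.mean(log_p)                  # Monte Carlo estimate of H(p_phi), cf. Eq. (7)
print(entropy_estimate)
```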
Owing to our choice ofnetwork and the resulting ability to efficiently calculate the density p(z(i))for any sample point3Published as a conference paper at ICLR 2017Algorithm 1 Training the MEFN1:initialize=0, setc0>0and0.2:forAugmented Lagrangian iteration k= 1;:::;k maxdo3: forSGD iteration i= 1;:::;i maxdo4: Sample z(1);:::;z(n)p0, get transformed variables z(i)=f(z(i));i= 1;:::;n5: Updateby descending its stochastic gradient (using e.g. ADADELTA (Zeiler, 2012)):rL(;k;ck)1nnXi=1rlogp(z(i)) +1nnXi=1rT(z(i))k+ck2nn2Xi=1rT(z(i))2nnXi=n2+1T(z(i))6: end for7: Sample z(1);:::;z(~n)p0, get transformed variables z(i)=f(z(i));i= 1;:::;~n8: Updatek+1=k+ck1~nP~ni=1T(z(i))9: Updateck+1ck(see text for detail)10:end forz(i)(which are easy-to-sample iid draws from the multivariate normal p0), we compute the unbiasedestimator of H(p)with:H(p)1nnXi=1logp(f(z(i))) (7)R()can also be estimated without bias by taking a sample average of z(i)draws. The resultingoptimization procedure is detailed in Algorithm 1, of which step 9 requires some detail: denotingkas the resulting afterimax SGD iterations at the augmented Lagrangian iteration k, the usualupdate rule for c(Bertsekas, 2014) is:ck+1=ck, ifjjR(k+1)jj>jjR(k)jjck, otherwise(8)where2(0;1)and > 1. Monte Carlo estimation of R()sometimes caused cto be updatedtoo fast, causing numerical issues. Accordingly, we changed the hard update rule for cto a prob-abilistic update rule: a hypothesis test is carried out with null hypothesis H0:E[jjR(k+1)jj] =E[jjR(k)jj]and alternative hypothesis H1:E[jjR(k+1)jj]> E[jjR(k)jj]. Thep-valuepiscomputed, and ck+1is updated to ckwith probability 1p. We used a two-sample t-test to cal-culate thep-value. What results is a robust and novel algorithm for estimating maximum entropydistributions, while preserving the critical properties of being both easy to calculate densities ofparticular points, and being trivially able to produce truly iid samples.4 E XPERIMENTSWe first construct an ME problem with a known solution ( x4.1), and we analyze the MEFN algorithmwith respect to the ground truth and to an approximate Gibbs solution. These examples test thevalidity of our algorithm and illustrate its performance. xB andx4.3 then applies the MEFN to afinancial data application (predicting equity option values) and texture synthesis, respectively, toillustrate the flexibility and practicality of our algorithm.Forx4.1 andxB, We use 10 layers of planar flow with a final transformation g(specified below) thattransforms samples to the specified support, and use with ADADELTA (Zeiler, 2012). For x4.3 weuse real NVP structure and use ADAM (Kingma & Ba, 2014) with learning rate = 0:001. For all ourexperiments, we use imax= 3000 ,= 4,= 0:25. Forx4.1 andxB we usen= 300 ,~n= 1000 ,kmax= 10 ; Forx4.3 we usen= ~n= 2,kmax= 8.4.1 A MAXIMUM ENTROPY PROBLEM WITH KNOWN SOLUTIONFollowing the setup of the typical ME problem, suppose we are given a specified support S=fz=(z1;:::;zd1) :zi0andPd1k=1zk1gand a set of constraints E[logZk] =k(k= 1;:::;d ),4Published as a conference paper at ICLR 2017whereZd= 1Pd1k=1Zk. We then write the maximum entropy program:p=maximizeH(p) (9)subject toEZp[logZkk] = 08k= 1;:::;dsupp(p) =S:This is a general ME problem that can be solved via the MEFN. Of course, we have particularlychosen this example because, though it may not obviously appear so, the solution has a standard andtractable form, namely the Dirichlet. 
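A hedged sketch of the multiplier and penalty updates in steps 8-9 of Algorithm 1; the helper names and the convention of passing in arrays of Monte Carlo draws of T(z) and of ||R(phi)|| are illustrative assumptions rather than the implementation used for the experiments, while beta = 4 matches the setting stated above:

```python
import numpy as np
from scipy import stats

def update_lambda(lmbda, c, T_samples):
    """Step 8: lambda <- lambda + c * mean_i T(z_i); T_samples has shape (n_tilde, m)."""
    return lmbda + c * T_samples.mean(axis=0)

def update_c(c, R_norm_prev, R_norm_curr, beta=4.0, rng=None):
    """Step 9, probabilistic variant: multiply c by beta with probability 1 - p, where p is
    the one-sided p-value of a two-sample t-test of
    H0: E||R_new|| = E||R_old||  vs  H1: E||R_new|| > E||R_old||.

    R_norm_prev, R_norm_curr: 1-D arrays of Monte Carlo draws of ||R(phi)|| (an
    assumed convention for this sketch)."""
    rng = rng or np.random.default_rng()
    t_stat, p_two_sided = stats.ttest_ind(R_norm_curr, R_norm_prev, equal_var=False)
    p = p_two_sided / 2.0 if t_stat > 0 else 1.0 - p_two_sided / 2.0
    return c * beta if rng.uniform() < 1.0 - p else c

# toy check: constraint violations that grew between iterations -> c very likely increases
rng = np.random.default_rng(0)
old = np.abs(rng.normal(0.5, 0.05, size=50))
new = np.abs(rng.normal(0.8, 0.05, size=50))
print(update_lambda(np.zeros(3), 10.0, rng.normal(size=(100, 3))),
      update_c(10.0, old, new, rng=rng))
```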
This choice allows us to consider a complicated optimizationprogram that happens to have known global optimum, providing a solid test bed for the MEFN (andfor the Gibbs approach against which we will compare). Specifically, given a parameter 2Rd,the Dirichlet has density:p(z1;:::;zd1) =1B()dYk=1zk1k1((z1;:::;zd1)2S) (10)whereB()is the multivariate Beta function, and zd= 1Pd1k=1zk. Note that this Dirichletis a distribution on Sand not on the (d1)-dimensional simplex Sd1=f(z1;:::;zd) :zk0andPdk=1zk= 1g(an often ignored and seemingly unimportant technicality that needs to becorrect here to ensure the proper transformation of measure). Connecting this familiar distribution tothe ME problem above, we simply have to choose such thatk= (k) (0)fork= 1;:::;d ,where0=Pdk=1kand is the digamma function. We then can pose the above ME problemto the MEFN and compare performance against ground truth. Before doing so, we must stipulatethe transformation gthat maps the Euclidean space of the multivariate normal p0to the desiredsupportS. Any sensible choice will work well (another point of flexibility for the MEFN); we usethe standard transformation:g(z1;:::;zd1) = ez1Pd1k=1ezk+ 1;:::;ezd1Pd1k=1ezk+ 1!>(11)Note that the MEFN outputs vectors in Rd1, and not Rd, because the Dirichlet is specified as adistribution onS(and not on the simplex Sd1). Accordingly, the Jacobian is a square matrix andits determinant can be computed efficiently using the matrix determinant lemma. Here, p0is set tothe(d1)-dimensional standard normal.We proceed as follows: We choose and compute the constraints 1;:::;d. We run MEFN pre-tending we do not know or the Dirichlet form. We then take a random sample from the fitteddistribution and a random sample from the Dirichlet with parameter , and compare the two sam-ples using the maximum mean discrepancy (MMD) kernel two sample test (Gretton et al., 2012),which assesses the fit quality. We take the sample size to be 300for the two sample kernel test.Figure 1 shows an example of the transformation from normal (left panel) to MEFN (middle panel),and comparing that to the ground truth Dirichlet (right panel). The MEFN and ground truth Dirichletdensities shown in purple match closely, and the samples drawn (red) indeed appear to be iid drawsfrom the same (maximum entropy) distribution in both cases.Additionally, the middle panel of Figure 1 shows an important cautionary tale that foreshadows ourtexture synthesis results ( x4.3). One might suppose that satisfying the moment matching constraintsis adequate to produce a distribution which, if not technically the ME distribution, is still inter-estingly variable. The middle panel shows the failure of this intuition: in dark green, we show anetwork trained to simply match the moments specified above, and the resulting distribution quitepoorly expresses the variability available to a distribution with these constraints, leading to samplesthat are needlessly similar. Given the substantial interest in using networks to learn implicit genera-tive models (e.g., Mohamed & Lakshminarayanan (2016)), this concern is particularly relevant andhighlights the importance of considering entropy.Figure 2 quantitatively analyzes these results. In the left panel, for a specific choice of = (1;2;3),we show our unbiased entropy estimate of the MEFN distribution pas a function of the numberof SGD iterations (red), along with the ground truth maximum entropy H(p)(green line). 
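These ground-truth quantities can be computed directly; the following SciPy sketch (using the alpha = (1, 2, 3) mentioned above, but written here only as an illustration, not the experiment script) evaluates the constraint targets mu_k = psi(alpha_k) - psi(alpha_0), the known maximum entropy value (the Dirichlet differential entropy), and the map g of Equation (11):

```python
import numpy as np
from scipy.special import digamma
from scipy.stats import dirichlet

alpha = np.array([1.0, 2.0, 3.0])      # the specific choice referenced in the text
alpha0 = alpha.sum()

# Targets of the moment constraints E[log Z_k] = mu_k defining this ME problem
mu = digamma(alpha) - digamma(alpha0)

# Ground-truth maximum entropy: the differential entropy of Dirichlet(alpha)
H_true = dirichlet.entropy(alpha)

def g(z):
    """Map from R^{d-1} to S (Equation 11): softmax over [z, 0] with the last coordinate dropped."""
    z = np.atleast_2d(z)
    logits = np.concatenate([z, np.zeros((z.shape[0], 1))], axis=1)
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w[:, :-1]

rng = np.random.default_rng(0)
x = g(rng.standard_normal((5, alpha.size - 1)))
print(mu, H_true, x.sum(axis=1))       # each row of x lies in S: nonnegative entries summing to <= 1
```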
Note [Figure 1; panel titles: "Initial distribution p0", "MEFN result p_{φ*}", "Ground truth p*"; legend entries "p0", "True"] Figure 1: Example results from the ME problem with known Dirichlet ground truth. Left panel: The normal density p0 (purple) and iid samples from p0 (red points). Middle panel: The MEFN transforms p0 to the desired maximum entropy distribution p_{φ*} on the simplex (calculated density p_{φ*} in purple). Truly iid samples are easily drawn from p_{φ*} (red points) by drawing from p0 and mapping those points through f_{φ*}. Shown in the middle panel are the same points in the top left panel mapped through f_{φ*}. Samples corresponding to training the same network as MEFN to simply match the specified moments (ignoring entropy) are also shown (dark green points; see text). Right panel: The ground truth (in this example, known to be Dirichlet) distribution in purple, and iid samples from it in red. [Figure 2; axis and legend residue omitted; panels show the entropy estimate ("Estimated" vs. "True") over SGD iterations, the null distribution of the MMD^2_u statistic with observed values for "MEFN, KL=0.0088" and "Dirichlet, KL=0.10", and MMD^2_u p-values versus KL for MEFN and nearby Dirichlets] Figure 2: Quantitative analysis of simulation results. 
See text for description.that the MEFN stabilizes at the correct value (as a stochastic estimator, variance around that valueis expected). In the middle panel, we show the distribution of MMD values for the kernel twosample test, as well as the observed statistic for the MEFN (red) and for a randomly chosen Dirichletdistribution (gray; chosen to be close to the true optimum, making a conservative comparison). TheMMD test does not reject MEFN as being different from the true ME distribution p, but it doesreject a Dirichlet whose KL to the truepis small (see legend). In the right panel, for manydifferent Dirichlets in a small grid around a single true p, the kernel two sample test statistic iscomputed, the MMD p-value is calculated, as is the KL to the true distribution. We plot a scatterof these points in grey, and we plot the particular MEFN solution as a red star. We see that forother Dirichlets with similar KL to the true distribution as the MEFN distribution, the p-valuesseem uniform, meaning that the KLto the true is indeed very small. Again this is conservative, asthe grey points have access to the known Dirichlet form, whereas the MEFN considered the entirespace (within its network capacity) of Ssupported distributions. Given this fact, the performance ofMEFN is impressive.4.2 R ISK-NEUTRAL ASSET PRICINGWe illustrate the flexibility and practicality of our algorithm extracting the risk-neutral asset priceprobability based on option prices, an active and interesting area for ME models. We find that MEFNand the classic Gibbs approach yield comparable performances. Owing to space limitations we haveplaced these results in Appendix xB.4.3 M ODELING IMAGES OF TEXTURESConstructing generative models to generate random images with certain texture structure is an im-portant task in computer vision. A line of texture synthesis research proceeds by first extracting a set6Published as a conference paper at ICLR 2017of features that characterizes the target texture and then generate images that match the features. Theseminal work of Zhu et al. (1998) proposes constructing texture models under the ME framework,where features (or filters) of the given texture image are adaptively added in the model and a Gibbsdistribution whose expected feature matches the target texture is learnt. One major difficulty withthe method is that both model learning and image generation involve sampling from a complicatedGibbs distribution. More recent works exploit more complicated features (Portilla & Simoncelli,2000; Gatys et al., 2015; Ulyanov et al., 2016). Ulyanov et al. (2016) propose the texture net , whichuses a texture loss function by using the Gram matrices of the outputs of some convolutional layersof a pre-trained deep neural network for object recognition.While the use of these complicated features does provide high-quality synthetic texture images, thatwork focuses exclusively on generating images that match these feature (moments). Importantly,this network focuses only on generating feature-matching images without using the ME frameworkto promote the diversity of the samples. Doing so can be deeply problematic: in Figure 1 (middlepanel), we showed the lack of diversity resulting from only moment matching in that Dirichlet set-ting, and further we note that the extreme pathology would result in a point mass on the trainingimage – a global optimum for this objective, but obviously a terrible generative model for synthe-sizing textures. 
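For concreteness, the Gram-matrix statistic underlying this kind of texture loss can be sketched as follows; the feature maps are passed in as plain arrays, so this illustrates only the statistic itself and not the pretrained convolutional network or the full texture-net loss used in the experiments:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of one convolutional feature map.

    features: array of shape (H, W, C) for one image and one layer.
    Returns a (C, C) matrix of channel-wise inner products, normalized by H*W.
    """
    h, w, c = features.shape
    f = features.reshape(h * w, c)
    return f.T @ f / (h * w)

def texture_violation(gram_sample, gram_target):
    """Squared Frobenius mismatch between sample and target Gram matrices;
    zero mismatch corresponds to satisfying the texture moment constraints."""
    return float(np.sum((gram_sample - gram_target) ** 2))

# toy usage with random arrays standing in for conv activations (not real VGG features)
rng = np.random.default_rng(0)
target = gram_matrix(rng.standard_normal((32, 32, 8)))
sample = gram_matrix(rng.standard_normal((32, 32, 8)))
print(texture_violation(sample, target))
```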
Ideally, the MEFN will match the moments andpromote sample diversity.We applied MEFN to texture synthesis with an RGB representation of the 224224pixel images,z2 Z = [0;1]d, whered= 2242243. We follow Ulyanov et al. (2016) (we adaptedhttps://github.com/ProofByConstruction/texture-networks ) to create a tex-ture loss measure T(z) : [0;1]d!R, and aim to sample a diverse set of images with small momentviolation. For the transformation family Fwe use the real NVP network structure proposed in Dinhet al. (2016) (we adapted https://github.com/taesung89/real-nvp ). We use 3resid-ual blocks with 32feature maps for each coupling layer and downscale 3times. For fair comparison,we use the same real NVP structure for both1, implemented in TensorFlow (Abadi et al., 2016).As is shown in top row of figure 3, both methods generate visually pleasing images capturing thetexture structure well. The bottom row of Figure 3 shows that texture cost (left panel) is similarfor both methods, while MEFN generates figures with much larger entropy than the texture networkformulation (middle panel), which is desirable (as previously discussed). The bottom right panelof figure 3 compares the marginal distribution of the RGB values sampled from the networks: wefound that MEFN generates a more variable distribution of RGB values than the texture net. Furtherresults are in Appendix xC.Input Texture net (Ulyanov et al., 2016) MEFN (ours)Texture cost Entropy RGB histogram05000 10000 15000 20000 25000Iteration1061071081091010Texture costTexture netsMEFN05000 10000 15000 20000 25000Iteration104105106Negative Entropy0.0 0.2 0.4 0.6 0.8 1.0RGB value0.00.51.01.52.02.5DensityFigure 3: Analysis of texture synthesis experiment. See text for description.1Ulyanov et al. (2016) use a quite different generative network structure, which is not invertible and istherefore infeasible for entropy evaluation, so we replace their generative network by the real NVP structure.7Published as a conference paper at ICLR 2017We compute in Table 1 the average pairwise Euclidean distance between randomly sampled images(dL2=meani6=jkzizjk22), and MEFN gives higher dL2, quantifying diversity across images. Wealso consider an ANOV A-style analysis to measure the diversity of the images, where we think ofthe RGB values for the same pixel across multiple images as a group, and compute the within andbetween group variance. Specifically, denoting zkias the pixel value for a specific pixel k= 1;:::;dfor an image i= 1;::::;n . We partition the total sum of square SST =Pi;k(zkiz)2as the withingroup error SSW =Pi;k(zkizk)2and between group error SSB =Pi;kn(zkz)2, wherezandzkare the mean pixel values across all data and for a specific pixel k. Ideally we want thesamples to exhibit large variability across images (large SSW, within a group/pixel) and no structurein the mean image (small SSB, across groups/pixels). Indeed, the MEFN has a larger SSW, implyinghigher variability around the mean image, a smaller SSB, implying the stationarity of the generatedsamples, and a larger SST, implying larger total variability also. The MEFN produces images thatare conclusively more variable without sacrificing the quality of the texture, implicating the broadutility of ME.Table 1: Quantitative measure of image diversity using 20randomly sampled imagesMethod dL2 SST SSW SSBTexture net 11534 128680 109577 19103MEFN 17014 175604 161639 139645 C ONCLUSIONIn this paper we propose a general framework for fitting ME models. This approach is novel andhas three key features. 
First, by learning a transformation of a simple distribution rather than thedistribution itself, we are able to avoid explicitly computing an intractable normalizing constant forthe ME distribution. Second, by combining stochastic optimization with the augmented Lagrangianmethod, we can fit the model efficiently, allowing us to evaluate the ME density of any point simplyand accurately. Third, critically, this construction allows us to trivially sample iid from a ME dis-tribution, extending the utility and efficiency of the ME framework more generally. Also, accuracyequivalent to the classic Gibbs approach is in itself a contribution (owing to these other features).We illustrate the MEFN in both a simulated case with known ground truth and real data examples.There are a few recent works encouraging sample diversity in the setting of texture model-ing. Ulyanov et al. (2017) extended Ulyanov et al. (2016) by adding a penalty term using theKozachenko-Leonenko estimator Kozachenko & Leonenko (1987) of entropy. Their generative net-work is an arbitrary deep neural network rather than a normalizing flow, which is more flexible butcannot give the probability density of each sample easily so as to compute an unbiased estimatorof the entropy. Kozachenko-Leonenko is a biased estimator for entropy and requires a fairly largenumber of samples to get good performance in high-dimensional settings, hindering the scalabilityand accuracy of the method; indeed, our choice of normalizing flow networks was driven by thesepractical issues with Kozachenko-Leonenko. Lu et al. (2016) extended Zhu et al. (1998) by usinga more flexible set of filters derived from a pre-trained deep neural networks, and using parallelMCMC chains to learn and sample from the Gibbs distribution. Running parallel MCMC chains re-sults in diverse samples but can be computationally intensive for generating each new sample image.Our MEFN framework enables truly iid sampling with the ease of a feed forward network.ACKNOWLEDGMENTSWe thank Evan Archer for normalizing flow code, and Xuexin Wei, Christian Andersson Naessethand Scott Linderman for helpful discussion. This work was supported by a Sloan Fellowship and aMcKnight Fellowship (JPC).8Published as a conference paper at ICLR 2017REFERENCESMartın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg SCorrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machinelearning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467 , 2016.Adam L Berger, Vincent J Della Pietra, and Stephen A Della Pietra. A maximum entropy approachto natural language processing. Computational linguistics , 22(1):39–71, 1996.Thomas B Berrett, Richard J Samworth, and Ming Yuan. Efficient multivariate entropy estimationviak-nearest neighbour distances. arXiv preprint arXiv:1606.00304 , 2016.Dimitri P Bertsekas. Constrained optimization and Lagrange multiplier methods . Academic press,2014.Oleg Bondarenko. Estimation of risk-neutral densities using positive convolution approximation.Journal of Econometrics , 116(1):85–112, 2003.Jonathan Borwein, Rustum Choksi, and Pierre Mar ́echal. Probability distributions of assets inferredfrom option prices via the principle of maximum entropy. SIAM Journal on Optimization , 14(2):464–478, 2003.Peter W Buchen and Michael Kelly. The maximum entropy distribution of an asset inferred fromoption prices. 
Journal of Financial and Quantitative Analysis , 31(01):143–159, 1996.Anna Choromanska, Mikael Henaff, Michael Mathieu, G ́erard Ben Arous, and Yann LeCun. Theloss surfaces of multilayer networks. In AISTATS , 2015.Michael Collins, Robert E Schapire, and Yoram Singer. Logistic regression, adaboost and bregmandistances. Machine Learning , 48(1-3):253–285, 2002.John N Darroch and Douglas Ratcliff. Generalized iterative scaling for log-linear models. Theannals of mathematical statistics , pp. 1470–1480, 1972.Stephen Della Pietra, Vincent Della Pietra, and John Lafferty. Inducing features of random fields.IEEE transactions on pattern analysis and machine intelligence , 19(4):380–393, 1997.Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. arXivpreprint arXiv:1605.08803 , 2016.Miroslav Dudik, Steven J Phillips, and Robert E Schapire. Performance guarantees for regularizedmaximum entropy density estimation. In International Conference on Computational LearningTheory , pp. 472–486. Springer, 2004.Stephen Figlewski. Estimating the implied risk neutral density. 2008.Leon Gatys, Alexander S Ecker, and Matthias Bethge. Texture synthesis using convolutional neuralnetworks. In Advances in Neural Information Processing Systems , pp. 262–270, 2015.Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Sch ̈olkopf, and Alexander Smola.A kernel two-sample test. Journal of Machine Learning Research , 13(Mar):723–773, 2012.Edwin T Jaynes. Information theory and statistical mechanics. Physical review , 106(4):620, 1957.Jiantao Jiao, Kartik Venkat, Yanjun Han, and Tsachy Weissman. Minimax estimation of functionalsof discrete distributions. IEEE Transactions on Information Theory , 61(5):2835–2885, 2015.Kenji Kawaguchi. Deep learning without poor local minima. In Advances In Neural InformationProcessing Systems , pp. 586–594, 2016.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980 , 2014.Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprintarXiv:1312.6114 , 2013.9Published as a conference paper at ICLR 2017LF Kozachenko and Nikolai N Leonenko. Sample estimate of the entropy of a random vector.Problemy Peredachi Informatsii , 23(2):9–16, 1987.Yang Lu, Song-chun Zhu, and Ying Nian Wu. Learning frame models using cnn filters. In ThirtiethAAAI Conference on Artificial Intelligence , 2016.Robert Malouf. A comparison of algorithms for maximum entropy parameter estimation. In pro-ceedings of the 6th conference on Natural language learning-Volume 20 , pp. 1–7. Association forComputational Linguistics, 2002.Shakir Mohamed and Balaji Lakshminarayanan. Learning in implicit generative models. arXivpreprint arXiv:1610.03483 , 2016.Steven J Phillips, Robert P Anderson, and Robert E Schapire. Maximum entropy modeling ofspecies geographic distributions. Ecological modelling , 190(3):231–259, 2006.Ben Poole, Subhaneil Lahiri, Maithreyi Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Ex-ponential expressivity in deep neural networks through transient chaos. In Advances In NeuralInformation Processing Systems , pp. 3360–3368, 2016.Javier Portilla and Eero P Simoncelli. A parametric texture model based on joint statistics of com-plex wavelet coefficients. International journal of computer vision , 40(1):49–70, 2000.Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, and Jascha Sohl-Dickstein. On the ex-pressive power of deep neural networks. 
arXiv preprint arXiv:1606.05336 , 2016.Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXivpreprint arXiv:1505.05770 , 2015.Ruslan Salakhutdinov, Sam Roweis, and Zoubin Ghahramani. On the convergence of bound op-timization algorithms. In Proceedings of the Nineteenth conference on Uncertainty in ArtificialIntelligence , pp. 509–516. Morgan Kaufmann Publishers Inc., 2002.Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, and Victor Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. arXiv preprint arXiv:1603.03417 , 2016.Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Improved texture networks: Maxi-mizing quality and diversity in feed-forward stylization and texture synthesis. arXiv preprintarXiv:1701.02096 , 2017.Paul Valiant and Gregory Valiant. Estimating the unseen: improved estimators for entropy and otherproperties. In Advances in Neural Information Processing Systems , pp. 2157–2165, 2013.Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701 ,2012.Song Chun Zhu, Yingnian Wu, and David Mumford. Filters, random fields and maximum entropy(frame): Towards a unified theory for texture modeling. International Journal of Computer Vision ,27(2):107–126, 1998.10Published as a conference paper at ICLR 2017A A UGMENTED LAGRANGIAN CONDITIONSWe give a more thorough discussion of the regularity conditions which ensure that the AugmentedLagrangian method will work. The goal of this section is simply to state these conditions and giveintuitive arguments about why some should hold in our case, not to attempt to prove that they indeedhold. The conditions (Bertsekas, 2014) are:There exists a strict local minimum of the optimization problem of Equation 5:If the function class Fis rich enough that it contains a true solver of the maximum entropyproblem, then a global optimum exists. Although not rigorous, we would expect that evenin the finite expressivity case that a global optimum remains, and indeed, recent theoreticalwork (Raghu et al., 2016; Poole et al., 2016) has gotten close to proving this.is a regular point of the optimization problem, that is, the rows of rR()are linearlyindependent:Again, this is not formal, but we should not expect this to cause any issues. This clearlydepends on the specific form of T, but the condition basically says that there should not beredundant constraints at the optimum, so if Tis reasonable this shouldn’t happen.H(p)andR()are twice continuously differentiable on a neighborhood around :This holds by the smoothness of the normalizing flows.y>r2L(;;0)y >0for everyy6= 0 such thatrR()y= 0, whereis the trueLagrange multiplier:This condition is harder to justify. It would appear it is just asking that the Lagrangian(not the augmented Lagrangian) be strictly convex in feasible directions, but it is actuallystronger than this and some simple functions might not satisfy the property. For example,if the function we are optimizing was x4and we had no constraints, the Lagrangian’sHessian would be 12x2, which is 0atx= 0thus not satisfying the condition. Importantly,these conditions are sufficient but not necessary, so even if this doesn’t hold the augmentedLagrangian method might work (it certainly would for x4). 
Because of this and the non-rigorous justifications of the first two conditions, we left these conditions for the appendixand relied instead on the empirical performance to justify that we are indeed recovering themaximum entropy distribution.If all of these conditions hold, the augmented Lagrangian (for large enough candclose enough to) has a unique optimum in a neighborhood around that is close to (as!it converges to) and its hessian at this optimum is positive-definite. Furthermore, k!. This implies that gra-dient descent (with the usual caveats of being started close enough to the solution and with the rightsteps) will correctly recover using the augmented Lagrangian method. This of course just guar-antees convergence to a local optimum, but if there are no additional assumptions such as convexity,it can be very hard to ensure that it is indeed a global optimum. Some recent research has attemptedto explain why optimization algorithms perform so well for neural networks (Choromanska et al.,2015; Kawaguchi, 2016), but we leave such attempts for our case for future research.B R ISK-NEUTRAL ASSET PRICEWe extract the risk-neutral asset price probability distribution based on option prices, an active andinteresting area for ME models. We give a brief introduction of the problem and refer interestedreaders to see Buchen & Kelly (1996) for a more detailed explanation. Denoting Stas the priceof an asset at time t, the buyer of a European call option for the stock that expires at time tewithstrike priceKwill receive a payoff of cK= (SteK)+= max(SteK;0)at timete. Underthe efficient market assumption, the risk-neutral probability distribution for the stock price at timetesatisfies:cK=D(te)Eq[(SteK)+]; (12)whereD(te)is the risk-free discount factor and qis the risk-neutral measure. We also have that,under the risk-neutral measure, the current stock price S0is the discounted expected value of Ste:S0=D(te)Eq(Ste): (13)11Published as a conference paper at ICLR 2017When given moptions that expire at time tewith strikesK1;:::;Kmand pricescK1;:::;cKm, we getmexpectation constraints on q(Ste)from Equation 12, together with Equation 13, we have m+ 1expectation constraints in total. With that partial knowledge we can approximate q(Ste), which ishelpful for understanding the market expected volatility and identify mispricing in option markets,etc.Inferring the risk-neutral density of asset price from a finite number of option prices is an importantquestion in finance and has been studied extensively (Buchen & Kelly, 1996; Borwein et al., 2003;Bondarenko, 2003; Figlewski, 2008). One popular method proposed by Buchen & Kelly (1996)estimates the probability density as the maximum entropy distribution satisfying the expectationconstraints and a positivity support constraint by fitting a Gibbs distribution, which results in apiece-wise linear log density:p(z)/exp(0z+mXi=1i(zKi)+)1(z0) (14)and optimize the distribution with numerical methods. Here we compare the performance of theMEFN algorithm with the method proposed in Buchen & Kelly (1996). To enforce the positivityconstraint we choose g(z) =eaz+b, whereaandbare additional parameters.We collect the closing price of European call options on Nov. 1 2016 for the stock AAPL (Appleinc.) that expires on te=Jun. 16 2017. We use m= 4of the options with highest trading volume astraining data and the rest as testing data. 
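Written as a constraint function T(z) for the MEFN objective, the m+1 conditions of Equations (12)-(13) can be sketched as below, using Monte Carlo samples of the terminal price; the discount factor, strikes, and prices are placeholders rather than the AAPL values used here:

```python
import numpy as np

def option_constraints(s_samples, strikes, call_prices, s0, discount):
    """Monte Carlo estimate of T(z) for the risk-neutral ME problem.

    s_samples:   (n,) draws of the terminal price S_te from the current model.
    strikes:     (m,) strike prices K_i, with observed call prices call_prices (m,).
    Returns an (m+1,) vector whose entries should all be ~0 at a feasible solution:
      discount * E[(S - K_i)^+] - c_{K_i}   (Eq. 12)  and  discount * E[S] - S0  (Eq. 13).
    """
    payoffs = np.maximum(s_samples[:, None] - strikes[None, :], 0.0)   # (n, m)
    call_viol = discount * payoffs.mean(axis=0) - call_prices
    fwd_viol = discount * s_samples.mean() - s0
    return np.concatenate([call_viol, [fwd_viol]])

# placeholder example (synthetic numbers, not the AAPL data from the paper)
rng = np.random.default_rng(0)
s = 110.0 * np.exp(0.2 * rng.standard_normal(10000) - 0.02)            # positive price draws
print(option_constraints(s, strikes=np.array([100.0, 110.0, 120.0, 130.0]),
                         call_prices=np.array([14.0, 8.0, 4.0, 2.0]),
                         s0=110.0, discount=0.99))
```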
On the left panel of figure 4, we show the fitted risk-neutraldensity ofSteby MEFN (red line) with that of the fitted Gibbs distribution result (blue line). Wefind that while the distributions share similar location and variability, the distribution inferred byMEFN is smoother and arguably more plausible. In the middle panel we show a Q-Q plot of thequantiles of the MEFN and Gibbs distributions. We can see that the quantile pairs match the identityclosely, which should happen if both methods recovered the exact same distribution. This highlightsthe effectiveness of MEFN. There does exist a small mismatch in the tails: the distribution inferredby MEFN has slightly heavier tails. This mismatch is difficult to interpret: given that both the Gibbsand MEFN distributions are fit with option price data (and given that one can observe at most onevalue from the distribution, namely the stock price at expiration), it is fundamentally unclear whichdistribution is superior, in the sense of better capturing the true ME distribution’s tails. On the rightpanel we show the fitted option price for the two fitted distributions (for each strike price, we canrecover the fitted option price by Equation 12). We noted that the fitted option price and strike pricelines for both methods are very similar (they are mostly indiscernible on the right panel of figure4). We also compare the fitted performance on the test data by computing the root mean squareerror for the fitted and test data. We observe that the predictive performances for both methods arecomparable.0 50 100 150 200Price (dollars)0.0000.0050.0100.0150.0200.0250.0300.035DensityGibbsMEFN0 50 100 150 200 250Gibbs Quantiles050100150200250300MEFN Quantilesidentity0 50 100 150Strike price (dollars)20020406080100120Option price (dollars)Gibbs, RMSE=2.43MEFN, RMSE=2.39Training dataTesting dataFigure 4: Constructing risk-neutral measure from observed option price. Left panel : fitted risk-neutral measure by Gibbs and MEFN method. Middle panel : Q-Q plot for the quantiles from thedistributions on the left panel. Right panel : observed and fitted option price for different strikes.We note that for this specific application, there are practical concerns such as the microstructurenoise in the data and inefficiency in the market, etc. Applying a pre-processing procedure and incor-porating prior assumptions can be helpful for getting a more full-fledged method (see e.g. Figlewski(2008)). Here we mainly focus on illustrating the ability of the MEFN method to approximate theME distribution for non-typical distributions. Future work for this application includes fitting a risk-neutral distribution for multi-dimensional assets by incorporating dependence structure on assets.12Published as a conference paper at ICLR 2017C M ODELING IMAGES OF TEXTURESWe tried our texture modeling approach with many different textures, and although MEFN samplesdon’t always exhibit more visual diversity than samples obtained from the texture network, theyalways have more entropy as in figure 3. Figure 5 shows two positive examples, i.e. textures inwhich samples from MEFN do exhibit higher visual diversity than those from the texture network, aswell as a negative example, in which MEFN achieves less visual diversity than the texture network,regardless of the fact that MEFN samples do have larger entropy. 
We hypothesize that this curiousbehavior is due to the optimization achieving a local optimum in which the brick boundaries anddark brick locations are not diverse but the entropy within each brick is large. It should also benoted that among the experiments that we ran, this was the only negative example that we got, andthat slightly modifying the hyperparameters caused the issue to disappear.Input(positive example)Input(positive example)Input(negative example)Texture net (Ulyanov et al. (2016), less sample diversity)MEFN (ours, more sample diversity)Figure 5: MEFN and texture network samples.13
ryrU9ztre
H1acq85gx
ICLR.cc/2017/conference/-/paper317/official/review
{"title": "Application of normalizing flows to max-ent", "rating": "6: Marginally above acceptance threshold", "review": "This paper applies the idea of normalizing flows (NFs), which allows us to build complex densities with tractable likelihoods, to maximum entropy constrained optimization.\n\nThe paper is clearly written and is easy to follow.\n\nNovelty is a weak factor in this paper. The main contributions come from (1) applying previous work on NFs to the problem of MaxEnt estimation and (2) addressing some of the optimization issues resulting from stochastic approximations to E[||T||] in combination with the annealing of Lagrange multipliers.\nApplying the NFs to MaxEnt is in itself not very novel as a framework. For instance, one could obtain a loss equivalent to the main loss in eq. (6) by minimizing the KLD between KL[p_{\\phi};f], where f is the unormalized likelihood f \\propto exp \\sum_k( - \\lambda_k T - c_k ||T_k||^2 ). This type of derivation is typical in all previous works using NFs for variational inference.\nA few experiments on more complex data would strengthen the paper's results. The two experiments provided show good results but both of them are toy problems.\n\n\nMinor point:\n\nAlthough intuitive, it would be good to have a short discussion of step 8 of algorithm 1 as well.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Maximum Entropy Flow Networks
["Gabriel Loaiza-Ganem *", "Yuanjun Gao *", "John P. Cunningham"]
Maximum entropy modeling is a flexible and popular framework for formulating statistical models given partial knowledge. In this paper, rather than the traditional method of optimizing over the continuous density directly, we learn a smooth and invertible transformation that maps a simple distribution to the desired maximum entropy distribution. Doing so is nontrivial in that the objective being maximized (entropy) is a function of the density itself. By exploiting recent developments in normalizing flow networks, we cast the maximum entropy problem into a finite-dimensional constrained optimization, and solve the problem by combining stochastic optimization with the augmented Lagrangian method. Simulation results demonstrate the effectiveness of our method, and applications to finance and computer vision show the flexibility and accuracy of using maximum entropy flow networks.
["flexible", "popular framework", "statistical models", "partial knowledge", "traditional", "continuous density", "smooth", "invertible transformation", "simple distribution"]
https://openreview.net/forum?id=H1acq85gx
https://openreview.net/pdf?id=H1acq85gx
https://openreview.net/forum?id=H1acq85gx&noteId=ryrU9ztre
Published as a conference paper at ICLR 2017MAXIMUM ENTROPY FLOW NETWORKSGabriel Loaiza-Ganem, Yuanjun Gao& John P. CunninghamDepartment of StatisticsColumbia UniversityNew York, NY 10027, USAfgl2480,yg2312,jpc2181 g@columbia.eduABSTRACTMaximum entropy modeling is a flexible and popular framework for formulat-ing statistical models given partial knowledge. In this paper, rather than the tra-ditional method of optimizing over the continuous density directly, we learn asmooth and invertible transformation that maps a simple distribution to the de-sired maximum entropy distribution. Doing so is nontrivial in that the objectivebeing maximized (entropy) is a function of the density itself. By exploiting recentdevelopments in normalizing flow networks, we cast the maximum entropy prob-lem into a finite-dimensional constrained optimization, and solve the problem bycombining stochastic optimization with the augmented Lagrangian method. Sim-ulation results demonstrate the effectiveness of our method, and applications tofinance and computer vision show the flexibility and accuracy of using maximumentropy flow networks.1 I NTRODUCTIONThe maximum entropy (ME) principle (Jaynes, 1957) states that subject to some given prior knowl-edge, typically some given list of moment constraints, the distribution that makes minimal additionalassumptions – and is therefore appropriate for a range of applications from hypothesis testing to priceforecasting to texture synthesis – is that which has the largest entropy of any distribution obeyingthose constraints. First introduced in statistical mechanics by Jaynes (1957), and considered bothcelebrated and controversial, ME has been extensively applied in areas including natural languageprocessing (Berger et al., 1996), ecology (Phillips et al., 2006), finance (Buchen & Kelly, 1996),computer vision (Zhu et al., 1998), and many more.Continuous ME modeling problems typically include certain expectation constraints, and are usuallysolved by introducing Lagrange multipliers, which under typical assumptions yields an exponentialfamily distribution (also called Gibbs distribution) with natural parameters such that the expectationconstraints are obeyed. Unfortunately, fitting ME distributions in even modest dimensions posessignificant challenges. First, optimizing the Lagrangian for a Gibbs distribution requires evaluatingthe normalizing constant, which is in general computationally very costly and error prone. Secondly,in all but the rarest cases, there is no way to draw samples independently and identically from thisGibbs distribution, even if one could derive it. Third, unlike in the discrete case where a number ofrecent and exciting works have addressed the problem of estimating entropy from discrete-valueddata (Jiao et al., 2015; Valiant & Valiant, 2013), estimating differential entropy from data samplesremains inefficient and typically biased. These shortcomings are critical and costly, given the com-mon use of ME distributions for generating reference data samples for a null distribution of a teststatistic. There is thus ample need for a method that can both solve the ME problem and produce asolution that is easy and fast to sample.In this paper we develop maximum entropy flow networks (MEFN), a stochastic-optimization-basedframework and algorithm for fitting continuous maximum entropy models. Two key steps are re-quired. 
First, conceptually, we replace the idea of maximizing entropy over a density directly withmaximizing, over the parameter space of an indexed function family, the entropy of the densityinduced by mapping a simple distribution (a Gaussian) through that optimized function. ModernThese authors contributed equally.1Published as a conference paper at ICLR 2017neural networks, particularly in variational inference (Kingma & Welling, 2013; Rezende & Mo-hamed, 2015), have successfully employed this same idea to generate complex distributions, andwe look to similar technologies. Secondly, unlike most other objectives in this network literature,the entropy objective itself requires evaluation of the target density directly, which is unavailablein most traditional architectures. We overcome this potential issue by learning a smooth, invertibletransformation that maps a simple distribution to an (approximate) ME distribution. Recent develop-ments in normalizing flows (Rezende & Mohamed, 2015; Dinh et al., 2016) allow us to avoid biasedand computationally inefficient estimators of differential entropy (such as the nearest-neighbor classof estimators like that of Kozachenko-Leonenko; see Berrett et al. (2016)). Our approach avoidscalculation of normalizing constants by learning a map with an easy-to-compute Jacobian, yieldingtractable probability density computation. The resulting transformation also allows us to reliablygenerate iid samples from the learned ME distribution. We demonstrate MEFN in detail in ex-amples where we can access ground truth, and then we demonstrate further the ability of MEFNnetworks in equity option prices fitting and texture synthesis.Primary contributions of this work include: (i)addressing the substantial need for methods to sampleME distributions; (ii)introducing ME problems, and the value of including entropy in a range ofgenerative modeling problems, to the deep learning community; (iii)the novel use of constrainedoptimization for a deep learning application; and (iv)the application of MEFN to option pricingand texture synthesis, where in the latter we show significant increase in the diversity of synthesizedtextures (over current state of the art) by using MEFN.2 B ACKGROUND2.1 M AXIMUM ENTROPY MODELING AND GIBBS DISTRIBUTIONWe consider a continuous random variable Z2Z Rdwith density p, wherephas differentialentropyH(p) =Rp(z) logp(z)dzand support supp(p). The goal of ME modeling is to find, andthen be able to easily sample from, the maximum entropy distribution given a set of moment andsupport constraints, namely the solution to:p=maximizeH(p) (1)subject toEZp[T(Z)] = 0supp(p) =Z;whereT(z) = (T1(z);:::;Tm(z)) :Z!Rmis the vector of known (assumed sufficient) statistics,andZis the given support of the distribution. Under standard regularity conditions, the optimizationproblem can be solved by Lagrange multipliers, yielding an exponential family pof the form:p(z)/e>T(z)1(z2Z) (2)where2Rmis the choice of natural parameters of psuch thatEp[T(Z)] = 0 . Despite thissimple form, these distributions are only in rare cases tractable from the standpoint of calculating, calculating the normalizing constant of p, and sampling from the resulting distribution. Thereis extensive literature on finding numerically (Darroch & Ratcliff, 1972; Salakhutdinov et al.,2002; Della Pietra et al., 1997; Dudik et al., 2004; Malouf, 2002; Collins et al., 2002), but doing sorequires computing normalizing constants, which poses a challenge even for problems with modestdimensions. 
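To make the cost just described concrete, the following is a minimal sketch (not from the paper; Python with NumPy and SciPy, illustrative statistics and targets, a crude grid quadrature) of the classical route on a one-dimensional toy problem: the Gibbs form p_eta(z) proportional to exp(eta . T(z)) on [0, 1] is fit by minimizing the log-partition function minus eta . b, whose gradient vanishes exactly when the moment constraints E[T(Z)] = b hold. Even in one dimension the normalizing constant has to be recomputed inside every objective evaluation; in more than a few dimensions that integral is precisely the bottleneck.

import numpy as np
from scipy.optimize import minimize

# Toy 1-D maximum entropy problem on [0, 1] with statistics T(z) = (z, z^2)
# and target moments b, so the ME solution is the Gibbs density
#   p_eta(z) = exp(eta . T(z)) / Z(eta),  Z(eta) = integral of exp(eta . T(z)) dz.
grid = np.linspace(0.0, 1.0, 2001)
T = np.stack([grid, grid**2], axis=1)       # (G, m) statistics evaluated on the grid
b = np.array([0.3, 0.15])                   # target moments E[T(Z)]

def log_partition(eta):
    # log Z(eta) by simple quadrature on the uniform grid (interval length 1)
    return np.log(np.mean(np.exp(T @ eta)))

def dual(eta):
    # Convex dual of the ME problem; its minimizer satisfies E_eta[T] = b.
    return log_partition(eta) - eta @ b

eta = minimize(dual, x0=np.zeros(2), method="Nelder-Mead").x
p = np.exp(T @ eta - log_partition(eta))    # fitted Gibbs density on the grid
print("eta:", eta, "achieved moments:", np.mean(T * p[:, None], axis=0))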
Also, even if is correctly found, it is still not trivial to sample from p. Problem-specific sampling methods (such as importance sampling, MCMC, etc.) have to be designed andused, which is in general challenging (burn-in, mixing time, etc.) and computationally burdensome.2.2 N ORMALIZING FLOWSFollowing Rezende & Mohamed (2015), we define a normalizing flow as the transformation ofa probability density through a sequence of invertible mappings. Normalizing flows provide anelegant way of generating a complicated distribution while maintaining tractable density evaluation.Starting with a simple distribution Z02Rdp0(usually taken to be a standard multivariate2Published as a conference paper at ICLR 2017Gaussian), and by applying kinvertible and smooth functions fi:Rd!Rd(i= 1;:::;k ), theresulting variable Zk=fkfk1f1(Z0)has density:pk(zk) =p0(f11f12f1k(zk))kYi=1jdet(Ji(zi1))j1; (3)whereJiis the Jacobian of fi. If the determinant of Jican be easily computed, pkcan be computedefficiently.Rezende & Mohamed (2015) proposed two specific families of transformations for variational in-ference, namely planar flows and radial flows, respectively:fi(z) =z+uih(wTiz+bi) andfi(z) =z+ih(i;ri)(zz0i); (4)wherebi2R,ui;wi2Rdandhis an activation function in the planar case, and where i2R,i>0,z0i2Rd,h(;r) = 1=(+r)andri=jjzz0ijjin the radial. Recently Dinhet al. (2016) proposed a normalizing flow with convolutional, multiscale structure that is suitable forimage modeling and has shown promise in density estimation for natural images.3 M AXIMUM ENTROPY FLOW NETWORK (MEFN) ALGORITHM3.1 F ORMULATIONInstead of solving Equation 2, we propose solving Equation 1 directly by optimizing a trans-formation that maps a random variable Z0, with simple distribution p0, to the ME distribution.Given a parametric family of normalizing flows F=ff;2Rqg, we denote p(z) =p0(f1(z))jdet(J(z))j1as the distribution of the variable f(Z0), whereJis the Jacobianoff. We then rewrite the ME problem as:=maximizeH(p) (5)subject toEZ0p0[T(f(Z0))] = 0supp(p) =Z:Whenp0is continuous andFis suitably general, the program in Equation 5 recovers the ME dis-tributionpexactly. With a flexible transformation family, the ME distribution can be well approx-imated. In experiments we found that taking p0to be a standard multivariate normal distributionachieves good empirical performance. Taking p0to be a bounded distribution (e.g. uniform distri-bution) is problematic for learning transformations near the boundary, and heavy tailed distributions(e.g. Cauchy distribution) caused similar trouble due to large numbers of outliers.3.2 A LGORITHMWe solved Equation 5 using the augmented Lagrangian method. Denote R() =E(T(f(Z0))),the augmented Lagrangian method uses the following objective:L(;;c) =H(p) +>R() +c2jjR()jj2(6)where2Rmis the Lagrange multiplier and c>0is the penalty coefficient. We minimize Equa-tion6for a non-decreasing sequence of cand well-chosen . As a technical note, the augmentedLagrangian method is guaranteed to converge under some regularity conditions (Bertsekas, 2014).As is usual in neural networks, a proof of these conditions is challenging and not yet available,though intuitive arguments (see Appendix xA) suggest that most of them should hold. Due to thenon rigorous nature of these arguments, we rely on the empirical results of the algorithm to claimthat it is indeed solving the optimization problem.For a fixed (;c)pair, we optimize Lwith stochastic gradient descent. 
Owing to our choice ofnetwork and the resulting ability to efficiently calculate the density p(z(i))for any sample point3Published as a conference paper at ICLR 2017Algorithm 1 Training the MEFN1:initialize=0, setc0>0and0.2:forAugmented Lagrangian iteration k= 1;:::;k maxdo3: forSGD iteration i= 1;:::;i maxdo4: Sample z(1);:::;z(n)p0, get transformed variables z(i)=f(z(i));i= 1;:::;n5: Updateby descending its stochastic gradient (using e.g. ADADELTA (Zeiler, 2012)):rL(;k;ck)1nnXi=1rlogp(z(i)) +1nnXi=1rT(z(i))k+ck2nn2Xi=1rT(z(i))2nnXi=n2+1T(z(i))6: end for7: Sample z(1);:::;z(~n)p0, get transformed variables z(i)=f(z(i));i= 1;:::;~n8: Updatek+1=k+ck1~nP~ni=1T(z(i))9: Updateck+1ck(see text for detail)10:end forz(i)(which are easy-to-sample iid draws from the multivariate normal p0), we compute the unbiasedestimator of H(p)with:H(p)1nnXi=1logp(f(z(i))) (7)R()can also be estimated without bias by taking a sample average of z(i)draws. The resultingoptimization procedure is detailed in Algorithm 1, of which step 9 requires some detail: denotingkas the resulting afterimax SGD iterations at the augmented Lagrangian iteration k, the usualupdate rule for c(Bertsekas, 2014) is:ck+1=ck, ifjjR(k+1)jj>jjR(k)jjck, otherwise(8)where2(0;1)and > 1. Monte Carlo estimation of R()sometimes caused cto be updatedtoo fast, causing numerical issues. Accordingly, we changed the hard update rule for cto a prob-abilistic update rule: a hypothesis test is carried out with null hypothesis H0:E[jjR(k+1)jj] =E[jjR(k)jj]and alternative hypothesis H1:E[jjR(k+1)jj]> E[jjR(k)jj]. Thep-valuepiscomputed, and ck+1is updated to ckwith probability 1p. We used a two-sample t-test to cal-culate thep-value. What results is a robust and novel algorithm for estimating maximum entropydistributions, while preserving the critical properties of being both easy to calculate densities ofparticular points, and being trivially able to produce truly iid samples.4 E XPERIMENTSWe first construct an ME problem with a known solution ( x4.1), and we analyze the MEFN algorithmwith respect to the ground truth and to an approximate Gibbs solution. These examples test thevalidity of our algorithm and illustrate its performance. xB andx4.3 then applies the MEFN to afinancial data application (predicting equity option values) and texture synthesis, respectively, toillustrate the flexibility and practicality of our algorithm.Forx4.1 andxB, We use 10 layers of planar flow with a final transformation g(specified below) thattransforms samples to the specified support, and use with ADADELTA (Zeiler, 2012). For x4.3 weuse real NVP structure and use ADAM (Kingma & Ba, 2014) with learning rate = 0:001. For all ourexperiments, we use imax= 3000 ,= 4,= 0:25. Forx4.1 andxB we usen= 300 ,~n= 1000 ,kmax= 10 ; Forx4.3 we usen= ~n= 2,kmax= 8.4.1 A MAXIMUM ENTROPY PROBLEM WITH KNOWN SOLUTIONFollowing the setup of the typical ME problem, suppose we are given a specified support S=fz=(z1;:::;zd1) :zi0andPd1k=1zk1gand a set of constraints E[logZk] =k(k= 1;:::;d ),4Published as a conference paper at ICLR 2017whereZd= 1Pd1k=1Zk. We then write the maximum entropy program:p=maximizeH(p) (9)subject toEZp[logZkk] = 08k= 1;:::;dsupp(p) =S:This is a general ME problem that can be solved via the MEFN. Of course, we have particularlychosen this example because, though it may not obviously appear so, the solution has a standard andtractable form, namely the Dirichlet. 
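Before turning to that Dirichlet ground-truth study, it may help to see the pieces Algorithm 1 combines written out numerically. The following is only a minimal sketch and not the authors' implementation (they train in TensorFlow with ADADELTA or ADAM, and real NVP for images): plain Python/NumPy, a stack of planar flows, the exact log-density from the change-of-variables formula of Equation 3, the unbiased entropy estimate of Equation 7, and the augmented Lagrangian objective of Equation 6 (written with the entropy entering negatively, since the objective is minimized). All names, the toy constraint T, and the random initialization are illustrative; gradients and the SGD loop are omitted and would come from an autodiff framework in practice, and the invertibility condition on the planar parameters is glossed over.

import numpy as np

def planar_forward_logdet(z, u, w, b):
    # f(z) = z + u * tanh(w . z + b);  log|det J| = log|1 + tanh'(w . z + b) * (u . w)|
    a = z @ w + b                                  # (n,)
    f = z + np.outer(np.tanh(a), u)                # (n, d)
    logdet = np.log(np.abs(1.0 + (1.0 - np.tanh(a) ** 2) * (u @ w)))
    return f, logdet

def flow_forward(z0, params):
    # Push samples z0 ~ p0 (standard normal) through the stack and track
    # log p_phi(f_phi(z0)) = log p0(z0) - sum_i log|det J_i|.
    d = z0.shape[1]
    logp = -0.5 * np.sum(z0 ** 2, axis=1) - 0.5 * d * np.log(2.0 * np.pi)
    z = z0
    for (u, w, b) in params:
        z, logdet = planar_forward_logdet(z, u, w, b)
        logp = logp - logdet
    return z, logp

def augmented_lagrangian(z0, params, T, lam, c):
    # Monte Carlo estimate of  -H(p_phi) + lam . R(phi) + (c / 2) ||R(phi)||^2.
    z, logp = flow_forward(z0, params)
    neg_entropy = np.mean(logp)                    # minus the estimator of Equation 7
    R = np.mean(T(z), axis=0)                      # constraint residual E[T(f_phi(Z0))]
    return neg_entropy + lam @ R + 0.5 * c * np.sum(R ** 2)

rng = np.random.default_rng(0)
d, n_layers = 2, 10
params = [(0.1 * rng.standard_normal(d), 0.1 * rng.standard_normal(d),
           0.1 * rng.standard_normal()) for _ in range(n_layers)]
z0 = rng.standard_normal((300, d))
T = lambda z: z                                    # toy constraint: E[Z] = 0
print(augmented_lagrangian(z0, params, T, lam=np.zeros(d), c=1.0))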
This choice allows us to consider a complicated optimizationprogram that happens to have known global optimum, providing a solid test bed for the MEFN (andfor the Gibbs approach against which we will compare). Specifically, given a parameter 2Rd,the Dirichlet has density:p(z1;:::;zd1) =1B()dYk=1zk1k1((z1;:::;zd1)2S) (10)whereB()is the multivariate Beta function, and zd= 1Pd1k=1zk. Note that this Dirichletis a distribution on Sand not on the (d1)-dimensional simplex Sd1=f(z1;:::;zd) :zk0andPdk=1zk= 1g(an often ignored and seemingly unimportant technicality that needs to becorrect here to ensure the proper transformation of measure). Connecting this familiar distribution tothe ME problem above, we simply have to choose such thatk= (k) (0)fork= 1;:::;d ,where0=Pdk=1kand is the digamma function. We then can pose the above ME problemto the MEFN and compare performance against ground truth. Before doing so, we must stipulatethe transformation gthat maps the Euclidean space of the multivariate normal p0to the desiredsupportS. Any sensible choice will work well (another point of flexibility for the MEFN); we usethe standard transformation:g(z1;:::;zd1) = ez1Pd1k=1ezk+ 1;:::;ezd1Pd1k=1ezk+ 1!>(11)Note that the MEFN outputs vectors in Rd1, and not Rd, because the Dirichlet is specified as adistribution onS(and not on the simplex Sd1). Accordingly, the Jacobian is a square matrix andits determinant can be computed efficiently using the matrix determinant lemma. Here, p0is set tothe(d1)-dimensional standard normal.We proceed as follows: We choose and compute the constraints 1;:::;d. We run MEFN pre-tending we do not know or the Dirichlet form. We then take a random sample from the fitteddistribution and a random sample from the Dirichlet with parameter , and compare the two sam-ples using the maximum mean discrepancy (MMD) kernel two sample test (Gretton et al., 2012),which assesses the fit quality. We take the sample size to be 300for the two sample kernel test.Figure 1 shows an example of the transformation from normal (left panel) to MEFN (middle panel),and comparing that to the ground truth Dirichlet (right panel). The MEFN and ground truth Dirichletdensities shown in purple match closely, and the samples drawn (red) indeed appear to be iid drawsfrom the same (maximum entropy) distribution in both cases.Additionally, the middle panel of Figure 1 shows an important cautionary tale that foreshadows ourtexture synthesis results ( x4.3). One might suppose that satisfying the moment matching constraintsis adequate to produce a distribution which, if not technically the ME distribution, is still inter-estingly variable. The middle panel shows the failure of this intuition: in dark green, we show anetwork trained to simply match the moments specified above, and the resulting distribution quitepoorly expresses the variability available to a distribution with these constraints, leading to samplesthat are needlessly similar. Given the substantial interest in using networks to learn implicit genera-tive models (e.g., Mohamed & Lakshminarayanan (2016)), this concern is particularly relevant andhighlights the importance of considering entropy.Figure 2 quantitatively analyzes these results. In the left panel, for a specific choice of = (1;2;3),we show our unbiased entropy estimate of the MEFN distribution pas a function of the numberof SGD iterations (red), along with the ground truth maximum entropy H(p)(green line). 
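As a small numerical companion to this setup (not from the paper; Python with NumPy and SciPy, illustrative names), the constraint targets kappa_k = psi(alpha_k) - psi(alpha_0) can be computed directly with the digamma function, checked by Monte Carlo against the known ground-truth Dirichlet, and the support map g of Equation 11 written out explicitly:

import numpy as np
from scipy.special import digamma

alpha = np.array([1.0, 2.0, 3.0])
kappa = digamma(alpha) - digamma(alpha.sum())   # targets: E[log Z_k] = kappa_k

# Monte Carlo check against the known ground-truth Dirichlet.
rng = np.random.default_rng(0)
z = rng.dirichlet(alpha, size=200000)           # (n, d) points on the simplex
print("targets:", kappa, "Monte Carlo:", np.log(z).mean(axis=0))

def g(x):
    # Map of Equation 11 from R^{d-1} to the set S (the first d-1 coordinates);
    # the implied last coordinate is 1 minus the sum of the others.
    e = np.exp(x)
    return e / (e.sum(axis=-1, keepdims=True) + 1.0)

x = rng.standard_normal((5, len(alpha) - 1))
s = g(x)
print("points in S:", s, "implied last coordinate:", 1.0 - s.sum(axis=-1))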
Figure 1: Example results from the ME problem with known Dirichlet ground truth. (Panel titles: Initial distribution p0; MEFN result; Ground truth.) Left panel: The normal density p0 (purple) and iid samples from p0 (red points). Middle panel: The MEFN transforms p0 to the desired maximum entropy distribution p* on the simplex (calculated density p_phi in purple). Truly iid samples are easily drawn from p_phi (red points) by drawing from p0 and mapping those points through f_phi. Shown in the middle panel are the same points in the top left panel mapped through f_phi. Samples corresponding to training the same network as MEFN to simply match the specified moments (ignoring entropy) are also shown (dark green points; see text). Right panel: The ground truth (in this example, known to be Dirichlet) distribution in purple, and iid samples from it in red.

Figure 2: Quantitative analysis of simulation results. See text for description. (Left panel: estimated vs. true entropy across SGD iterations. Middle panel: null distribution of MMD^2_u, with the observed statistics for MEFN (KL = 0.0088) and a Dirichlet (KL = 0.10). Right panel: MMD^2_u p-value against KL for the MEFN and nearby Dirichlets.)

Note that the MEFN stabilizes at the correct value (as a stochastic estimator, variance around that value is expected). In the middle panel, we show the distribution of MMD values for the kernel two sample test, as well as the observed statistic for the MEFN (red) and for a randomly chosen Dirichlet distribution (gray; chosen to be close to the true optimum, making a conservative comparison). The MMD test does not reject MEFN as being different from the true ME distribution p*, but it does reject a Dirichlet whose KL to the true p* is small (see legend). In the right panel, for many different Dirichlets in a small grid around a single true p*, the kernel two sample test statistic is computed, the MMD p-value is calculated, as is the KL to the true distribution. We plot a scatter of these points in grey, and we plot the particular MEFN solution as a red star. We see that for other Dirichlets with similar KL to the true distribution as the MEFN distribution, the p-values seem uniform, meaning that the KL to the true p* is indeed very small. Again this is conservative, as the grey points have access to the known Dirichlet form, whereas the MEFN considered the entire space (within its network capacity) of S-supported distributions. Given this fact, the performance of MEFN is impressive.

4.2 RISK-NEUTRAL ASSET PRICING

We illustrate the flexibility and practicality of our algorithm extracting the risk-neutral asset price probability based on option prices, an active and interesting area for ME models. We find that MEFN and the classic Gibbs approach yield comparable performances. Owing to space limitations we have placed these results in Appendix B.

4.3 MODELING IMAGES OF TEXTURES

Constructing generative models to generate random images with certain texture structure is an important task in computer vision. A line of texture synthesis research proceeds by first extracting a set of features that characterizes the target texture and then generate images that match the features. The seminal work of Zhu et al. (1998) proposes constructing texture models under the ME framework, where features (or filters) of the given texture image are adaptively added in the model and a Gibbs distribution whose expected feature matches the target texture is learnt. One major difficulty with the method is that both model learning and image generation involve sampling from a complicated Gibbs distribution. More recent works exploit more complicated features (Portilla & Simoncelli, 2000; Gatys et al., 2015; Ulyanov et al., 2016). Ulyanov et al. (2016) propose the texture net, which uses a texture loss function by using the Gram matrices of the outputs of some convolutional layers of a pre-trained deep neural network for object recognition.

While the use of these complicated features does provide high-quality synthetic texture images, that work focuses exclusively on generating images that match these feature (moments). Importantly, this network focuses only on generating feature-matching images without using the ME framework to promote the diversity of the samples. Doing so can be deeply problematic: in Figure 1 (middle panel), we showed the lack of diversity resulting from only moment matching in that Dirichlet setting, and further we note that the extreme pathology would result in a point mass on the training image – a global optimum for this objective, but obviously a terrible generative model for synthesizing textures.
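Stepping back to the kernel two-sample test used in the simulation study above (Gretton et al., 2012), the following is a minimal sketch (not from the paper; Python/NumPy, illustrative names) of the unbiased MMD^2_u statistic with an RBF kernel and a median-heuristic bandwidth, plus a simple permutation calibration; the paper does not state its calibration details, so the permutation p-value here is only one common choice.

import numpy as np

def mmd2_unbiased(x, y, bandwidth=None):
    # Unbiased squared MMD between samples x (m, d) and y (n, d) with the RBF
    # kernel k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2)).
    def sqdists(a, b):
        return np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
    if bandwidth is None:                       # median heuristic on the pooled sample
        d2 = sqdists(np.vstack([x, y]), np.vstack([x, y]))
        bandwidth = np.sqrt(np.median(d2[d2 > 0]) / 2.0)
    k = lambda a, b: np.exp(-sqdists(a, b) / (2.0 * bandwidth**2))
    m, n = len(x), len(y)
    kxx, kyy, kxy = k(x, x), k(y, y), k(x, y)
    return ((kxx.sum() - np.trace(kxx)) / (m * (m - 1))
            + (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
            - 2.0 * kxy.mean())

def permutation_pvalue(x, y, n_perm=500, seed=0):
    # Calibrate the test by recomputing the statistic on permutations of the pool.
    rng = np.random.default_rng(seed)
    observed = mmd2_unbiased(x, y)
    z, m = np.vstack([x, y]), len(x)
    null = []
    for _ in range(n_perm):
        idx = rng.permutation(len(z))
        null.append(mmd2_unbiased(z[idx[:m]], z[idx[m:]]))
    return float(np.mean(np.array(null) >= observed))

rng = np.random.default_rng(1)
x = rng.dirichlet([1.0, 2.0, 3.0], size=300)
y = rng.dirichlet([1.0, 2.0, 3.0], size=300)
print(mmd2_unbiased(x, y), permutation_pvalue(x, y, n_perm=200))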
Ideally, the MEFN will match the moments andpromote sample diversity.We applied MEFN to texture synthesis with an RGB representation of the 224224pixel images,z2 Z = [0;1]d, whered= 2242243. We follow Ulyanov et al. (2016) (we adaptedhttps://github.com/ProofByConstruction/texture-networks ) to create a tex-ture loss measure T(z) : [0;1]d!R, and aim to sample a diverse set of images with small momentviolation. For the transformation family Fwe use the real NVP network structure proposed in Dinhet al. (2016) (we adapted https://github.com/taesung89/real-nvp ). We use 3resid-ual blocks with 32feature maps for each coupling layer and downscale 3times. For fair comparison,we use the same real NVP structure for both1, implemented in TensorFlow (Abadi et al., 2016).As is shown in top row of figure 3, both methods generate visually pleasing images capturing thetexture structure well. The bottom row of Figure 3 shows that texture cost (left panel) is similarfor both methods, while MEFN generates figures with much larger entropy than the texture networkformulation (middle panel), which is desirable (as previously discussed). The bottom right panelof figure 3 compares the marginal distribution of the RGB values sampled from the networks: wefound that MEFN generates a more variable distribution of RGB values than the texture net. Furtherresults are in Appendix xC.Input Texture net (Ulyanov et al., 2016) MEFN (ours)Texture cost Entropy RGB histogram05000 10000 15000 20000 25000Iteration1061071081091010Texture costTexture netsMEFN05000 10000 15000 20000 25000Iteration104105106Negative Entropy0.0 0.2 0.4 0.6 0.8 1.0RGB value0.00.51.01.52.02.5DensityFigure 3: Analysis of texture synthesis experiment. See text for description.1Ulyanov et al. (2016) use a quite different generative network structure, which is not invertible and istherefore infeasible for entropy evaluation, so we replace their generative network by the real NVP structure.7Published as a conference paper at ICLR 2017We compute in Table 1 the average pairwise Euclidean distance between randomly sampled images(dL2=meani6=jkzizjk22), and MEFN gives higher dL2, quantifying diversity across images. Wealso consider an ANOV A-style analysis to measure the diversity of the images, where we think ofthe RGB values for the same pixel across multiple images as a group, and compute the within andbetween group variance. Specifically, denoting zkias the pixel value for a specific pixel k= 1;:::;dfor an image i= 1;::::;n . We partition the total sum of square SST =Pi;k(zkiz)2as the withingroup error SSW =Pi;k(zkizk)2and between group error SSB =Pi;kn(zkz)2, wherezandzkare the mean pixel values across all data and for a specific pixel k. Ideally we want thesamples to exhibit large variability across images (large SSW, within a group/pixel) and no structurein the mean image (small SSB, across groups/pixels). Indeed, the MEFN has a larger SSW, implyinghigher variability around the mean image, a smaller SSB, implying the stationarity of the generatedsamples, and a larger SST, implying larger total variability also. The MEFN produces images thatare conclusively more variable without sacrificing the quality of the texture, implicating the broadutility of ME.Table 1: Quantitative measure of image diversity using 20randomly sampled imagesMethod dL2 SST SSW SSBTexture net 11534 128680 109577 19103MEFN 17014 175604 161639 139645 C ONCLUSIONIn this paper we propose a general framework for fitting ME models. This approach is novel andhas three key features. 
First, by learning a transformation of a simple distribution rather than thedistribution itself, we are able to avoid explicitly computing an intractable normalizing constant forthe ME distribution. Second, by combining stochastic optimization with the augmented Lagrangianmethod, we can fit the model efficiently, allowing us to evaluate the ME density of any point simplyand accurately. Third, critically, this construction allows us to trivially sample iid from a ME dis-tribution, extending the utility and efficiency of the ME framework more generally. Also, accuracyequivalent to the classic Gibbs approach is in itself a contribution (owing to these other features).We illustrate the MEFN in both a simulated case with known ground truth and real data examples.There are a few recent works encouraging sample diversity in the setting of texture model-ing. Ulyanov et al. (2017) extended Ulyanov et al. (2016) by adding a penalty term using theKozachenko-Leonenko estimator Kozachenko & Leonenko (1987) of entropy. Their generative net-work is an arbitrary deep neural network rather than a normalizing flow, which is more flexible butcannot give the probability density of each sample easily so as to compute an unbiased estimatorof the entropy. Kozachenko-Leonenko is a biased estimator for entropy and requires a fairly largenumber of samples to get good performance in high-dimensional settings, hindering the scalabilityand accuracy of the method; indeed, our choice of normalizing flow networks was driven by thesepractical issues with Kozachenko-Leonenko. Lu et al. (2016) extended Zhu et al. (1998) by usinga more flexible set of filters derived from a pre-trained deep neural networks, and using parallelMCMC chains to learn and sample from the Gibbs distribution. Running parallel MCMC chains re-sults in diverse samples but can be computationally intensive for generating each new sample image.Our MEFN framework enables truly iid sampling with the ease of a feed forward network.ACKNOWLEDGMENTSWe thank Evan Archer for normalizing flow code, and Xuexin Wei, Christian Andersson Naessethand Scott Linderman for helpful discussion. This work was supported by a Sloan Fellowship and aMcKnight Fellowship (JPC).8Published as a conference paper at ICLR 2017REFERENCESMartın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg SCorrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machinelearning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467 , 2016.Adam L Berger, Vincent J Della Pietra, and Stephen A Della Pietra. A maximum entropy approachto natural language processing. Computational linguistics , 22(1):39–71, 1996.Thomas B Berrett, Richard J Samworth, and Ming Yuan. Efficient multivariate entropy estimationviak-nearest neighbour distances. arXiv preprint arXiv:1606.00304 , 2016.Dimitri P Bertsekas. Constrained optimization and Lagrange multiplier methods . Academic press,2014.Oleg Bondarenko. Estimation of risk-neutral densities using positive convolution approximation.Journal of Econometrics , 116(1):85–112, 2003.Jonathan Borwein, Rustum Choksi, and Pierre Mar ́echal. Probability distributions of assets inferredfrom option prices via the principle of maximum entropy. SIAM Journal on Optimization , 14(2):464–478, 2003.Peter W Buchen and Michael Kelly. The maximum entropy distribution of an asset inferred fromoption prices. 
Journal of Financial and Quantitative Analysis , 31(01):143–159, 1996.Anna Choromanska, Mikael Henaff, Michael Mathieu, G ́erard Ben Arous, and Yann LeCun. Theloss surfaces of multilayer networks. In AISTATS , 2015.Michael Collins, Robert E Schapire, and Yoram Singer. Logistic regression, adaboost and bregmandistances. Machine Learning , 48(1-3):253–285, 2002.John N Darroch and Douglas Ratcliff. Generalized iterative scaling for log-linear models. Theannals of mathematical statistics , pp. 1470–1480, 1972.Stephen Della Pietra, Vincent Della Pietra, and John Lafferty. Inducing features of random fields.IEEE transactions on pattern analysis and machine intelligence , 19(4):380–393, 1997.Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. arXivpreprint arXiv:1605.08803 , 2016.Miroslav Dudik, Steven J Phillips, and Robert E Schapire. Performance guarantees for regularizedmaximum entropy density estimation. In International Conference on Computational LearningTheory , pp. 472–486. Springer, 2004.Stephen Figlewski. Estimating the implied risk neutral density. 2008.Leon Gatys, Alexander S Ecker, and Matthias Bethge. Texture synthesis using convolutional neuralnetworks. In Advances in Neural Information Processing Systems , pp. 262–270, 2015.Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Sch ̈olkopf, and Alexander Smola.A kernel two-sample test. Journal of Machine Learning Research , 13(Mar):723–773, 2012.Edwin T Jaynes. Information theory and statistical mechanics. Physical review , 106(4):620, 1957.Jiantao Jiao, Kartik Venkat, Yanjun Han, and Tsachy Weissman. Minimax estimation of functionalsof discrete distributions. IEEE Transactions on Information Theory , 61(5):2835–2885, 2015.Kenji Kawaguchi. Deep learning without poor local minima. In Advances In Neural InformationProcessing Systems , pp. 586–594, 2016.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980 , 2014.Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprintarXiv:1312.6114 , 2013.9Published as a conference paper at ICLR 2017LF Kozachenko and Nikolai N Leonenko. Sample estimate of the entropy of a random vector.Problemy Peredachi Informatsii , 23(2):9–16, 1987.Yang Lu, Song-chun Zhu, and Ying Nian Wu. Learning frame models using cnn filters. In ThirtiethAAAI Conference on Artificial Intelligence , 2016.Robert Malouf. A comparison of algorithms for maximum entropy parameter estimation. In pro-ceedings of the 6th conference on Natural language learning-Volume 20 , pp. 1–7. Association forComputational Linguistics, 2002.Shakir Mohamed and Balaji Lakshminarayanan. Learning in implicit generative models. arXivpreprint arXiv:1610.03483 , 2016.Steven J Phillips, Robert P Anderson, and Robert E Schapire. Maximum entropy modeling ofspecies geographic distributions. Ecological modelling , 190(3):231–259, 2006.Ben Poole, Subhaneil Lahiri, Maithreyi Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Ex-ponential expressivity in deep neural networks through transient chaos. In Advances In NeuralInformation Processing Systems , pp. 3360–3368, 2016.Javier Portilla and Eero P Simoncelli. A parametric texture model based on joint statistics of com-plex wavelet coefficients. International journal of computer vision , 40(1):49–70, 2000.Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, and Jascha Sohl-Dickstein. On the ex-pressive power of deep neural networks. 
arXiv preprint arXiv:1606.05336 , 2016.Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXivpreprint arXiv:1505.05770 , 2015.Ruslan Salakhutdinov, Sam Roweis, and Zoubin Ghahramani. On the convergence of bound op-timization algorithms. In Proceedings of the Nineteenth conference on Uncertainty in ArtificialIntelligence , pp. 509–516. Morgan Kaufmann Publishers Inc., 2002.Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, and Victor Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. arXiv preprint arXiv:1603.03417 , 2016.Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Improved texture networks: Maxi-mizing quality and diversity in feed-forward stylization and texture synthesis. arXiv preprintarXiv:1701.02096 , 2017.Paul Valiant and Gregory Valiant. Estimating the unseen: improved estimators for entropy and otherproperties. In Advances in Neural Information Processing Systems , pp. 2157–2165, 2013.Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701 ,2012.Song Chun Zhu, Yingnian Wu, and David Mumford. Filters, random fields and maximum entropy(frame): Towards a unified theory for texture modeling. International Journal of Computer Vision ,27(2):107–126, 1998.10Published as a conference paper at ICLR 2017A A UGMENTED LAGRANGIAN CONDITIONSWe give a more thorough discussion of the regularity conditions which ensure that the AugmentedLagrangian method will work. The goal of this section is simply to state these conditions and giveintuitive arguments about why some should hold in our case, not to attempt to prove that they indeedhold. The conditions (Bertsekas, 2014) are:There exists a strict local minimum of the optimization problem of Equation 5:If the function class Fis rich enough that it contains a true solver of the maximum entropyproblem, then a global optimum exists. Although not rigorous, we would expect that evenin the finite expressivity case that a global optimum remains, and indeed, recent theoreticalwork (Raghu et al., 2016; Poole et al., 2016) has gotten close to proving this.is a regular point of the optimization problem, that is, the rows of rR()are linearlyindependent:Again, this is not formal, but we should not expect this to cause any issues. This clearlydepends on the specific form of T, but the condition basically says that there should not beredundant constraints at the optimum, so if Tis reasonable this shouldn’t happen.H(p)andR()are twice continuously differentiable on a neighborhood around :This holds by the smoothness of the normalizing flows.y>r2L(;;0)y >0for everyy6= 0 such thatrR()y= 0, whereis the trueLagrange multiplier:This condition is harder to justify. It would appear it is just asking that the Lagrangian(not the augmented Lagrangian) be strictly convex in feasible directions, but it is actuallystronger than this and some simple functions might not satisfy the property. For example,if the function we are optimizing was x4and we had no constraints, the Lagrangian’sHessian would be 12x2, which is 0atx= 0thus not satisfying the condition. Importantly,these conditions are sufficient but not necessary, so even if this doesn’t hold the augmentedLagrangian method might work (it certainly would for x4). 
Because of this and the non-rigorous justifications of the first two conditions, we left these conditions for the appendixand relied instead on the empirical performance to justify that we are indeed recovering themaximum entropy distribution.If all of these conditions hold, the augmented Lagrangian (for large enough candclose enough to) has a unique optimum in a neighborhood around that is close to (as!it converges to) and its hessian at this optimum is positive-definite. Furthermore, k!. This implies that gra-dient descent (with the usual caveats of being started close enough to the solution and with the rightsteps) will correctly recover using the augmented Lagrangian method. This of course just guar-antees convergence to a local optimum, but if there are no additional assumptions such as convexity,it can be very hard to ensure that it is indeed a global optimum. Some recent research has attemptedto explain why optimization algorithms perform so well for neural networks (Choromanska et al.,2015; Kawaguchi, 2016), but we leave such attempts for our case for future research.B R ISK-NEUTRAL ASSET PRICEWe extract the risk-neutral asset price probability distribution based on option prices, an active andinteresting area for ME models. We give a brief introduction of the problem and refer interestedreaders to see Buchen & Kelly (1996) for a more detailed explanation. Denoting Stas the priceof an asset at time t, the buyer of a European call option for the stock that expires at time tewithstrike priceKwill receive a payoff of cK= (SteK)+= max(SteK;0)at timete. Underthe efficient market assumption, the risk-neutral probability distribution for the stock price at timetesatisfies:cK=D(te)Eq[(SteK)+]; (12)whereD(te)is the risk-free discount factor and qis the risk-neutral measure. We also have that,under the risk-neutral measure, the current stock price S0is the discounted expected value of Ste:S0=D(te)Eq(Ste): (13)11Published as a conference paper at ICLR 2017When given moptions that expire at time tewith strikesK1;:::;Kmand pricescK1;:::;cKm, we getmexpectation constraints on q(Ste)from Equation 12, together with Equation 13, we have m+ 1expectation constraints in total. With that partial knowledge we can approximate q(Ste), which ishelpful for understanding the market expected volatility and identify mispricing in option markets,etc.Inferring the risk-neutral density of asset price from a finite number of option prices is an importantquestion in finance and has been studied extensively (Buchen & Kelly, 1996; Borwein et al., 2003;Bondarenko, 2003; Figlewski, 2008). One popular method proposed by Buchen & Kelly (1996)estimates the probability density as the maximum entropy distribution satisfying the expectationconstraints and a positivity support constraint by fitting a Gibbs distribution, which results in apiece-wise linear log density:p(z)/exp(0z+mXi=1i(zKi)+)1(z0) (14)and optimize the distribution with numerical methods. Here we compare the performance of theMEFN algorithm with the method proposed in Buchen & Kelly (1996). To enforce the positivityconstraint we choose g(z) =eaz+b, whereaandbare additional parameters.We collect the closing price of European call options on Nov. 1 2016 for the stock AAPL (Appleinc.) that expires on te=Jun. 16 2017. We use m= 4of the options with highest trading volume astraining data and the rest as testing data. 
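To connect Equations 12 and 13 to the constraint vector T used by the ME machinery, the following is a minimal sketch (not from the paper; Python/NumPy, illustrative names, and made-up discount factor, spot, strikes and call prices rather than the AAPL data used in the experiment) of the m + 1 expectation constraints as functions of sampled terminal prices, together with the Monte Carlo fitted call prices used for the held-out RMSE comparison.

import numpy as np

# Made-up illustrative inputs (the experiment uses observed AAPL option prices).
D = 0.995                                       # risk-free discount factor D(t_e)
S0 = 110.0                                      # current stock price
strikes = np.array([95.0, 105.0, 115.0, 125.0])
call_prices = np.array([17.0, 9.5, 4.4, 1.6])

def T(s):
    # Constraint statistics at sampled terminal prices s of shape (n,). Column
    # means give the residual R(phi), which should be ~0 under the fitted
    # risk-neutral density:
    #   D * E[(S - K_i)^+] - c_{K_i} = 0   (Equation 12, one column per strike)
    #   D * E[S] - S0 = 0                  (Equation 13)
    calls = D * np.maximum(s[:, None] - strikes[None, :], 0.0) - call_prices[None, :]
    forward = (D * s - S0)[:, None]
    return np.concatenate([calls, forward], axis=1)

def fitted_call_prices(s, K):
    # Monte Carlo fitted prices c_hat_K = D * E[(S - K)^+], as used for the RMSE
    # comparison against held-out strikes.
    return D * np.maximum(s[:, None] - np.asarray(K)[None, :], 0.0).mean(axis=0)

# Example with an arbitrary lognormal sample whose mean satisfies Equation 13.
rng = np.random.default_rng(0)
s = (S0 / D) * np.exp(rng.normal(-0.02, 0.2, size=100000))
print("constraint residuals R:", T(s).mean(axis=0))
print("fitted call prices:", fitted_call_prices(s, strikes),
      "RMSE:", np.sqrt(np.mean((fitted_call_prices(s, strikes) - call_prices) ** 2)))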
On the left panel of figure 4, we show the fitted risk-neutral density of S_{t_e} by MEFN (red line) with that of the fitted Gibbs distribution result (blue line). We find that while the distributions share similar location and variability, the distribution inferred by MEFN is smoother and arguably more plausible. In the middle panel we show a Q-Q plot of the quantiles of the MEFN and Gibbs distributions. We can see that the quantile pairs match the identity closely, which should happen if both methods recovered the exact same distribution. This highlights the effectiveness of MEFN. There does exist a small mismatch in the tails: the distribution inferred by MEFN has slightly heavier tails. This mismatch is difficult to interpret: given that both the Gibbs and MEFN distributions are fit with option price data (and given that one can observe at most one value from the distribution, namely the stock price at expiration), it is fundamentally unclear which distribution is superior, in the sense of better capturing the true ME distribution's tails. On the right panel we show the fitted option price for the two fitted distributions (for each strike price, we can recover the fitted option price by Equation 12). We noted that the fitted option price and strike price lines for both methods are very similar (they are mostly indiscernible on the right panel of figure 4). We also compare the fitted performance on the test data by computing the root mean square error for the fitted and test data. We observe that the predictive performances for both methods are comparable.

Figure 4: Constructing risk-neutral measure from observed option price. Left panel: fitted risk-neutral measure (density against price in dollars) by Gibbs and MEFN method. Middle panel: Q-Q plot for the quantiles from the distributions on the left panel (identity line shown). Right panel: observed (training and testing data) and fitted option price against strike price for different strikes; Gibbs RMSE = 2.43, MEFN RMSE = 2.39.

We note that for this specific application, there are practical concerns such as the microstructure noise in the data and inefficiency in the market, etc. Applying a pre-processing procedure and incorporating prior assumptions can be helpful for getting a more full-fledged method (see e.g. Figlewski (2008)). Here we mainly focus on illustrating the ability of the MEFN method to approximate the ME distribution for non-typical distributions. Future work for this application includes fitting a risk-neutral distribution for multi-dimensional assets by incorporating dependence structure on assets.

C MODELING IMAGES OF TEXTURES

We tried our texture modeling approach with many different textures, and although MEFN samples don't always exhibit more visual diversity than samples obtained from the texture network, they always have more entropy as in figure 3. Figure 5 shows two positive examples, i.e. textures in which samples from MEFN do exhibit higher visual diversity than those from the texture network, as well as a negative example, in which MEFN achieves less visual diversity than the texture network, regardless of the fact that MEFN samples do have larger entropy. We hypothesize that this curious behavior is due to the optimization achieving a local optimum in which the brick boundaries and dark brick locations are not diverse but the entropy within each brick is large. It should also be noted that among the experiments that we ran, this was the only negative example that we got, and that slightly modifying the hyperparameters caused the issue to disappear.

Figure 5: MEFN and texture network samples. Panels show the input textures (two positive examples and one negative example), samples from the texture net (Ulyanov et al. (2016), less sample diversity), and samples from MEFN (ours, more sample diversity).
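The sample-diversity numbers reported in Table 1 (the mean pairwise distance dL2 and the ANOVA-style SST/SSW/SSB decomposition) are simple to reproduce for any batch of generated images; the following is a small sketch (not the paper's code; Python/NumPy, illustrative names) for vectorized images in [0, 1].

import numpy as np

def diversity_metrics(z):
    # z: (n, d) batch of vectorized images.
    # dL2: mean pairwise squared Euclidean distance between distinct images.
    # SST: total sum of squares around the grand mean; SSW: variability around
    # the mean image (larger is more diverse); SSB: structure in the mean image
    # (smaller suggests stationarity). SST = SSW + SSB.
    n, d = z.shape
    sq = np.sum(z**2, axis=1)
    pair = sq[:, None] + sq[None, :] - 2.0 * z @ z.T
    dL2 = pair[~np.eye(n, dtype=bool)].mean()
    grand = z.mean()
    pixel_mean = z.mean(axis=0)                 # the "mean image"
    SST = np.sum((z - grand) ** 2)
    SSW = np.sum((z - pixel_mean) ** 2)
    SSB = n * np.sum((pixel_mean - grand) ** 2)
    return dL2, SST, SSW, SSB

rng = np.random.default_rng(0)
z = rng.uniform(size=(20, 224 * 224 * 3)).astype(np.float32)
print(diversity_metrics(z))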
ryivmuWNg
H1acq85gx
ICLR.cc/2017/conference/-/paper317/official/review
{"title": "Flexible Maximum Entropy Models", "rating": "9: Top 15% of accepted papers, strong accept", "review": "Much existing deep learning literature focuses on likelihood based models. However maximum entropy approaches are an equally valid modelling scenario, where information is given in terms of constraints rather than data. That there is limited work in flexible maximum entropy neural models is surprising, but is due to the fact that optimizing a maximum entropy model requires (a) establishing the effect of the constraints on some distribution, and (b) formulating the entropy of that complex distribution. There is no unbiased estimator of entropy from samples alone, and so an explicit model for the density is needed. This challenge limits approaches. The authors have identified that invertible neural models provide a powerful class of models for solving the maximum entropy network problem, and this paper goes on to establish this approach. The contributions of this paper are (a) recognising that, because normalising flows provide an explicit model for the density, they can be used to provide unbiased estimators for the entropy (b) that the resulting Lagrangian can be implemented as a relaxation of an augmented Lagrangian (c) establishing the practical issues in doing the augmented Lagrangian optimization. As far as the reviewer is aware this work is novel \u2013 this approach is natural and sensible, and is demonstrated on a number of models where clear evaluation can be done. Enough experiments have been done to establish this is an appropriate method, though not that it is entirely necessary \u2013 it would be good to have an example where the benefits of the flexible flow transformation were much clearer. Further discussion of the computational and scaling aspects would be valuable. I am guessing this approach is probably appropriate for model learning, but less appropriate for inferential settings where a known model is then conditioned on particular instance based constraints? Some discussion of appropriate use cases would be good. The issue of match to the theory via the regularity conditions has been brought up, but it is clear that this can be described well, and exceeds most of the theoretical discussions that occur regarding the numerical methods in other papers in this field.\n\nQuality: Good sound paper providing a novel basis for flexible maximum entropy models.\nClarity: Good.\nOriginality: Refreshing.\nSignificance: Significant in model development terms. Whether it will be an oft-used method is not clear at this stage.\n\nMinor issues\n\nPlease label all equations. Others might wish to refer to them even if you don\u2019t.\nTop of page 4: algorithm 1 \u2192 Algorithm 1.\nThe update for c to overcome stability appears slightly opaque and is mildly worrying. I assume there are still residual stability issues? Can you comment on why this solves all the problems?\nThe issue of the support of p is glossed over a little. Is the support in 5 an additional condition on the support of p? If so, that seems hard to encode, and indeed does not turn up in (6). I guess for a Gaussian p0 and invertible unbounded transformations, if the support happens to be R^d, then this is trivial, but for more general settings this seems to be an issue that you have not dealt with? Indeed in your Dirichlet example, you explicitly map to the required support, but for more complex constraints this may be non-trivial to do with invertible models with known Jacobian? It would be nice to include this in the more general treatment rather than just relegating it to the specific example.\n\nOverall I am very pleased to see someone tackling this question with a very natural approach.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Maximum Entropy Flow Networks
["Gabriel Loaiza-Ganem *", "Yuanjun Gao *", "John P. Cunningham"]
Maximum entropy modeling is a flexible and popular framework for formulating statistical models given partial knowledge. In this paper, rather than the traditional method of optimizing over the continuous density directly, we learn a smooth and invertible transformation that maps a simple distribution to the desired maximum entropy distribution. Doing so is nontrivial in that the objective being maximized (entropy) is a function of the density itself. By exploiting recent developments in normalizing flow networks, we cast the maximum entropy problem into a finite-dimensional constrained optimization, and solve the problem by combining stochastic optimization with the augmented Lagrangian method. Simulation results demonstrate the effectiveness of our method, and applications to finance and computer vision show the flexibility and accuracy of using maximum entropy flow networks.
["flexible", "popular framework", "statistical models", "partial knowledge", "traditional", "continuous density", "smooth", "invertible transformation", "simple distribution"]
https://openreview.net/forum?id=H1acq85gx
https://openreview.net/pdf?id=H1acq85gx
https://openreview.net/forum?id=H1acq85gx&noteId=ryivmuWNg
Published as a conference paper at ICLR 2017MAXIMUM ENTROPY FLOW NETWORKSGabriel Loaiza-Ganem, Yuanjun Gao& John P. CunninghamDepartment of StatisticsColumbia UniversityNew York, NY 10027, USAfgl2480,yg2312,jpc2181 g@columbia.eduABSTRACTMaximum entropy modeling is a flexible and popular framework for formulat-ing statistical models given partial knowledge. In this paper, rather than the tra-ditional method of optimizing over the continuous density directly, we learn asmooth and invertible transformation that maps a simple distribution to the de-sired maximum entropy distribution. Doing so is nontrivial in that the objectivebeing maximized (entropy) is a function of the density itself. By exploiting recentdevelopments in normalizing flow networks, we cast the maximum entropy prob-lem into a finite-dimensional constrained optimization, and solve the problem bycombining stochastic optimization with the augmented Lagrangian method. Sim-ulation results demonstrate the effectiveness of our method, and applications tofinance and computer vision show the flexibility and accuracy of using maximumentropy flow networks.1 I NTRODUCTIONThe maximum entropy (ME) principle (Jaynes, 1957) states that subject to some given prior knowl-edge, typically some given list of moment constraints, the distribution that makes minimal additionalassumptions – and is therefore appropriate for a range of applications from hypothesis testing to priceforecasting to texture synthesis – is that which has the largest entropy of any distribution obeyingthose constraints. First introduced in statistical mechanics by Jaynes (1957), and considered bothcelebrated and controversial, ME has been extensively applied in areas including natural languageprocessing (Berger et al., 1996), ecology (Phillips et al., 2006), finance (Buchen & Kelly, 1996),computer vision (Zhu et al., 1998), and many more.Continuous ME modeling problems typically include certain expectation constraints, and are usuallysolved by introducing Lagrange multipliers, which under typical assumptions yields an exponentialfamily distribution (also called Gibbs distribution) with natural parameters such that the expectationconstraints are obeyed. Unfortunately, fitting ME distributions in even modest dimensions posessignificant challenges. First, optimizing the Lagrangian for a Gibbs distribution requires evaluatingthe normalizing constant, which is in general computationally very costly and error prone. Secondly,in all but the rarest cases, there is no way to draw samples independently and identically from thisGibbs distribution, even if one could derive it. Third, unlike in the discrete case where a number ofrecent and exciting works have addressed the problem of estimating entropy from discrete-valueddata (Jiao et al., 2015; Valiant & Valiant, 2013), estimating differential entropy from data samplesremains inefficient and typically biased. These shortcomings are critical and costly, given the com-mon use of ME distributions for generating reference data samples for a null distribution of a teststatistic. There is thus ample need for a method that can both solve the ME problem and produce asolution that is easy and fast to sample.In this paper we develop maximum entropy flow networks (MEFN), a stochastic-optimization-basedframework and algorithm for fitting continuous maximum entropy models. Two key steps are re-quired. 
First, conceptually, we replace the idea of maximizing entropy over a density directly withmaximizing, over the parameter space of an indexed function family, the entropy of the densityinduced by mapping a simple distribution (a Gaussian) through that optimized function. ModernThese authors contributed equally.1Published as a conference paper at ICLR 2017neural networks, particularly in variational inference (Kingma & Welling, 2013; Rezende & Mo-hamed, 2015), have successfully employed this same idea to generate complex distributions, andwe look to similar technologies. Secondly, unlike most other objectives in this network literature,the entropy objective itself requires evaluation of the target density directly, which is unavailablein most traditional architectures. We overcome this potential issue by learning a smooth, invertibletransformation that maps a simple distribution to an (approximate) ME distribution. Recent develop-ments in normalizing flows (Rezende & Mohamed, 2015; Dinh et al., 2016) allow us to avoid biasedand computationally inefficient estimators of differential entropy (such as the nearest-neighbor classof estimators like that of Kozachenko-Leonenko; see Berrett et al. (2016)). Our approach avoidscalculation of normalizing constants by learning a map with an easy-to-compute Jacobian, yieldingtractable probability density computation. The resulting transformation also allows us to reliablygenerate iid samples from the learned ME distribution. We demonstrate MEFN in detail in ex-amples where we can access ground truth, and then we demonstrate further the ability of MEFNnetworks in equity option prices fitting and texture synthesis.Primary contributions of this work include: (i)addressing the substantial need for methods to sampleME distributions; (ii)introducing ME problems, and the value of including entropy in a range ofgenerative modeling problems, to the deep learning community; (iii)the novel use of constrainedoptimization for a deep learning application; and (iv)the application of MEFN to option pricingand texture synthesis, where in the latter we show significant increase in the diversity of synthesizedtextures (over current state of the art) by using MEFN.2 B ACKGROUND2.1 M AXIMUM ENTROPY MODELING AND GIBBS DISTRIBUTIONWe consider a continuous random variable Z2Z Rdwith density p, wherephas differentialentropyH(p) =Rp(z) logp(z)dzand support supp(p). The goal of ME modeling is to find, andthen be able to easily sample from, the maximum entropy distribution given a set of moment andsupport constraints, namely the solution to:p=maximizeH(p) (1)subject toEZp[T(Z)] = 0supp(p) =Z;whereT(z) = (T1(z);:::;Tm(z)) :Z!Rmis the vector of known (assumed sufficient) statistics,andZis the given support of the distribution. Under standard regularity conditions, the optimizationproblem can be solved by Lagrange multipliers, yielding an exponential family pof the form:p(z)/e>T(z)1(z2Z) (2)where2Rmis the choice of natural parameters of psuch thatEp[T(Z)] = 0 . Despite thissimple form, these distributions are only in rare cases tractable from the standpoint of calculating, calculating the normalizing constant of p, and sampling from the resulting distribution. Thereis extensive literature on finding numerically (Darroch & Ratcliff, 1972; Salakhutdinov et al.,2002; Della Pietra et al., 1997; Dudik et al., 2004; Malouf, 2002; Collins et al., 2002), but doing sorequires computing normalizing constants, which poses a challenge even for problems with modestdimensions. 
Also, even if is correctly found, it is still not trivial to sample from p. Problem-specific sampling methods (such as importance sampling, MCMC, etc.) have to be designed andused, which is in general challenging (burn-in, mixing time, etc.) and computationally burdensome.2.2 N ORMALIZING FLOWSFollowing Rezende & Mohamed (2015), we define a normalizing flow as the transformation ofa probability density through a sequence of invertible mappings. Normalizing flows provide anelegant way of generating a complicated distribution while maintaining tractable density evaluation.Starting with a simple distribution Z02Rdp0(usually taken to be a standard multivariate2Published as a conference paper at ICLR 2017Gaussian), and by applying kinvertible and smooth functions fi:Rd!Rd(i= 1;:::;k ), theresulting variable Zk=fkfk1f1(Z0)has density:pk(zk) =p0(f11f12f1k(zk))kYi=1jdet(Ji(zi1))j1; (3)whereJiis the Jacobian of fi. If the determinant of Jican be easily computed, pkcan be computedefficiently.Rezende & Mohamed (2015) proposed two specific families of transformations for variational in-ference, namely planar flows and radial flows, respectively:fi(z) =z+uih(wTiz+bi) andfi(z) =z+ih(i;ri)(zz0i); (4)wherebi2R,ui;wi2Rdandhis an activation function in the planar case, and where i2R,i>0,z0i2Rd,h(;r) = 1=(+r)andri=jjzz0ijjin the radial. Recently Dinhet al. (2016) proposed a normalizing flow with convolutional, multiscale structure that is suitable forimage modeling and has shown promise in density estimation for natural images.3 M AXIMUM ENTROPY FLOW NETWORK (MEFN) ALGORITHM3.1 F ORMULATIONInstead of solving Equation 2, we propose solving Equation 1 directly by optimizing a trans-formation that maps a random variable Z0, with simple distribution p0, to the ME distribution.Given a parametric family of normalizing flows F=ff;2Rqg, we denote p(z) =p0(f1(z))jdet(J(z))j1as the distribution of the variable f(Z0), whereJis the Jacobianoff. We then rewrite the ME problem as:=maximizeH(p) (5)subject toEZ0p0[T(f(Z0))] = 0supp(p) =Z:Whenp0is continuous andFis suitably general, the program in Equation 5 recovers the ME dis-tributionpexactly. With a flexible transformation family, the ME distribution can be well approx-imated. In experiments we found that taking p0to be a standard multivariate normal distributionachieves good empirical performance. Taking p0to be a bounded distribution (e.g. uniform distri-bution) is problematic for learning transformations near the boundary, and heavy tailed distributions(e.g. Cauchy distribution) caused similar trouble due to large numbers of outliers.3.2 A LGORITHMWe solved Equation 5 using the augmented Lagrangian method. Denote R() =E(T(f(Z0))),the augmented Lagrangian method uses the following objective:L(;;c) =H(p) +>R() +c2jjR()jj2(6)where2Rmis the Lagrange multiplier and c>0is the penalty coefficient. We minimize Equa-tion6for a non-decreasing sequence of cand well-chosen . As a technical note, the augmentedLagrangian method is guaranteed to converge under some regularity conditions (Bertsekas, 2014).As is usual in neural networks, a proof of these conditions is challenging and not yet available,though intuitive arguments (see Appendix xA) suggest that most of them should hold. Due to thenon rigorous nature of these arguments, we rely on the empirical results of the algorithm to claimthat it is indeed solving the optimization problem.For a fixed (;c)pair, we optimize Lwith stochastic gradient descent. 
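The following NumPy sketch (forward pass only, and not the authors' implementation) spells out the pieces assembled so far: a stack of planar flows as in Equation 4, the induced log-density of Equation 3, the Monte Carlo entropy estimate, and the augmented Lagrangian objective of Equation 6. In practice these computations would sit inside an autodiff framework so that gradients with respect to the flow parameters are available; all variable names are illustrative.

import numpy as np

def planar_forward(z, u, w, b):
    # One planar flow f(z) = z + u * tanh(w.z + b); returns f(z) and log|det J|.
    # (Invertibility requires u.w >= -1, which holds for the small random init below.)
    a = z @ w + b
    f = z + np.outer(np.tanh(a), u)
    h_prime = 1.0 - np.tanh(a) ** 2
    logdet = np.log(np.abs(1.0 + h_prime * (u @ w)))
    return f, logdet

def flow_logdensity(z0, layers):
    # Push z0 ~ N(0, I) through the flow; return samples and log p_theta(f_theta(z0)) via Equation 3.
    d = z0.shape[1]
    logp = -0.5 * np.sum(z0 ** 2, axis=1) - 0.5 * d * np.log(2.0 * np.pi)
    z = z0
    for (u, w, b) in layers:
        z, logdet = planar_forward(z, u, w, b)
        logp = logp - logdet                    # change of variables
    return z, logp

def augmented_lagrangian(z0, layers, T, lam, c):
    # Monte Carlo estimate of Equation 6: -H(p_theta) + lam.R(theta) + (c/2) ||R(theta)||^2.
    z, logp = flow_logdensity(z0, layers)
    neg_entropy = np.mean(logp)                 # unbiased estimate of -H(p_theta)
    R = np.mean(T(z), axis=0)                   # expected constraint violation R(theta)
    return neg_entropy + lam @ R + 0.5 * c * np.sum(R ** 2)

# toy usage: a 3-layer planar flow in 2-D with the constraint E[Z] = 0
rng = np.random.default_rng(0)
layers = [(0.1 * rng.normal(size=2), 0.1 * rng.normal(size=2), 0.0) for _ in range(3)]
z0 = rng.normal(size=(300, 2))
print(augmented_lagrangian(z0, layers, T=lambda z: z, lam=np.zeros(2), c=1.0))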
Owing to our choice ofnetwork and the resulting ability to efficiently calculate the density p(z(i))for any sample point3Published as a conference paper at ICLR 2017Algorithm 1 Training the MEFN1:initialize=0, setc0>0and0.2:forAugmented Lagrangian iteration k= 1;:::;k maxdo3: forSGD iteration i= 1;:::;i maxdo4: Sample z(1);:::;z(n)p0, get transformed variables z(i)=f(z(i));i= 1;:::;n5: Updateby descending its stochastic gradient (using e.g. ADADELTA (Zeiler, 2012)):rL(;k;ck)1nnXi=1rlogp(z(i)) +1nnXi=1rT(z(i))k+ck2nn2Xi=1rT(z(i))2nnXi=n2+1T(z(i))6: end for7: Sample z(1);:::;z(~n)p0, get transformed variables z(i)=f(z(i));i= 1;:::;~n8: Updatek+1=k+ck1~nP~ni=1T(z(i))9: Updateck+1ck(see text for detail)10:end forz(i)(which are easy-to-sample iid draws from the multivariate normal p0), we compute the unbiasedestimator of H(p)with:H(p)1nnXi=1logp(f(z(i))) (7)R()can also be estimated without bias by taking a sample average of z(i)draws. The resultingoptimization procedure is detailed in Algorithm 1, of which step 9 requires some detail: denotingkas the resulting afterimax SGD iterations at the augmented Lagrangian iteration k, the usualupdate rule for c(Bertsekas, 2014) is:ck+1=ck, ifjjR(k+1)jj>jjR(k)jjck, otherwise(8)where2(0;1)and > 1. Monte Carlo estimation of R()sometimes caused cto be updatedtoo fast, causing numerical issues. Accordingly, we changed the hard update rule for cto a prob-abilistic update rule: a hypothesis test is carried out with null hypothesis H0:E[jjR(k+1)jj] =E[jjR(k)jj]and alternative hypothesis H1:E[jjR(k+1)jj]> E[jjR(k)jj]. Thep-valuepiscomputed, and ck+1is updated to ckwith probability 1p. We used a two-sample t-test to cal-culate thep-value. What results is a robust and novel algorithm for estimating maximum entropydistributions, while preserving the critical properties of being both easy to calculate densities ofparticular points, and being trivially able to produce truly iid samples.4 E XPERIMENTSWe first construct an ME problem with a known solution ( x4.1), and we analyze the MEFN algorithmwith respect to the ground truth and to an approximate Gibbs solution. These examples test thevalidity of our algorithm and illustrate its performance. xB andx4.3 then applies the MEFN to afinancial data application (predicting equity option values) and texture synthesis, respectively, toillustrate the flexibility and practicality of our algorithm.Forx4.1 andxB, We use 10 layers of planar flow with a final transformation g(specified below) thattransforms samples to the specified support, and use with ADADELTA (Zeiler, 2012). For x4.3 weuse real NVP structure and use ADAM (Kingma & Ba, 2014) with learning rate = 0:001. For all ourexperiments, we use imax= 3000 ,= 4,= 0:25. Forx4.1 andxB we usen= 300 ,~n= 1000 ,kmax= 10 ; Forx4.3 we usen= ~n= 2,kmax= 8.4.1 A MAXIMUM ENTROPY PROBLEM WITH KNOWN SOLUTIONFollowing the setup of the typical ME problem, suppose we are given a specified support S=fz=(z1;:::;zd1) :zi0andPd1k=1zk1gand a set of constraints E[logZk] =k(k= 1;:::;d ),4Published as a conference paper at ICLR 2017whereZd= 1Pd1k=1Zk. We then write the maximum entropy program:p=maximizeH(p) (9)subject toEZp[logZkk] = 08k= 1;:::;dsupp(p) =S:This is a general ME problem that can be solved via the MEFN. Of course, we have particularlychosen this example because, though it may not obviously appear so, the solution has a standard andtractable form, namely the Dirichlet. 
This choice allows us to consider a complicated optimizationprogram that happens to have known global optimum, providing a solid test bed for the MEFN (andfor the Gibbs approach against which we will compare). Specifically, given a parameter 2Rd,the Dirichlet has density:p(z1;:::;zd1) =1B()dYk=1zk1k1((z1;:::;zd1)2S) (10)whereB()is the multivariate Beta function, and zd= 1Pd1k=1zk. Note that this Dirichletis a distribution on Sand not on the (d1)-dimensional simplex Sd1=f(z1;:::;zd) :zk0andPdk=1zk= 1g(an often ignored and seemingly unimportant technicality that needs to becorrect here to ensure the proper transformation of measure). Connecting this familiar distribution tothe ME problem above, we simply have to choose such thatk= (k) (0)fork= 1;:::;d ,where0=Pdk=1kand is the digamma function. We then can pose the above ME problemto the MEFN and compare performance against ground truth. Before doing so, we must stipulatethe transformation gthat maps the Euclidean space of the multivariate normal p0to the desiredsupportS. Any sensible choice will work well (another point of flexibility for the MEFN); we usethe standard transformation:g(z1;:::;zd1) = ez1Pd1k=1ezk+ 1;:::;ezd1Pd1k=1ezk+ 1!>(11)Note that the MEFN outputs vectors in Rd1, and not Rd, because the Dirichlet is specified as adistribution onS(and not on the simplex Sd1). Accordingly, the Jacobian is a square matrix andits determinant can be computed efficiently using the matrix determinant lemma. Here, p0is set tothe(d1)-dimensional standard normal.We proceed as follows: We choose and compute the constraints 1;:::;d. We run MEFN pre-tending we do not know or the Dirichlet form. We then take a random sample from the fitteddistribution and a random sample from the Dirichlet with parameter , and compare the two sam-ples using the maximum mean discrepancy (MMD) kernel two sample test (Gretton et al., 2012),which assesses the fit quality. We take the sample size to be 300for the two sample kernel test.Figure 1 shows an example of the transformation from normal (left panel) to MEFN (middle panel),and comparing that to the ground truth Dirichlet (right panel). The MEFN and ground truth Dirichletdensities shown in purple match closely, and the samples drawn (red) indeed appear to be iid drawsfrom the same (maximum entropy) distribution in both cases.Additionally, the middle panel of Figure 1 shows an important cautionary tale that foreshadows ourtexture synthesis results ( x4.3). One might suppose that satisfying the moment matching constraintsis adequate to produce a distribution which, if not technically the ME distribution, is still inter-estingly variable. The middle panel shows the failure of this intuition: in dark green, we show anetwork trained to simply match the moments specified above, and the resulting distribution quitepoorly expresses the variability available to a distribution with these constraints, leading to samplesthat are needlessly similar. Given the substantial interest in using networks to learn implicit genera-tive models (e.g., Mohamed & Lakshminarayanan (2016)), this concern is particularly relevant andhighlights the importance of considering entropy.Figure 2 quantitatively analyzes these results. In the left panel, for a specific choice of = (1;2;3),we show our unbiased entropy estimate of the MEFN distribution pas a function of the numberof SGD iterations (red), along with the ground truth maximum entropy H(p)(green line). 
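To make the test bed concrete, here is a short sketch (assuming NumPy and scipy.special.digamma; not the authors' code) of the simplex map g of Equation 11 together with its log-determinant via the matrix determinant lemma, and of the constraint statistics T_k(z) = log z_k - nu_k with nu_k = psi(alpha_k) - psi(alpha_0).

import numpy as np
from scipy.special import digamma

def g_simplex(z):
    # Equation 11: map R^(d-1) onto S; also return log|det J| of the (d-1)-dimensional map,
    # using det(diag(y) - y y^T) = (prod_i y_i) * y_d from the matrix determinant lemma.
    e = np.exp(z)
    denom = 1.0 + e.sum(axis=-1, keepdims=True)
    y = e / denom                                   # first d-1 coordinates
    y_last = 1.0 / denom                            # implicit last coordinate z_d
    logdet = np.log(y).sum(axis=-1) + np.log(y_last[..., 0])
    return np.concatenate([y, y_last], axis=-1), logdet

def constraint_stats(samples, alpha):
    # T_k(z) = log z_k - nu_k with nu_k = digamma(alpha_k) - digamma(alpha_0), as in Equation 9.
    nu = digamma(alpha) - digamma(alpha.sum())
    return np.log(samples) - nu

alpha = np.array([1.0, 2.0, 3.0])
rng = np.random.default_rng(0)
dir_samples = rng.dirichlet(alpha, size=100000)
print(constraint_stats(dir_samples, alpha).mean(axis=0))   # ~0 up to Monte Carlo error at the true ME solution

z = rng.normal(size=(5, len(alpha) - 1))    # e.g. outputs of the flow network
y, logdet = g_simplex(z)
print(y.sum(axis=-1), logdet)               # rows of y sum to 1; logdet enters the density as in Equation 3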
Note
[Figure 1 panel titles: Initial distribution p0; MEFN result p_theta*; Ground truth p*. Density legend: p0, True.]
Figure 1: Example results from the ME problem with known Dirichlet ground truth. Left panel: the normal density p0 (purple) and iid samples from p0 (red points). Middle panel: the MEFN transforms p0 to the desired maximum entropy distribution p_theta* on the simplex (calculated density in purple). Truly iid samples are easily drawn from p_theta* (red points) by drawing from p0 and mapping those points through f_theta*. Shown in the middle panel are the same points in the top left panel mapped through f_theta*. Samples corresponding to training the same network as MEFN to simply match the specified moments (ignoring entropy) are also shown (dark green points; see text). Right panel: the ground truth (in this example, known to be Dirichlet) distribution in purple, and iid samples from it in red.
[Figure 2 panels: estimated vs. true entropy against SGD iterations; null distribution of MMD2u with observed statistics for MEFN (KL=0.0088) and a Dirichlet (KL=0.10); MMD2u p-value against KL for the MEFN and nearby Dirichlets.]
Figure 2: Quantitative analysis of simulation results.
See text for description.that the MEFN stabilizes at the correct value (as a stochastic estimator, variance around that valueis expected). In the middle panel, we show the distribution of MMD values for the kernel twosample test, as well as the observed statistic for the MEFN (red) and for a randomly chosen Dirichletdistribution (gray; chosen to be close to the true optimum, making a conservative comparison). TheMMD test does not reject MEFN as being different from the true ME distribution p, but it doesreject a Dirichlet whose KL to the truepis small (see legend). In the right panel, for manydifferent Dirichlets in a small grid around a single true p, the kernel two sample test statistic iscomputed, the MMD p-value is calculated, as is the KL to the true distribution. We plot a scatterof these points in grey, and we plot the particular MEFN solution as a red star. We see that forother Dirichlets with similar KL to the true distribution as the MEFN distribution, the p-valuesseem uniform, meaning that the KLto the true is indeed very small. Again this is conservative, asthe grey points have access to the known Dirichlet form, whereas the MEFN considered the entirespace (within its network capacity) of Ssupported distributions. Given this fact, the performance ofMEFN is impressive.4.2 R ISK-NEUTRAL ASSET PRICINGWe illustrate the flexibility and practicality of our algorithm extracting the risk-neutral asset priceprobability based on option prices, an active and interesting area for ME models. We find that MEFNand the classic Gibbs approach yield comparable performances. Owing to space limitations we haveplaced these results in Appendix xB.4.3 M ODELING IMAGES OF TEXTURESConstructing generative models to generate random images with certain texture structure is an im-portant task in computer vision. A line of texture synthesis research proceeds by first extracting a set6Published as a conference paper at ICLR 2017of features that characterizes the target texture and then generate images that match the features. Theseminal work of Zhu et al. (1998) proposes constructing texture models under the ME framework,where features (or filters) of the given texture image are adaptively added in the model and a Gibbsdistribution whose expected feature matches the target texture is learnt. One major difficulty withthe method is that both model learning and image generation involve sampling from a complicatedGibbs distribution. More recent works exploit more complicated features (Portilla & Simoncelli,2000; Gatys et al., 2015; Ulyanov et al., 2016). Ulyanov et al. (2016) propose the texture net , whichuses a texture loss function by using the Gram matrices of the outputs of some convolutional layersof a pre-trained deep neural network for object recognition.While the use of these complicated features does provide high-quality synthetic texture images, thatwork focuses exclusively on generating images that match these feature (moments). Importantly,this network focuses only on generating feature-matching images without using the ME frameworkto promote the diversity of the samples. Doing so can be deeply problematic: in Figure 1 (middlepanel), we showed the lack of diversity resulting from only moment matching in that Dirichlet set-ting, and further we note that the extreme pathology would result in a point mass on the trainingimage – a global optimum for this objective, but obviously a terrible generative model for synthe-sizing textures. 
Ideally, the MEFN will match the moments andpromote sample diversity.We applied MEFN to texture synthesis with an RGB representation of the 224224pixel images,z2 Z = [0;1]d, whered= 2242243. We follow Ulyanov et al. (2016) (we adaptedhttps://github.com/ProofByConstruction/texture-networks ) to create a tex-ture loss measure T(z) : [0;1]d!R, and aim to sample a diverse set of images with small momentviolation. For the transformation family Fwe use the real NVP network structure proposed in Dinhet al. (2016) (we adapted https://github.com/taesung89/real-nvp ). We use 3resid-ual blocks with 32feature maps for each coupling layer and downscale 3times. For fair comparison,we use the same real NVP structure for both1, implemented in TensorFlow (Abadi et al., 2016).As is shown in top row of figure 3, both methods generate visually pleasing images capturing thetexture structure well. The bottom row of Figure 3 shows that texture cost (left panel) is similarfor both methods, while MEFN generates figures with much larger entropy than the texture networkformulation (middle panel), which is desirable (as previously discussed). The bottom right panelof figure 3 compares the marginal distribution of the RGB values sampled from the networks: wefound that MEFN generates a more variable distribution of RGB values than the texture net. Furtherresults are in Appendix xC.Input Texture net (Ulyanov et al., 2016) MEFN (ours)Texture cost Entropy RGB histogram05000 10000 15000 20000 25000Iteration1061071081091010Texture costTexture netsMEFN05000 10000 15000 20000 25000Iteration104105106Negative Entropy0.0 0.2 0.4 0.6 0.8 1.0RGB value0.00.51.01.52.02.5DensityFigure 3: Analysis of texture synthesis experiment. See text for description.1Ulyanov et al. (2016) use a quite different generative network structure, which is not invertible and istherefore infeasible for entropy evaluation, so we replace their generative network by the real NVP structure.7Published as a conference paper at ICLR 2017We compute in Table 1 the average pairwise Euclidean distance between randomly sampled images(dL2=meani6=jkzizjk22), and MEFN gives higher dL2, quantifying diversity across images. Wealso consider an ANOV A-style analysis to measure the diversity of the images, where we think ofthe RGB values for the same pixel across multiple images as a group, and compute the within andbetween group variance. Specifically, denoting zkias the pixel value for a specific pixel k= 1;:::;dfor an image i= 1;::::;n . We partition the total sum of square SST =Pi;k(zkiz)2as the withingroup error SSW =Pi;k(zkizk)2and between group error SSB =Pi;kn(zkz)2, wherezandzkare the mean pixel values across all data and for a specific pixel k. Ideally we want thesamples to exhibit large variability across images (large SSW, within a group/pixel) and no structurein the mean image (small SSB, across groups/pixels). Indeed, the MEFN has a larger SSW, implyinghigher variability around the mean image, a smaller SSB, implying the stationarity of the generatedsamples, and a larger SST, implying larger total variability also. The MEFN produces images thatare conclusively more variable without sacrificing the quality of the texture, implicating the broadutility of ME.Table 1: Quantitative measure of image diversity using 20randomly sampled imagesMethod dL2 SST SSW SSBTexture net 11534 128680 109577 19103MEFN 17014 175604 161639 139645 C ONCLUSIONIn this paper we propose a general framework for fitting ME models. This approach is novel andhas three key features. 
First, by learning a transformation of a simple distribution rather than thedistribution itself, we are able to avoid explicitly computing an intractable normalizing constant forthe ME distribution. Second, by combining stochastic optimization with the augmented Lagrangianmethod, we can fit the model efficiently, allowing us to evaluate the ME density of any point simplyand accurately. Third, critically, this construction allows us to trivially sample iid from a ME dis-tribution, extending the utility and efficiency of the ME framework more generally. Also, accuracyequivalent to the classic Gibbs approach is in itself a contribution (owing to these other features).We illustrate the MEFN in both a simulated case with known ground truth and real data examples.There are a few recent works encouraging sample diversity in the setting of texture model-ing. Ulyanov et al. (2017) extended Ulyanov et al. (2016) by adding a penalty term using theKozachenko-Leonenko estimator Kozachenko & Leonenko (1987) of entropy. Their generative net-work is an arbitrary deep neural network rather than a normalizing flow, which is more flexible butcannot give the probability density of each sample easily so as to compute an unbiased estimatorof the entropy. Kozachenko-Leonenko is a biased estimator for entropy and requires a fairly largenumber of samples to get good performance in high-dimensional settings, hindering the scalabilityand accuracy of the method; indeed, our choice of normalizing flow networks was driven by thesepractical issues with Kozachenko-Leonenko. Lu et al. (2016) extended Zhu et al. (1998) by usinga more flexible set of filters derived from a pre-trained deep neural networks, and using parallelMCMC chains to learn and sample from the Gibbs distribution. Running parallel MCMC chains re-sults in diverse samples but can be computationally intensive for generating each new sample image.Our MEFN framework enables truly iid sampling with the ease of a feed forward network.ACKNOWLEDGMENTSWe thank Evan Archer for normalizing flow code, and Xuexin Wei, Christian Andersson Naessethand Scott Linderman for helpful discussion. This work was supported by a Sloan Fellowship and aMcKnight Fellowship (JPC).8Published as a conference paper at ICLR 2017REFERENCESMartın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg SCorrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machinelearning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467 , 2016.Adam L Berger, Vincent J Della Pietra, and Stephen A Della Pietra. A maximum entropy approachto natural language processing. Computational linguistics , 22(1):39–71, 1996.Thomas B Berrett, Richard J Samworth, and Ming Yuan. Efficient multivariate entropy estimationviak-nearest neighbour distances. arXiv preprint arXiv:1606.00304 , 2016.Dimitri P Bertsekas. Constrained optimization and Lagrange multiplier methods . Academic press,2014.Oleg Bondarenko. Estimation of risk-neutral densities using positive convolution approximation.Journal of Econometrics , 116(1):85–112, 2003.Jonathan Borwein, Rustum Choksi, and Pierre Mar ́echal. Probability distributions of assets inferredfrom option prices via the principle of maximum entropy. SIAM Journal on Optimization , 14(2):464–478, 2003.Peter W Buchen and Michael Kelly. The maximum entropy distribution of an asset inferred fromoption prices. 
Journal of Financial and Quantitative Analysis , 31(01):143–159, 1996.Anna Choromanska, Mikael Henaff, Michael Mathieu, G ́erard Ben Arous, and Yann LeCun. Theloss surfaces of multilayer networks. In AISTATS , 2015.Michael Collins, Robert E Schapire, and Yoram Singer. Logistic regression, adaboost and bregmandistances. Machine Learning , 48(1-3):253–285, 2002.John N Darroch and Douglas Ratcliff. Generalized iterative scaling for log-linear models. Theannals of mathematical statistics , pp. 1470–1480, 1972.Stephen Della Pietra, Vincent Della Pietra, and John Lafferty. Inducing features of random fields.IEEE transactions on pattern analysis and machine intelligence , 19(4):380–393, 1997.Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. arXivpreprint arXiv:1605.08803 , 2016.Miroslav Dudik, Steven J Phillips, and Robert E Schapire. Performance guarantees for regularizedmaximum entropy density estimation. In International Conference on Computational LearningTheory , pp. 472–486. Springer, 2004.Stephen Figlewski. Estimating the implied risk neutral density. 2008.Leon Gatys, Alexander S Ecker, and Matthias Bethge. Texture synthesis using convolutional neuralnetworks. In Advances in Neural Information Processing Systems , pp. 262–270, 2015.Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Sch ̈olkopf, and Alexander Smola.A kernel two-sample test. Journal of Machine Learning Research , 13(Mar):723–773, 2012.Edwin T Jaynes. Information theory and statistical mechanics. Physical review , 106(4):620, 1957.Jiantao Jiao, Kartik Venkat, Yanjun Han, and Tsachy Weissman. Minimax estimation of functionalsof discrete distributions. IEEE Transactions on Information Theory , 61(5):2835–2885, 2015.Kenji Kawaguchi. Deep learning without poor local minima. In Advances In Neural InformationProcessing Systems , pp. 586–594, 2016.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980 , 2014.Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprintarXiv:1312.6114 , 2013.9Published as a conference paper at ICLR 2017LF Kozachenko and Nikolai N Leonenko. Sample estimate of the entropy of a random vector.Problemy Peredachi Informatsii , 23(2):9–16, 1987.Yang Lu, Song-chun Zhu, and Ying Nian Wu. Learning frame models using cnn filters. In ThirtiethAAAI Conference on Artificial Intelligence , 2016.Robert Malouf. A comparison of algorithms for maximum entropy parameter estimation. In pro-ceedings of the 6th conference on Natural language learning-Volume 20 , pp. 1–7. Association forComputational Linguistics, 2002.Shakir Mohamed and Balaji Lakshminarayanan. Learning in implicit generative models. arXivpreprint arXiv:1610.03483 , 2016.Steven J Phillips, Robert P Anderson, and Robert E Schapire. Maximum entropy modeling ofspecies geographic distributions. Ecological modelling , 190(3):231–259, 2006.Ben Poole, Subhaneil Lahiri, Maithreyi Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Ex-ponential expressivity in deep neural networks through transient chaos. In Advances In NeuralInformation Processing Systems , pp. 3360–3368, 2016.Javier Portilla and Eero P Simoncelli. A parametric texture model based on joint statistics of com-plex wavelet coefficients. International journal of computer vision , 40(1):49–70, 2000.Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, and Jascha Sohl-Dickstein. On the ex-pressive power of deep neural networks. 
arXiv preprint arXiv:1606.05336 , 2016.Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXivpreprint arXiv:1505.05770 , 2015.Ruslan Salakhutdinov, Sam Roweis, and Zoubin Ghahramani. On the convergence of bound op-timization algorithms. In Proceedings of the Nineteenth conference on Uncertainty in ArtificialIntelligence , pp. 509–516. Morgan Kaufmann Publishers Inc., 2002.Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, and Victor Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. arXiv preprint arXiv:1603.03417 , 2016.Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Improved texture networks: Maxi-mizing quality and diversity in feed-forward stylization and texture synthesis. arXiv preprintarXiv:1701.02096 , 2017.Paul Valiant and Gregory Valiant. Estimating the unseen: improved estimators for entropy and otherproperties. In Advances in Neural Information Processing Systems , pp. 2157–2165, 2013.Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701 ,2012.Song Chun Zhu, Yingnian Wu, and David Mumford. Filters, random fields and maximum entropy(frame): Towards a unified theory for texture modeling. International Journal of Computer Vision ,27(2):107–126, 1998.10Published as a conference paper at ICLR 2017A A UGMENTED LAGRANGIAN CONDITIONSWe give a more thorough discussion of the regularity conditions which ensure that the AugmentedLagrangian method will work. The goal of this section is simply to state these conditions and giveintuitive arguments about why some should hold in our case, not to attempt to prove that they indeedhold. The conditions (Bertsekas, 2014) are:There exists a strict local minimum of the optimization problem of Equation 5:If the function class Fis rich enough that it contains a true solver of the maximum entropyproblem, then a global optimum exists. Although not rigorous, we would expect that evenin the finite expressivity case that a global optimum remains, and indeed, recent theoreticalwork (Raghu et al., 2016; Poole et al., 2016) has gotten close to proving this.is a regular point of the optimization problem, that is, the rows of rR()are linearlyindependent:Again, this is not formal, but we should not expect this to cause any issues. This clearlydepends on the specific form of T, but the condition basically says that there should not beredundant constraints at the optimum, so if Tis reasonable this shouldn’t happen.H(p)andR()are twice continuously differentiable on a neighborhood around :This holds by the smoothness of the normalizing flows.y>r2L(;;0)y >0for everyy6= 0 such thatrR()y= 0, whereis the trueLagrange multiplier:This condition is harder to justify. It would appear it is just asking that the Lagrangian(not the augmented Lagrangian) be strictly convex in feasible directions, but it is actuallystronger than this and some simple functions might not satisfy the property. For example,if the function we are optimizing was x4and we had no constraints, the Lagrangian’sHessian would be 12x2, which is 0atx= 0thus not satisfying the condition. Importantly,these conditions are sufficient but not necessary, so even if this doesn’t hold the augmentedLagrangian method might work (it certainly would for x4). 
Because of this and the non-rigorous justifications of the first two conditions, we left these conditions for the appendixand relied instead on the empirical performance to justify that we are indeed recovering themaximum entropy distribution.If all of these conditions hold, the augmented Lagrangian (for large enough candclose enough to) has a unique optimum in a neighborhood around that is close to (as!it converges to) and its hessian at this optimum is positive-definite. Furthermore, k!. This implies that gra-dient descent (with the usual caveats of being started close enough to the solution and with the rightsteps) will correctly recover using the augmented Lagrangian method. This of course just guar-antees convergence to a local optimum, but if there are no additional assumptions such as convexity,it can be very hard to ensure that it is indeed a global optimum. Some recent research has attemptedto explain why optimization algorithms perform so well for neural networks (Choromanska et al.,2015; Kawaguchi, 2016), but we leave such attempts for our case for future research.B R ISK-NEUTRAL ASSET PRICEWe extract the risk-neutral asset price probability distribution based on option prices, an active andinteresting area for ME models. We give a brief introduction of the problem and refer interestedreaders to see Buchen & Kelly (1996) for a more detailed explanation. Denoting Stas the priceof an asset at time t, the buyer of a European call option for the stock that expires at time tewithstrike priceKwill receive a payoff of cK= (SteK)+= max(SteK;0)at timete. Underthe efficient market assumption, the risk-neutral probability distribution for the stock price at timetesatisfies:cK=D(te)Eq[(SteK)+]; (12)whereD(te)is the risk-free discount factor and qis the risk-neutral measure. We also have that,under the risk-neutral measure, the current stock price S0is the discounted expected value of Ste:S0=D(te)Eq(Ste): (13)11Published as a conference paper at ICLR 2017When given moptions that expire at time tewith strikesK1;:::;Kmand pricescK1;:::;cKm, we getmexpectation constraints on q(Ste)from Equation 12, together with Equation 13, we have m+ 1expectation constraints in total. With that partial knowledge we can approximate q(Ste), which ishelpful for understanding the market expected volatility and identify mispricing in option markets,etc.Inferring the risk-neutral density of asset price from a finite number of option prices is an importantquestion in finance and has been studied extensively (Buchen & Kelly, 1996; Borwein et al., 2003;Bondarenko, 2003; Figlewski, 2008). One popular method proposed by Buchen & Kelly (1996)estimates the probability density as the maximum entropy distribution satisfying the expectationconstraints and a positivity support constraint by fitting a Gibbs distribution, which results in apiece-wise linear log density:p(z)/exp(0z+mXi=1i(zKi)+)1(z0) (14)and optimize the distribution with numerical methods. Here we compare the performance of theMEFN algorithm with the method proposed in Buchen & Kelly (1996). To enforce the positivityconstraint we choose g(z) =eaz+b, whereaandbare additional parameters.We collect the closing price of European call options on Nov. 1 2016 for the stock AAPL (Appleinc.) that expires on te=Jun. 16 2017. We use m= 4of the options with highest trading volume astraining data and the rest as testing data. 
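For concreteness, the following sketch writes the m+1 expectation constraints of Equations 12 and 13 as a statistics function T that could be plugged into the MEFN objective; the strikes, prices, spot price and discount factor are made-up placeholders rather than the AAPL data used in the paper, and the lognormal samples stand in for draws from the fitted distribution.

import numpy as np

strikes  = np.array([100.0, 110.0, 120.0, 130.0])   # placeholder strikes K_1..K_m
call_px  = np.array([ 14.2,   8.1,   4.0,   1.7])   # placeholder option prices c_K
spot     = 111.0                                    # placeholder current price S_0
discount = 0.99                                     # placeholder risk-free discount D(t_e)

def option_constraints(s):
    # T(s) for samples s of S_te, shape (n,) -> (n, m+1); E_q[T(S_te)] = 0 at the target measure q.
    payoff = np.maximum(s[:, None] - strikes[None, :], 0.0)       # (S_te - K)^+
    call_violation = discount * payoff - call_px                  # Equation 12
    forward_violation = (discount * s - spot)[:, None]            # Equation 13
    return np.concatenate([call_violation, forward_violation], axis=1)

# e.g. used as T in the augmented Lagrangian sketch above, with g(z) = exp(a*z + b)
# enforcing the positivity of S_te; training drives the printed violations toward zero.
samples = np.random.default_rng(0).lognormal(mean=np.log(spot / discount), sigma=0.2, size=10000)
print(option_constraints(samples).mean(axis=0))   # violations under an arbitrary lognormal guess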
On the left panel of figure 4, we show the fitted risk-neutraldensity ofSteby MEFN (red line) with that of the fitted Gibbs distribution result (blue line). Wefind that while the distributions share similar location and variability, the distribution inferred byMEFN is smoother and arguably more plausible. In the middle panel we show a Q-Q plot of thequantiles of the MEFN and Gibbs distributions. We can see that the quantile pairs match the identityclosely, which should happen if both methods recovered the exact same distribution. This highlightsthe effectiveness of MEFN. There does exist a small mismatch in the tails: the distribution inferredby MEFN has slightly heavier tails. This mismatch is difficult to interpret: given that both the Gibbsand MEFN distributions are fit with option price data (and given that one can observe at most onevalue from the distribution, namely the stock price at expiration), it is fundamentally unclear whichdistribution is superior, in the sense of better capturing the true ME distribution’s tails. On the rightpanel we show the fitted option price for the two fitted distributions (for each strike price, we canrecover the fitted option price by Equation 12). We noted that the fitted option price and strike pricelines for both methods are very similar (they are mostly indiscernible on the right panel of figure4). We also compare the fitted performance on the test data by computing the root mean squareerror for the fitted and test data. We observe that the predictive performances for both methods arecomparable.0 50 100 150 200Price (dollars)0.0000.0050.0100.0150.0200.0250.0300.035DensityGibbsMEFN0 50 100 150 200 250Gibbs Quantiles050100150200250300MEFN Quantilesidentity0 50 100 150Strike price (dollars)20020406080100120Option price (dollars)Gibbs, RMSE=2.43MEFN, RMSE=2.39Training dataTesting dataFigure 4: Constructing risk-neutral measure from observed option price. Left panel : fitted risk-neutral measure by Gibbs and MEFN method. Middle panel : Q-Q plot for the quantiles from thedistributions on the left panel. Right panel : observed and fitted option price for different strikes.We note that for this specific application, there are practical concerns such as the microstructurenoise in the data and inefficiency in the market, etc. Applying a pre-processing procedure and incor-porating prior assumptions can be helpful for getting a more full-fledged method (see e.g. Figlewski(2008)). Here we mainly focus on illustrating the ability of the MEFN method to approximate theME distribution for non-typical distributions. Future work for this application includes fitting a risk-neutral distribution for multi-dimensional assets by incorporating dependence structure on assets.12Published as a conference paper at ICLR 2017C M ODELING IMAGES OF TEXTURESWe tried our texture modeling approach with many different textures, and although MEFN samplesdon’t always exhibit more visual diversity than samples obtained from the texture network, theyalways have more entropy as in figure 3. Figure 5 shows two positive examples, i.e. textures inwhich samples from MEFN do exhibit higher visual diversity than those from the texture network, aswell as a negative example, in which MEFN achieves less visual diversity than the texture network,regardless of the fact that MEFN samples do have larger entropy. 
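Referring back to the diversity measures of Section 4.3 (Table 1), the sketch below computes the average pairwise distance dL2 and the ANOVA-style decomposition SST = SSW + SSB on a stack of sampled images; it is illustrative only, and the random array stands in for actual samples from either network.

import numpy as np

def diversity_measures(images):
    # images: array of shape (n, d) with d = H*W*3 flattened pixel values in [0, 1].
    n, d = images.shape
    sq = np.sum((images[:, None, :] - images[None, :, :]) ** 2, axis=-1)
    dL2 = np.sum(sq) / (n * (n - 1))                 # mean over i != j pairs (diagonal is zero)
    grand_mean = images.mean()
    pixel_mean = images.mean(axis=0)                 # the mean image; one group per pixel
    SST = np.sum((images - grand_mean) ** 2)
    SSW = np.sum((images - pixel_mean) ** 2)         # variability around the mean image
    SSB = n * np.sum((pixel_mean - grand_mean) ** 2) # structure left in the mean image
    return dL2, SST, SSW, SSB

images = np.random.default_rng(0).uniform(size=(20, 16 * 16 * 3))   # stand-in samples
dL2, SST, SSW, SSB = diversity_measures(images)
print(dL2, SST, SSW, SSB, np.isclose(SST, SSW + SSB))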
We hypothesize that this curiousbehavior is due to the optimization achieving a local optimum in which the brick boundaries anddark brick locations are not diverse but the entropy within each brick is large. It should also benoted that among the experiments that we ran, this was the only negative example that we got, andthat slightly modifying the hyperparameters caused the issue to disappear.Input(positive example)Input(positive example)Input(negative example)Texture net (Ulyanov et al. (2016), less sample diversity)MEFN (ours, more sample diversity)Figure 5: MEFN and texture network samples.13
BJM4eQUEg
HJ1kmv9xx
ICLR.cc/2017/conference/-/paper373/official/review
{"title": "a figure-ground shape aware GAN mode for image generation", "rating": "6: Marginally above acceptance threshold", "review": "The paper proposes a model for image generation where the back-ground is generated first and then the foreground is pasted in by generating first a foregound mask and corresponding appearance, curving the appearance image using the mask and transforming the mask using predicted affine transform to paste it on top of the image. Using AMTurkers the authors verify their generated images are selected 68% of the time as being more naturally looking than corresponding images from a DC-GAN model that does not use a figure-ground aware image generator.\n\nThe segmentations masks learn to depict objects in very constrained datasets (birds) only, thus the method appears limited for general shape datasets, as the authors also argue in the paper. Yet, the architectural contributions have potential merit.\n\nIt would be nice to see if multiple layers of foreground (occluding foregrounds) are ever generated with this layered model or it is just figure-ground aware.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
LR-GAN: Layered Recursive Generative Adversarial Networks for Image Generation
["Jianwei Yang", "Anitha Kannan", "Dhruv Batra", "Devi Parikh"]
We present LR-GAN: an adversarial image generation model which takes scene structure and context into account. Unlike previous generative adversarial networks (GANs), the proposed GAN learns to generate image background and foregrounds separately and recursively, and stitch the foregrounds on the background in a contextually relevant manner to produce a complete natural image. For each foreground, the model learns to generate its appearance, shape and pose. The whole model is unsupervised, and is trained in an end-to-end manner with conventional gradient descent methods. The experiments demonstrate that LR-GAN can generate more natural images with objects that are more human recognizable than baseline GANs.
["Computer vision", "Deep learning", "Unsupervised Learning"]
https://openreview.net/forum?id=HJ1kmv9xx
https://openreview.net/pdf?id=HJ1kmv9xx
https://openreview.net/forum?id=HJ1kmv9xx&noteId=BJM4eQUEg
Published as a conference paper at ICLR 2017LR-GAN: L AYERED RECURSIVE GENERATIVE AD-VERSARIAL NETWORKS FOR IMAGE GENERATIONJianwei YangVirginia TechBlacksburg, V Ajw2yang@vt.eduAnitha KannanFacebook AI ResearchMenlo Park, CAakannan@fb.comDhruv Batraand Devi ParikhGeorgia Institute of TechnologyAtlanta, GAfdbatra, parikh g@gatech.eduABSTRACTWe present LR-GAN: an adversarial image generation model which takes scenestructure and context into account. Unlike previous generative adversarial net-works (GANs), the proposed GAN learns to generate image background and fore-grounds separately and recursively, and stitch the foregrounds on the backgroundin a contextually relevant manner to produce a complete natural image. For eachforeground, the model learns to generate its appearance, shape and pose. Thewhole model is unsupervised, and is trained in an end-to-end manner with gra-dient descent methods. The experiments demonstrate that LR-GAN can generatemore natural images with objects that are more human recognizable than DCGAN.1 I NTRODUCTIONGenerative adversarial networks (GANs) (Goodfellow et al., 2014) have shown significant promiseas generative models for natural images. A flurry of recent work has proposed improvements overthe original GAN work for image generation (Radford et al., 2015; Denton et al., 2015; Salimanset al., 2016; Chen et al., 2016; Zhu et al., 2016; Zhao et al., 2016), multi-stage image generationincluding part-based models (Im et al., 2016; Kwak & Zhang, 2016), image generation conditionedon input text or attributes (Mansimov et al., 2015; Reed et al., 2016b;a), image generation based on3D structure (Wang & Gupta, 2016), and even video generation (V ondrick et al., 2016).While the holistic ‘gist’ of images generated by these approaches is beginning to look natural, thereis clearly a long way to go. For instance, the foreground objects in these images tend to be deformed,blended into the background, and not look realistic or recognizable.One fundamental limitation of these methods is that they attempt to generate images without takinginto account that images are 2D projections of a 3D visual world, which has a lot of structures in it.This manifests as structure in the 2D images that capture this world. One example of this structureis that images tend to have a background, and foreground objects are placed in this background incontextually relevant ways.We develop a GAN model that explicitly encodes this structure. Our proposed model generates im-ages in a recursive fashion: it first generates a background, and then conditioned on the backgroundgenerates a foreground along with a shape (mask) and a pose (affine transformation) that togetherdefine how the background and foreground should be composed to obtain a complete image. Condi-tioned on this composite image, a second foreground and an associated shape and pose are generated,and so on. As a byproduct in the course of recursive image generation, our approach generates someobject-shape foreground-background masks in a completely unsupervised way, without access toanyobject masks for training. Note that decomposing a scene into foreground-background layers isa classical ill-posed problem in computer vision. By explicitly factorizing appearance and transfor-mation, LR-GAN encodes natural priors about the images that the same foreground can be ‘pasted’to the different backgrounds, under different affine transformations. 
According to the experiments,the absence of these priors result in degenerate foreground-background decompositions, and thusalso degenerate final composite images.Work was done while visiting Facebook AI Research.1Published as a conference paper at ICLR 2017Figure 1: Generation results of our model on CUB-200 (Welinder et al., 2010). It generates imagesin two timesteps. At the first timestep, it generates background images, while generates foregroundimages, masks and transformations at the second timestep. Then, they are composed to obtain thefinal images. From top left to bottom right (row major), the blocks are real images, generatedbackground images, foreground images, foreground masks, carved foreground images, carved andtransformed foreground images, final composite images, and their nearest neighbor real images inthe training set. Note that the model is trained in a completely unsupervised manner.We mainly evaluate our approach on four datasets: MNIST-ONE (one digit) and MNIST-TWO (twodigits) synthesized from MNIST (LeCun et al., 1998), CIFAR-10 (Krizhevsky & Hinton, 2009) andCUB-200 (Welinder et al., 2010). We show qualitatively (via samples) and quantitatively (via evalu-ation metrics and human studies on Amazon Mechanical Turk) that LR-GAN generates images thatglobally look natural andcontain clear background and object structures in them that are realisticand recognizable by humans as semantic entities. An experimental snapshot on CUB-200 is shownin Fig. 1. We also find that LR-GAN generates foreground objects that are contextually relevant tothe backgrounds (e.g., horses on grass, airplanes in skies, ships in water, cars on streets, etc.). Forquantitative comparison, besides existing metrics in the literature, we propose two new quantitativemetrics to evaluate the quality of generated images. The proposed metrics are derived from the suffi-cient conditions for the closeness between generated image distribution and real image distribution,and thus supplement existing metrics.2 R ELATED WORKEarly work in parametric texture synthesis was based on a set of hand-crafted features (Portilla &Simoncelli, 2000). Recent improvements in image generation using deep neural networks mainlyfall into one of the two stochastic models: variational autoencoders (V AEs) (Kingma et al., 2016)and generative adversarial networks (GANs) (Goodfellow et al., 2014). V AEs pair a top-down prob-abilistic generative network with a bottom up recognition network for amortized probabilistic infer-ence. Two networks are jointly trained to maximize a variational lower bound on the data likelihood.GANs consist of a generator and a discriminator in a minmax game with the generator aiming tofool the discriminator with its samples with the latter aiming to not get fooled.Sequential models have been pivotal for improved image generation using variational autoencoders:DRAW (Gregor et al., 2015) uses attention based recurrence conditioning on the canvas drawn sofar. In Eslami et al. (2016), a recurrent generative model that draws one object at a time to thecanvas was used as the decoder in V AE. These methods are yet to show scalability to natural images.Early compelling results using GANs used sequential coarse-to-fine multiscale generation and class-conditioning (Denton et al., 2015). Since then, improved training schemes (Salimans et al., 2016)and better convolutional structure (Radford et al., 2015) have improved the generation results using2Published as a conference paper at ICLR 2017GANs. 
PixelRNN (van den Oord et al., 2016) is also recently proposed to sequentially generates apixel at a time, along the two spatial dimensions.In this paper, we combine the merits of sequential generation with the flexibility of GANs. Ourmodel for sequential generation imbibes a recursive structure that more naturally mimics imagecomposition by inferring three components: appearance, shape, and pose. One closely related workcombining recursive structure with GAN is that of Im et al. (2016) but it does not explicitly modelobject composition and follows a similar paradigm as by Gregor et al. (2015). Another closely re-lated work is that of Kwak & Zhang (2016). It combines recursive structure and alpha blending.However, our work differs in three main ways: (1) We explicitly use a generator for modeling theforeground poses. That provides significant advantage for natural images, as shown by our ablationstudies; (2) Our shape generator is separate from the appearance generator. This factored repre-sentation allows more flexibility in the generated scenes; (3) Our recursive framework generatessubsequent objects conditioned on the current and previous hidden vectors, andpreviously gener-ated object. This allows for explicit contextual modeling among generated elements in the scene.See Fig. 17 for contextually relevant foregrounds generated for the same background, or Fig. 6 formeaningful placement of two MNIST digits relative to each.Models that provide supervision to image generation using conditioning variables have also beenproposed: Style/Structure GANs (Wang & Gupta, 2016) learns separate generative models for styleand structure that are then composed to obtain final images. In Reed et al. (2016a), GAN basedimage generation is conditioned on text and the region in the image where the text manifests, spec-ified during training via keypoints or bounding boxes. While not the focus of our work, the modelproposed in this paper can be easily extended to take into account these forms of supervision.3 P RELIMINARIES3.1 G ENERATIVE ADVERSARIAL NETWORKSGenerative Adversarial Networks (GANs) consist of a generator Gand a discriminator Dthat aresimultaneously trained with competing goals: The generator Gis trained to generate samples thatcan ‘fool’ a discriminator D, while the discriminator is trained to classify its inputs as either real(coming from the training dataset) or fake (coming from the samples of G). This competition leadsto a minmax formulation with a value function:minGmaxDExpdata (x)[log(D(x;D))] + E zpz(z)[log(1D(G(z;G);D))]; (1)where zis a random vector from a standard multivariate Gaussian or a uniform distribution pz(z),G(z;G)mapszto the data space, D(x)is the probability that xis real estimated by D. Theadvantage of the GANs formulation is that it lacks an explicit loss function and instead uses thediscriminator to optimize the generative model. The discriminator, in turn, only cares whether thesample it receives is on the data manifold, and not whether it exactly matches a particular trainingexample (as opposed to losses such as MSE). Hence, the discriminator provides a gradient signalonly when the generated samples do not lie on the data manifold so that the generator can readjustits parameters accordingly. 
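As a minimal illustration of Equation 1 (not part of the paper), the value function can be estimated on a batch with placeholder D and G callables; a real implementation would alternate gradient steps on the two networks, which is omitted here.

import numpy as np

def gan_value(D, G, x_real, z):
    # Monte Carlo estimate of E_x[log D(x)] + E_z[log(1 - D(G(z)))]; D maximizes it, G minimizes it.
    eps = 1e-8
    real_term = np.mean(np.log(D(x_real) + eps))
    fake_term = np.mean(np.log(1.0 - D(G(z)) + eps))
    return real_term + fake_term

# toy 1-D example: an arbitrary fixed scorer and a shift "generator", purely for illustration
rng = np.random.default_rng(0)
D = lambda x: 1.0 / (1.0 + np.exp(x - 0.5))
G = lambda z: z + 1.0
print(gan_value(D, G, x_real=rng.normal(size=256), z=rng.normal(size=256)))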
This form of training enables learning the data manifold of the trainingset and not just optimizing to reconstruct the dataset, as in autoencoder and its variants.While the GANs framework is largely agnostic to the choice of GandD, it is clear that generativemodels with the ‘right’ inductive biases will be more effective in learning from the gradient infor-mation (Denton et al., 2015; Im et al., 2016; Gregor et al., 2015; Reed et al., 2016a; Yan et al., 2015).With this motivation, we propose a generator that models image generation via a recurrent process– in each time step of the recurrence, an object with its own appearance and shape is generated andwarped according to a generated pose to compose an image in layers.3.2 L AYERED STRUCTURE OF IMAGEAn image taken of our 3D world typically contains a layered structure. One way of representing animage layer is by its appearance and shape. As an example, an image xwith two layers, foregroundfand background bmay be factorized as:x=fm+b(1m); (2)3Published as a conference paper at ICLR 2017where mis the mask depicting the shapes of image layers, and the element wise multiplicationoperator. Some existing methods assume the access to the shape of the object either during training(Isola & Liu, 2013) or both at train and test time (Reed et al., 2016a; Yan et al., 2015). Representingimages in layered structure is even straightforward for video with moving objects (Darrell & Pent-land, 1991; Wang & Adelson, 1994; Kannan et al., 2005). V ondrick et al. (2016) generates videosby separately generating a fixed background and moving foregrounds. A similar way of generatingsingle image can be found in Kwak & Zhang (2016).Another way is modeling the layered structure with object appearance and pose as:x=ST(f;a) +b; (3)where fandbare foreground and background, respectively; ais the affine transformation; STisthe spatial transformation operator. Several works fall into this group (Roux et al., 2011; Huang &Murphy, 2015; Eslami et al., 2016). In Huang & Murphy (2015), images are decomposed into layersof objects with specific poses in a variational autoencoder framework, while the number of objects(i.e., layers) is adaptively estimated in Eslami et al. (2016).To contrast with these works, LR-GAN uses a layered composition, and the foreground layers si-multaneously model all three dominant factors of variation: appearance f, shape mand pose a. Wewill elaborate it in the following section.4 L AYERED RECURSIVE GAN (LR-GAN)The basic structure of LR-GAN is similar to GAN: it consists of a discriminator and a generator thatare simultaneously trained using the minmax formulation of GAN, as described in x.3.1. The keyinnovation of our work is the layered recursive generator, which is what we describe in this section.The generator in LR-GAN is recursive in that the image is constructed recursively using a recurrentnetwork. Layered in that each recursive step composes an object layer that is ‘pasted’ on the imagegenerated so far. Object layer at timestep tis parameterized by the following three constituents –‘canonical’ appearance ft, shape (or mask) mt, and pose (or affine transformation) atfor warpingthe object before pasting in the image composition.Fig. 2 shows the architecture of the LR-GAN with the generator architecture unrolled for generatingbackground x0(:=xb) and foreground x1andx2. 
At each time step t, the generator composes thenext image xtvia the following recursive computation:xt=ST(mt;at)|{z}affine transformed maskST(ft;at)|{z}affine transformed appearance+ (1ST(mt;at))xt1|{z}pasting on image composed so far;8t2[1;T](4)whereST(;at)is a spatial transformation operator that outputs the affine transformed version ofwithatindicating parameters of the affine transformation.Since our proposed model has an explicit transformation variable atthat is used to warp the object,it can learn a canonical object representation that can be re-used to generate scenes where the ob-ject occurs as mere transformations of it, such as different scales or rotations. By factorizing theappearance, shape and pose, the object generator can focus on separately capturing regularities inthese three factors that constitute an object. We will demonstrate in our experiments that removingthese factorizations from the model leads to its spending capacity in variability that may not solelybe about the object in Section 5.5 and 5.6.4.1 D ETAILS OF GENERATOR ARCHITECTUREFig. 2 shows our LR-GAN architecture in detail – we use different shapes to indicate different kindsof layers (convolutional, fractional convolutional, (non)linear, etc), as indicated by the legend. Ourmodel consists of two main pieces – a background generator Gband a foreground generator Gf.GbandGfdo not share parameters with each other. Gbcomputation happens only once, while Gfisrecurrent over time, i.e., all object generators share the same parameters. In the following, we willintroduce each module and connections between them.Temporal Connections . LR-GAN has two kinds of temporal connections – informally speaking,one on ‘top’ and one on ‘bottom’. The ‘top’ connections perform the act of sequentially ‘pasting’4Published as a conference paper at ICLR 2017G"#G$LSTMLSTMG"%G"&G'T"G"#LSTMG"%G"&G'T"DP"#E"#E",CCSSFractionalConvolutionalLayersConvolutionalLayers(Non)linearembeddinglayersandothersx$f12f2m2m42m45f15f5m5SpatialSamplerCompositorx2x6Realsamplez8z2z5Figure 2: LR-GAN architecture unfolded to three timesteps. It mainly consists of one backgroundgenerator, one foreground generator, temporal connections and one discriminator. The meaning ofeach component is explained in the legend.object layers (Eqn. 4). The ‘bottom’ connections are constructed by a LSTM on the noise vectorsz0;z1;z2. Intuitively, this noise-vector-LSTM provides information to the foreground generatorabout what else has been generated in past. Besides, when generating multiple objects, we use apooling layer Pcfand a fully-connected layer Ecfto extract the information from previous generatedobject response map. By this way, the model is able to ‘see’ previously generated objects.Background Generator . The background generator Gbis purposely kept simple. It takes the hiddenstate of noise-vector-LSTM h0las the input and passes it to a number of fractional convolutionallayers (also called ‘deconvolution’ layer in some papers) to generate images at its end. The outputof background generator xbwill be used as the canvas for the following generated foregrounds.Foreground Generator . The foreground generator Gfis used to generate an object with appearanceand shape. Correspondingly, Gfconsists of three sub-modules, Gcf, which is a common ‘trunk’whose outputs are shared by GifandGmf.Gifis used to generate the foreground appearance ft,whileGmfgenerates the mask mtfor the foreground. 
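Below is a compact sketch of the composition step in Equation 4, with a nearest-neighbour affine sampler standing in for the spatial transformer of Jaderberg et al. (2015); the generator networks that would produce f_t, m_t and the affine parameters a_t are omitted, and all shapes and values are toy placeholders.

import numpy as np

def affine_sample(img, theta):
    # Sample img (H, W, C) at output coordinates mapped through the 2x3 affine theta,
    # with coordinates normalized to [-1, 1]; out-of-bounds pixels are set to zero.
    H, W = img.shape[:2]
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
    coords = np.stack([xs, ys, np.ones_like(xs)], axis=-1) @ theta.T        # (H, W, 2) source coords
    src_x = np.round((coords[..., 0] + 1) * 0.5 * (W - 1)).astype(int)
    src_y = np.round((coords[..., 1] + 1) * 0.5 * (H - 1)).astype(int)
    valid = (src_x >= 0) & (src_x < W) & (src_y >= 0) & (src_y < H)
    out = np.zeros_like(img)
    out[valid] = img[src_y[valid], src_x[valid]]
    return out

def compose(canvas, fg, mask, theta):
    # Equation 4: x_t = ST(m_t, a_t) * ST(f_t, a_t) + (1 - ST(m_t, a_t)) * x_{t-1}.
    fg_w = affine_sample(fg, theta)
    m_w = affine_sample(mask, theta)
    return m_w * fg_w + (1.0 - m_w) * canvas

H = W = 32
canvas = np.full((H, W, 3), 0.6)                             # x_0: background from G_b
fg = np.zeros((H, W, 3)); fg[8:24, 8:24] = [1.0, 0.2, 0.2]   # toy foreground appearance f_1
mask = np.zeros((H, W, 1)); mask[8:24, 8:24] = 1.0           # toy object mask m_1
theta = np.array([[1.5, 0.0, 0.3],                           # scale > 1 shrinks the pasted object;
                  [0.0, 1.5, 0.0]])                          # the last column shifts it
x1 = compose(canvas, fg, mask, theta)
print(x1.shape, x1.min(), x1.max())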
All three sub-modules consists of one ormore fractional convolutional layers combined with batch-normalization and nonlinear layers. Thegenerated foreground appearance and mask have the same spatial size as the background. The topofGmfis a sigmoid layer in order to generate one channel mask whose values range in (0;1).Spatial Transformer . To spatially transform foreground objects, we need to estimate the trans-formation matrix. As in Jaderberg et al. (2015), we predict the affine transformation matrix with alinear layerTfthat has six-dimensional outputs. Then based on the predicted transformation matrix,we use a grid generator Ggto generate the corresponding sampling coordinates in the input for eachlocation at the output. The generated foreground appearance and mask share the same transforma-tion matrix, and thus the same sampling grid. Given the grid, the sampler Swill simultaneouslysample the ftandmtto obtain ^ftand^mt, respectively. Different from Jaderberg et al. (2015),our sampler here normally performs downsampling, since the the foreground typically has smallersize than the background. Pixels in ^ftand^mtthat are from outside the extent of ftandmtare setto zero. Finally, ^ftand^mtare sent to the compositor Cwhich combines the canvas xt1and^ftthrough layered composition with blending weights given by ^mt(Eqn. 4).Pseudo-code for our approach and detailed model configuration are provided in the Appendix.5Published as a conference paper at ICLR 20174.2 N EWEVALUATION METRICSSeveral metrics have been proposed to evaluate GANs, such as Gaussian parzen window (Good-fellow et al., 2014), Generative Adversarial Metric (GAM) (Im et al., 2016) and Inception Score(Salimans et al., 2016). The common goal is to measure the similarity between the generated datadistributionPg(x) =G(z;z)and the real data distribution P(x). Most recently, Inception Scorehas been used in several works (Salimans et al., 2016; Zhao et al., 2016). However, it is an assymetricmetric and could be easily fooled by generating centers of data modes.In addition to these metrics, we present two new metrics based on the following intuition – a suf-ficient (but not necessary) condition for closeness of Pg(x)andP(x)is closeness of Pg(xjy)andP(xjy), i.e., distributions of generated data and real data conditioned on all possible variables ofinteresty, e.g., category label. One way to obtain this variable of interest yis via human annotation.Specifically, given the data sampled from Pg(x)andP(x), we ask people to label the category of thesamples according to some rules. Note that such human annotation is often easier than comparingsamples from the two distributions (e.g., because there is no 1:1 correspondence between samplesto conduct forced-choice tests).After the annotations, we need to verify whether the two distributions are similar in each category.Clearly, directly comparing the distributions Pg(xjy)andP(xjy)may be as difficult as compar-ingPg(x)andP(x). Fortunately, we can use Bayes rule and alternatively compare Pg(yjx)andP(yjx), which is a much easier task. In this case, we can simply train a discriminative model onthe samples from Pg(x)andP(x)together with the human annotations about categories of thesesamples. With a slight abuse of notation, we use Pg(yjx)andP(yjx)to denote probability outputsfrom these two classifiers (trained on generated samples vs trained on real samples). 
We can thenuse these two classifiers to compute the following two evaluation metrics:Adversarial Accuracy: Computes the classification accuracies achieved by these two classifiers ona validation set, which can be the training set or another set of real images sampled from P(x). IfPg(x)is close toP(x), we expect to see similar accuracies.Adversarial Divergence: Computes the KL divergence between Pg(yjx)andP(yjx). The lowerthe adversarial divergence, the closer two distributions are. The low bound for this metric is exactlyzero, which means Pg(yjx) =P(yjx)for all samples in the validation set.As discussed above, we need human efforts to label the real and generated samples. Fortunately, wecan further simplify this. Based on the labels given on training data, we split the training data intocategories, and train one generator for each category. With all these generators, we generate samplesof all categories. This strategy will be used in our experiments on the datasets with labels given.5 E XPERIMENTWe conduct qualitative and quantitative evaluations on three datasets: 1) MNIST (LeCun et al.,1998); 2) CIFAR-10 (Krizhevsky & Hinton, 2009); 3) CUB-200 (Welinder et al., 2010). To addvariability to the MNIST images, we randomly scale (factor of 0.8 to 1.2) and rotate ( 4to4) thedigits and then stitch them to 4848uniform backgrounds with random grayscale value between[0, 200]. Images are then rescaled back to 3232. Each image thus has a different backgroundgrayscale value and a different transformed digit as foreground. We rename this sythensized datasetasMNIST-ONE (single digit on a gray background). We also synthesize a dataset MNIST-TWOcontaining two digits on a grayscale background. We randomly select two images of digits andperform similar transformations as described above, and put one on the left and the other on theright side of a 7878gray background. We resize the whole image to 6464.We develop LR-GAN based on open source code1. We assume the number of objects is known.Therefore, for MNIST-ONE, MNIST-TWO, CIFAR-10, and CUB-200, our model has two, three,two, and two timesteps, respectively. Since the size of foreground object should be smaller thanthat of canvas, we set the minimal allowed scale2in affine transforamtion to be 1.2 for all datasetsexcept for MNIST-TWO, which is set to 2 (objects are smaller in MNIST-TWO). In LR-GAN, the1https://github.com/soumith/dcgan.torch2Scale corresponds to the size of the target canvas with respect to the object – the larger the scale, the largerthe canvas, and the smaller the relative size of the object in the canvas. 1 means the same size as the canvas.6Published as a conference paper at ICLR 2017Figure 3: Generated images on CIFAR-10 based on our model.Figure 4: Generated images on CUB-200 based on our model.background generator and foreground generator have similar architectures. One difference is thatthe number of channels in the background generator is half of the one in the foreground generator.We compare our results to that of DCGAN (Radford et al., 2015). Note that LR-GAN withoutLSTM at the first timestep corresponds exactly to the DCGAN. This allows us to run controlledexperiments. In both generator and discriminator, all the (fractional) convolutional layers have 44filter size with stride 2. As a result, the number of layers in the generator and discriminatorautomatically adapt to the size of training images. Please see the Appendix (Section 6.2) for detailsabout the configurations. 
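For concreteness, once the two classifiers are trained, the two proposed metrics reduce to a few lines. The interface below (arrays of class probabilities and integer labels) and the direction chosen for the KL term are assumptions for illustration.

    import numpy as np

    def adversarial_metrics(p_real, p_gen, labels, eps=1e-8):
        # p_real, p_gen: (N, K) class probabilities on the same validation images,
        # from the classifier trained on real data and the one trained on generated data.
        # labels: (N,) integer ground-truth categories of the validation images.
        acc_real = float(np.mean(np.argmax(p_real, axis=1) == labels))
        acc_gen = float(np.mean(np.argmax(p_gen, axis=1) == labels))   # Adversarial Accuracy pair
        # Adversarial Divergence: mean KL between the two predictive distributions
        # (one direction shown; the text does not pin the direction down).
        kl = np.sum(p_real * (np.log(p_real + eps) - np.log(p_gen + eps)), axis=1)
        return acc_real, acc_gen, float(np.mean(kl))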
We use three metrics for quantitative evaluation, including Inception Score(Salimans et al., 2016) and the proposed Adversarial Accuracy, Adversarial Divergence. Note thatwe report two versions of Inception Score. One is based on the pre-trained Inception net, and theother one is based on the pre-trained classifier on the target datasets.5.1 Q UALITATIVE RESULTSIn Fig. 3 and 4, we show the generated samples for CIFAR-10 and CUB-200, respectively. MNISTresults are shown in the next subsection. As we can see from the images, the compositional natureof our model results in the images being free of blending artifacts between backgrounds and fore-grounds. For CIFAR-10, we can see the horses and cars with clear shapes. For CUB-200, the birdshapes tend to be even sharper.5.2 MNIST-ONE AND MNIST-TWOWe now report the results on MNIST-ONE and MNIST-TWO. Fig. 5 shows the generation results ofour model on MNIST-ONE. As we can see, our model generates the background and the foregroundin separate timestep, and can disentagle the foreground digits from background nearly perfectly.Though initial values of the mask randomly distribute in the range of (0, 1), after training, the masksare nearly binary and accurately carve out the digits from the generated foreground. More results onMNIST-ONE (including human studies) can be found in the Appendix (Section 6.3).Fig. 6 shows the generation results for MNIST-TWO. Similarly, the model is also able to generatebackground and the two foreground objects separately. The foreground generator tends to generatea single digit at each timestep. Meanwhile, it captures the context information from the previoustime steps. When the first digit is placed to the left side, the second one tends to be placed on theright side, and vice versa.7Published as a conference paper at ICLR 2017Figure 5: Generation results of our model on MNIST-ONE. From left to right, the image blocks arereal images, generated background images, generated foreground images, generated masks and finalcomposite images, respectively.Figure 6: Generation results of our model on MNIST-TWO. From top left to bottom right (rowmajor), the image blocks are real images, generated background images, foreground images andmasks at the second timestep, composite images at the second time step, generated foregroundimages and masks at the third timestep and the final composite images, respectively.5.3 CUB-200We study the effectiveness of our model trained on the CUB-200 bird dataset. In Fig. 1, we haveshown a random set of generated images, along with the intermediate generation results of the model.While being completely unsupervised , the model, for a large fraction of the samples, is able toFigure 7: Matched pairs of generated images based on DCGAN and LR-GAN, respectivedly. Theodd columns are generated by DCGAN, and the even columns are generated by LR-GAN. Theseare paired according to the perfect matching based on Hungarian algorithm.8Published as a conference paper at ICLR 2017Figure 8: Qualitative comparison on CIFAR-10. Top three rows are images generated by DCGAN;Bottom three rows are by LR-GAN. From left to right, the blocks display generated images withincreasing quality level as determined by human studies.successfully disentangle the foreground and the background. This is evident from the generatedbird-like masks.We do a comparative study based on Amazon Mechanical Turk (AMT) between DCGAN and LR-GAN to quantify relative visual quality of the generated images. 
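As a side note on the quantitative metrics used above, the Inception Score reduces to exp(E_x[KL(p(y|x) || p(y))]) over the chosen classifier's predictions; a small NumPy sketch of the usual Salimans et al. (2016) protocol follows, where the split count is a common default rather than necessarily what was used here.

    import numpy as np

    def inception_score(probs, n_splits=10, eps=1e-8):
        # probs: (N, K) class probabilities p(y|x) for N generated images.
        # Returns mean and std of exp(E_x[KL(p(y|x) || p(y))]) over splits.
        scores = []
        for chunk in np.array_split(probs, n_splits):
            p_y = chunk.mean(axis=0, keepdims=True)   # marginal p(y) on this split
            kl = np.sum(chunk * (np.log(chunk + eps) - np.log(p_y + eps)), axis=1)
            scores.append(np.exp(kl.mean()))
        return float(np.mean(scores)), float(np.std(scores))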
We first generated 1000 samplesfrom both the models. Then, we performed perfect matching between the two image sets usingthe Hungarian algorithm on L2norm distance in the pixel space. This resulted in 1000 imagepairs. Some examplar pairs are shown in Fig. 7. For each image pair, 9 judges are asked to choosethe one that is more realistic. Based on majority voting, we find that our generated images areselected 68.4% times, compared with 31.6% times for DCGAN. This demonstrates that our modelhas generated more realistic images than DCGAN. We can attribute this difference to our model’sability to generate foreground separately from the background, enabling stronger edge cues.5.4 CIFAR-10We now qualitatively and quantitatively evaluate our model on CIFAR-10, which contains multipleobject categories and also various backgrounds.Comparison of image generation quality: We conduct AMT studies to compare the fidelity ofimage generation. Towards this goal, we generate 1000 images from DCGAN and LR-GAN, re-spectively. We ask 5 judges to label each image to either belong to one of the 10 categories or as‘non recognizable’ or ‘recognizable but not belonging to the listed categories’. We then assign eachimage a quality level between [0,5] that captures the number of judges that agree with the majoritychoice. Fig. 8 shows the images generated by both approaches, ordered by increasing quality level.We merge images at quality level 0 (all judges said non-recognizable) and 1 together, and similarlyimages at level 4 and 5. Visually, the generated samples by our model have clearer boundaries andobject structures. We also computed the fraction of non-recognizable images: Our model had a 10%absolute drop in non-recognizability rate (67.3% for ours vs. 77.7% for DCGAN). For reference,11.4% of real CIFAR images were categorized as non-recognizable. Fig. 9 shows more generated(intermediate) results of our model.Quantitative evaluation on generators: We evaluate the generators based on three metrics: 1)Inception Score; 2) Adversarial Accuracy; 3) Adversarial Divergence. To obtain a classifier modelfor evaluation, we remove the top layer in the discriminator used in our model, and then appendtwo fully connected layers on the top of it. We train this classifier using the training samples ofCIFAR-10 based on the annotations. Following Salimans et al. (2016), we generated 50,000 imagesTable 1: Quantitative comparison between DCGAN and LR-GAN on CIFAR-10.Training Data Real Images DCGAN OursInception Scorey11.180.18 6.64 0.14 7.17 0.07Inception Scoreyy7.230.09 5.69 0.07 6.11 0.06Adversarial Accuracy 83.33 0.08 37.81 0.02 44.22 0.08Adversarial Divergence 0 7.58 0.04 5.57 0.06yEvaluate using the pre-trained Inception net as Salimans et al. (2016)yyEvaluate using the supervisedly trained classifier based on the discriminator in LR-GAN.9Published as a conference paper at ICLR 2017Figure 9: Generation results of our model on CIFAR-10. From left to right, the blocks are: gener-ated background images, foreground images, foreground masks, foreground images carved out bymasks, carved foregrounds after spatial transformation, final composite images and nearest neighbortraining images to the generated images.Figure 10: Category specific generation results of our model on CIFAR-10 categories of horse, frog,and cat (top to bottom). 
The blocks from left to right are: generated background images, foregroundimages, foreground masks, foreground images carved out by masks, carved foregrounds after spatialtransformation and final composite images.based on DCGAN and LR-GAN, repsectively. We compute two types of Inception Scores. Thestandard Inception Score is based on the Inception net as in Salimans et al. (2016), and the contex-tual Inception Score is based on our trained classifier model. To distinguish, we denote the standardone as ‘Inception Scorey’, and the contextual one as ‘Inception Scoreyy’. To obtain the AdversarialAccuracy and Adversarial Divergence scores, we train one generator on each of 10 categories forDCGAN and LR-GAN, respectively. Then, we use these generators to generate samples of differentcategories. Given these generated samples, we train the classifiers for DCGAN and LR-GAN sepa-rately. Along with the classifier trained on the real samples, we compute the Adversarial Accuracy10Published as a conference paper at ICLR 2017and Adversarial Divergence on the real training samples. In Table 1, we report the Inception Scores,Adversarial Accuracy and Adversarial Divergence for comparison. We can see that our model out-performs DCGAN across the board. To point out, we obtan different Inception Scores based ondifferent classifier models, which indicates that the Inception Score varies with different models.Quantitative evaluation on discriminators: We evaluate the discriminator as an extractor for deeprepresentations. Specifically, we use the output of the last convolutional layer in the discriminatoras features. We perform a 1-NN classification on the test set given the full training set. Cosinesimilarity is used as the metric. On the test set, our model achieves 62.09% 0.01% compared toDCGAN’s 56.05% 0.02%.Contextual generation: We also show the efficacy of our approach to generate diverse foregroundsconditioned on fixed background. The results in Fig. 17 in Appendix showcase that the foregroundgenerator generates objects that are compatible with the background. This indicates that the modelhas captured contextual dependencies between the image layers.Category specific models: The objects in CIFAR-10 exhibit huge variability in shapes. That canpartly explain why some of the generated shapes are not as compelling in Fig. 9. To test this hy-pothesis, we reuse the generators trained for each of 10 categories used in our metrics to obtain thegeneration results. Fig. 10 shows results for categories ‘horse’, ‘frog’ and ‘cat’. We can see that themodel is now able to generate object-specific appearances and shapes, similar in vein to our resultson the CUB-200 dataset.5.5 I MPORTANCE OF TRANSFORMATIONSFigure 11: Generation results from an ablated LR-GAN model without affine transformations. Fromtop to bottom, the block rows correspond to different datasets: MNIST-ONE, CUB-200, CIFAR-10.From left to right, the blocks show generated background images, foreground images, foregroundmasks, and final composite images. For comparison, the rightmost column block shows final gener-ated images from a non-ablated model with affine transformations.Fig. 11 shows results from an ablated model without affine transformations in the foreground layers,and compares the results with the full model that does include these transformations. 
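Returning briefly to the discriminator-feature evaluation above: the 1-NN protocol with cosine similarity amounts to the following sketch, assuming per-image feature vectors have already been extracted and flattened from the last convolutional layer of each discriminator.

    import numpy as np

    def one_nn_cosine_accuracy(train_feats, train_labels, test_feats, test_labels):
        # 1-NN classification with cosine similarity on (N, D) feature matrices;
        # labels are integer arrays. For large sets, compute the similarity in batches.
        tr = train_feats / (np.linalg.norm(train_feats, axis=1, keepdims=True) + 1e-8)
        te = test_feats / (np.linalg.norm(test_feats, axis=1, keepdims=True) + 1e-8)
        sims = te @ tr.T                              # (N_test, N_train) cosine similarities
        pred = train_labels[np.argmax(sims, axis=1)]  # label of the nearest training feature
        return float(np.mean(pred == test_labels))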
We note thatone significant problem emerges that the decompositions are degenerate, in the sense that the modelis unable to break the symmetry between foreground and background layers, often generating objectappearances in the model’s background layer and vice versa. For CUB-200, the final generated im-ages have some blendings between foregrounds and backgrounds. This is particularly the case for11Published as a conference paper at ICLR 2017Figure 12: Generation results from an ablated LR-GAN model without mask generator. The blockrows correspond to different datasets (from top to bottom: MNIST-ONE, CUB-200, CIFAR-10).From left to right, the blocks show generated background images, foreground images, transformedforeground images, and final composite images. For comparison, the rightmost column block showsfinal generated images from a non-ablated model with mask generator.those images without bird-shape masks. For CIFAR-10, a number of generated masks are inverted.In this case, the background images are carved out as the foreground objects. The foreground gener-ator takes almost all the duty to generate the final images, which make it harder to generate imagesas clear as the model with transformation. From these comparisons, we qualitatively demonstratethe importance of modeling transformations in the foreground generation process. Another merit ofusing transformation is that the intermediate outputs of the model are more interpretable and faciliateto the downstreaming tasks, such as scene paring, which is demonstrated in Section 6.8.5.6 I MPORTANCE OF SHAPESWe perform another ablation study by removing the mask generator to understand the importanceof modeling object shapes. In this case, the generated foreground is simply pasted on top of thegenerated background after being transformed. There is no alpha blending between the foregroundsand backgrounds. The generation results for three datasets, MNIST-ONE, CUB-200, CIFAR-10 areshown in Fig. 12. As we can see, though the model works well for the generation of MNIST-ONE, itfails to generate reasonable images across the other datasets. Particularly, the training does not evenconverge for CUB-200. Based on these results, we qualitatively demonstrate that mask generator inour model is fairly important to obtain plausible results, especially for realistic images.REFERENCESXi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Info-gan: Interpretable representation learning by information maximizing generative adversarial nets.arXiv preprint arXiv:1606.03657 , 2016.Trevor Darrell and Alex Pentland. Robust estimation of a multi-layered motion representation. IEEEWorkshop on Visual Motion , 1991.12Published as a conference paper at ICLR 2017Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using alaplacian pyramid of adversarial networks. In Advances in neural information processing systems ,pp. 1486–1494, 2015.S. M. Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, Koray Kavukcuoglu, and Geof-frey E. Hinton. Attend, infer, repeat: Fast scene understanding with generative models. CoRR ,abs/1603.08575, 2016.Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Infor-mation Processing Systems , pp. 2672–2680, 2014.Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. Draw: Arecurrent neural network for image generation. 
arXiv preprint arXiv:1502.046239 , 2015.Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled faces in the wild:A database for studying face recognition in unconstrained environments. Technical Report 07-49,University of Massachusetts, Amherst, October 2007.Jonathan Huang and Kevin Murphy. Efficient inference in occlusion-aware generative models ofimages. CoRR , abs/1511.06362, 2015.Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating imageswith recurrent adversarial networks. arXiv preprint arXiv:1602.05110 , 2016.Phillip Isola and Ce Liu. Scene collaging: Analysis and synthesis of natural images with semanticlayers. In IEEE International Conference on Computer Vision , pp. 3048–3055, 2013.Max Jaderberg, Karen Simonyan, Andrew Zisserman, and koray kavukcuoglu. Spatial transformernetworks. In Advances in Neural Information Processing Systems 28 , pp. 2017–2025, 2015.Anitha Kannan, Nebojsa Jojic, and Brendan Frey. Generative model for layers of appearance anddeformation. AISTATS , 2005.Diederik P Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverseautoregressive flow. arXiv preprint arXiv:1606.04934 , 2016.Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.Hanock Kwak and Byoung-Tak Zhang. Generating images part by part with composite generativeadversarial networks. arXiv preprint arXiv:1607.05387 , 2016.Yann LeCun, L ́eon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied todocument recognition. Proceedings of the IEEE , 86(11):2278–2324, 1998.Elman Mansimov, Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Generating imagesfrom captions with attention. arXiv preprint arXiv:1511.02793 , 2015.Javier Portilla and Eero P Simoncelli. A parametric texture model based on joint statistics of com-plex wavelet coefficients. International journal of computer vision , 40(1):49–70, 2000.Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deepconvolutional generative adversarial networks. arXiv preprint arXiv:1511.06434 , 2015.Scott Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, and Honglak Lee. Learn-ing what and where to draw. arXiv preprint arXiv:1610.02454 , 2016a.Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee.Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396 , 2016b.Nicolas Le Roux, Nicolas Heess, Jamie Shotton, and John Winn. Learning a generative model ofimages by factoring appearance and shape. Neural Computation , 23:593–650, 2011.Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.Improved techniques for training gans. arXiv preprint arXiv:1606.03498 , 2016.13Published as a conference paper at ICLR 2017A ̈aron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks.CoRR , abs/1601.06759, 2016.Carl V ondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics.arXiv preprint arXiv:1609.02612 , 2016.John Wang and Edward Adelson. Representing moving images with layers. IEEE Transactions onImage Processing , 1994.Xiaolong Wang and Abhinav Gupta. Generative image modeling using style and structure adversar-ial networks. arXiv preprint arXiv:1603.05631 , 2016.P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSDBirds 200. 
Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. Attribute2image: Conditional imagegeneration from visual attributes. CoRR , abs/1512.00570, 2015.Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network.arXiv preprint arXiv:1609.03126 , 2016.Jun-Yan Zhu, Philipp Kr ̈ahenb ̈uhl, Eli Shechtman, and Alexei A Efros. Generative visual manipu-lation on the natural image manifold. In European Conference on Computer Vision , pp. 597–613.Springer, 2016.6 A PPENDIX6.1 A LGORITHMAlgo. 1 illustrates the generative process in our model. g(?)evaluates the function gat?.is acomposition operator that composes its operands so that fg(?) =f(g(?)).Algorithm 1 Stochastic Layered Recursive Image Generation1:z0N(0;I)2:x0=Gb(z0) .background generator3:h0l 04:c0l 05:fort2[1T]do6: ztN(0;I)7: htl,ctl LSTM([ zt,ht1l,ct1l]) .pass through LSTM8: ift = 1 then9: yt htl10: else11: yt Elf([htlht1f]) .pass through non-linear embedding layers Elf12: end if13: st Gcf(yt) .predict shared cube for GifandGmf14:at Tf(yt) .object transformation15: ft Gif(st) .generate object appearance16: mt Gmf(st) .generate object shape17: htf EcfPcf(st) .predict shared represenation embedding18: xt ST(mt;at)ST(ft;at) + (1ST(mt;at))xt119:end for6.2 M ODEL CONFIGURATIONSTable 2 lists the information and model configuration for different datasets. The dimensions ofrandom vectors and hidden vectors are all set to 100. We also compare the number of parameters inDCGAN and LR-GAN. The numbers before ‘/’ are our model, after ‘/’ are DCGAN. Based on thesame notation used in (Zhao et al., 2016), the architectures for the different datasets are:14Published as a conference paper at ICLR 2017Table 2: Information and model configurations on different datasets.Dataset MNIST-ONE MNIST-TWO CIFAR-10 CUB-200Image Size 32 64 32 64#Images 60,000 60,000 50,000 5,994#Timesteps 2 3 2 2#Parameters 5.25M/4.11M 7.53M/6.33M 5.26M/4.11M 27.3M/6.34MMNIST-ONE: Gb: (256)4c-(128)4c2s-(64)4c2s-(3)4c2s; Gcf: (512)4c-(256)4c2s-(128)4c2s; Gif: (3)4c2s; Gmf: (1)4c2s;D: (64)4c2s-(128)4c2s-(256)4c2s-(256)4p4s-1MNIST-TWO: Gb: (256)4c-(128)4c2s-(64)4c2s-(32)4c2s-(3)4c2s; Gcf: (512)4c-(256)4c2s-(128)4c2s-(64)4c2s; Gif: (3)4c2s; Gmf: (1)4c2s;D: (64)4c2s-(128)4c2s-(256)4c2s-(512)4c2s-(512)4p4s-1CUB-200: Gb: (512)4c-(256)4c2s-(128)4c2s-(64)4c2s-(3)4c2s; Gcf: (1024)4c-(512)4c2s-(256)4c2s-(128)4c2s; Gif: (3)4c2s; Gmf: (1)4c2s;D: (128)4c2s-(256)4c2s-(512)4c2s-(1024)4c2s-(1024)4p4s-1CIFAR-10: Gb: (256)4c-(128)4c2s-(64)4c2s-(3)4c2s; Gcf: (512)4c-(256)4c2s-(128)4c2s;Gif: (3)4c2s; Gmf: (1)4c2sD: (64)4c2s-(128)4c2s-(256)4c2s-(256)4p4s-16.3 R ESULTS ON MNIST-ONEWe conduct human studies on generation results on MNIST-ONE. Specifically, we generate 1,000images using both LR-GAN and DCGAN. As references, we also include 1000 real images. Thenwe ask the users on AMT to label each image to be one of the digits (0-9). We also provide theman option ‘non recognizable’ in case the generated image does not seem to contain a digit. Eachimage was judged by 5 unique workers. Similar to CIFAR-10, if an image is recognized to be thesame digit by all 5 users, it is assigned to quality level 5. If it is not recognizable according to allusers, it is assigned to quality level 0. Fig. 13 (left) shows the number of images assigned to all sixquality levels. Compared to DCGAN, our model generated more samples with high quality levels.As expected, the real images have many samples with high quality levels. 
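As a companion to Algorithm 1 and the configurations in Table 2 above, the following is a hedged, runnable PyTorch-style sketch of the layered recursive generation loop. Every sub-network here is a stub (simple linear heads in place of the fractional-convolution stacks), and the context pathway through E_lf, P_cf and E_cf is omitted for brevity; it shows the control flow, not the released model.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ToyLRGANGenerator(nn.Module):
        # Structural sketch of Algorithm 1; every sub-network is a stand-in stub.
        def __init__(self, z_dim=100, img_size=32):
            super().__init__()
            self.lstm = nn.LSTMCell(z_dim, z_dim)
            self.g_b = nn.Sequential(nn.Linear(z_dim, 3 * img_size * img_size), nn.Tanh())
            self.g_f = nn.Sequential(nn.Linear(z_dim, 3 * img_size * img_size), nn.Tanh())
            self.g_m = nn.Sequential(nn.Linear(z_dim, img_size * img_size), nn.Sigmoid())
            self.t_f = nn.Linear(z_dim, 6)   # affine parameters a_t
            self.img_size = img_size

        def forward(self, z_list):
            n, s = z_list[0].size(0), self.img_size
            h = z_list[0].new_zeros(n, z_list[0].size(1))
            c = z_list[0].new_zeros(n, z_list[0].size(1))
            h, c = self.lstm(z_list[0], (h, c))
            x = self.g_b(h).view(n, 3, s, s)            # background canvas x_0
            for z_t in z_list[1:]:                      # one foreground object per step
                h, c = self.lstm(z_t, (h, c))
                f_t = self.g_f(h).view(n, 3, s, s)
                m_t = self.g_m(h).view(n, 1, s, s)
                theta = self.t_f(h).view(n, 2, 3)
                grid = F.affine_grid(theta, x.size(), align_corners=False)
                f_w = F.grid_sample(f_t, grid, align_corners=False)
                m_w = F.grid_sample(m_t, grid, align_corners=False)
                x = m_w * f_w + (1 - m_w) * x           # Eqn. (4)
            return x

    g = ToyLRGANGenerator()
    zs = [torch.randn(4, 100) for _ in range(2)]   # background + one object
    sample = g(zs)                                  # (4, 3, 32, 32)

The single shared LSTM cell mirrors the fact that all object generators share parameters across timesteps, while the background head is used only once.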
In Fig. 13 (right), we showthe number of images that are recognized to each digit category (0-9). For qualitative comparison,we show examplar images at each quality level in Fig. 14. From left to right, the quality levelincreases from 0 to 5. As expected, the images with higher quality level are more clear.For quantitative evaluation, we use the same way as for CIFAR-10. The classifier model used forcontextual Inception Score is trained based on the training set. We generate 60,000 samples basedon DCGAN and LR-GAN for evaluation, respectively. To obtain the Adversarial Accuracy andAdversarial Divergence, we first train 10 generators for 10 digit categories separately, and then usethe generated samples to train the classifier. As shown in Table 3, our model has higher scores thanDCGAN on both standard and contextual Inception Score. Also, our model has a slightly higherFigure 13: Statistics of annotations in human studies on MNIST-ONE. Left: distribution of qualitylevel; Right: distribution of recognized digit categories.15Published as a conference paper at ICLR 2017Figure 14: Qualitative comparison on MNIST-ONE. Top three rows are samples generated by DC-GAN. Bottom three rows are samples generated by LR-GAN. The quality level increases from leftto right as determined via human studies.Table 3: Quantitative comparison on MNIST-ONE.Training Data Real Images DCGAN OursInception Scorey1.830.01 2.03 0.01 2.06 0.01Inception Scoreyy9.150.04 6.42 0.03 7.15 0.04Adversarial Accuracy 95.22 0.25 26.12 0.07 26.61 0.06Adversarial Divergence Score 0 8.47 0.03 8.39 0.04yEvaluate using the pre-trained Inception net as Salimans et al. (2016)yyEvaluate using the supervisedly trained classifier based on the discriminator in LR-GAN.adversarial accuracy, and lower adversarial divergence than DCGAN. We find that the all threeimage sets have low standard Inception Scores. This is mainly because the Inception net is trainedon ImageNet, which has a very different data distribution from the MNIST dataset. Based on this,we argue that the standard Inception Score is not suitable for some image datasets.6.4 M ORE RESULTS ON CUB-200In this experiment, we reduce the minimal allowed object scale to 1.1, which allows the model togenerate larger foreground objects. The results are shown in Fig. 15. Similar to the results when theconstraint is 1.2, the crisp bird-like masks are generated automatically by our model.Figure 15: Generation results of our model on CUB-200 when setting minimal allowed scale to1.1. From left to right, the blocks show the generated background images, foreground images,foreground masks, foreground images carved out by masks, carved foreground images after spatialtransformation. The sixth and seventh blocks are final composite images and the nearest neighborreal images.16Published as a conference paper at ICLR 20176.5 M ORE RESULTS ON CIFAR-106.5.1 Q UALITATIVE RESULTSIn Fig. 16, we show more results on CIFAR-10 when setting minimal allowed object scale to 1.1.The rightmost column block also shows the training images that are closest to the generated images(cosine similarity in pixel space). We can see our model does not memorize the training data.Figure 16: Generation results of our model on CIFAR-10 with minimal allowed scale be 1.1, Fromleft to right, the layout is same to Fig. 15.6.5.2 W ALKING IN THE LATENT SPACESimilar to DCGAN, we also show results by walking in the latent space. Note that our model hastwo or more inputs. So we can walk along any of them or their combination. In Fig. 
17, we generatemultiple foregrounds for the same fixed generated background. We find that our model consistentlygenerates contextually compatible foregrounds. For example, for the grass-like backgrounds, theforeground generator generates horses and deer, and airplane-like objects for the blue sky.6.5.3 W ORD CLOUD BASED ON HUMAN STUDYAs we mentioned above, we conducted human studies on CIFAR-10. Besides asking people to selecta name from a list for an image, we also conducted another human study where we ask people to useone word (free-form) to describe the main object in the image. Each image was ‘named’ by 5 uniquepeople. We generate word clouds for real images, images generated by DCGAN and LR-GAN, asshown in Fig. 18.6.6 R ESULTS ON LFW FACE DATASETWe conduct experiment on face images in LFW dataset (Huang et al., 2007). Different from previousworks which work on cropped and aligned faces, we directly generate the original images whichcontains a large portion of backgrounds. This configuration helps to verify the efficiency of LR-GANto model the object appearance, shape and pose. In Fig. 19, we show the (intermediate) generationresults of LR-GAN. Surprisingly, without any supervisions, the model generated background andfaces in separate steps, and the generated masks accurately depict face shapes. Moreover, the model17Published as a conference paper at ICLR 2017Figure 17: Walking in the latent foreground space by fixing backgrounds in our model on CIFAR-10. From left to right, the blocks are: generated background images, foreground images, foregroundmasks, foreground images carved out by masks, carved out foreground images after spatial transfor-mation, and final composite images. Each row has the same background, but different foregrounds.Figure 18: Statistics of annotations in human studies on CIFAR-10. Left to right: word cloud forreal images, images generated by DCGAN, images generated by LR-GAN.Figure 19: Generation results of our model on LFW. From left to right, the blocks are: generatedbackground images, foreground images, foreground masks, carved out foreground images after spa-tial transformation, and final composite images.18Published as a conference paper at ICLR 2017learns where to place the generated faces so that the whole image looks natural. For comparison,please refer to (Kwak & Zhang, 2016) which does not model the transformation. We can find thegeneration results degrade much.6.7 S TATISTICS ON TRANSFORMATION MATRICESIn this part, we analyze the statistics on the transformation matrices generated by our model fordifferent datasets, including MNIST-ONE, CUB-200, CIFAR-10 and LFW. We used affine transfor-mation in our model. So there are 6 parameters, scaling in the x coordinate ( sx), scaling in the ycoordinate (sy), translation in the x coordinate ( tx), translation in the y coordinate ( ty), rotation inthe x coordinate ( rx) and rotation in the y coordinate ( ry). In Fig. 20, we show the histograms on dif-ferent parameters for different datasets.These histograms show that the model produces non-trivialvaried scaling, translation and rotation on all datasets. For different datasets, the learned transfor-mation have different patterns. We hypothesize that this is mainly determined by the configurationsof objects in the images. For example, on MNIST-ONE, all six parameters have some fluctuationssince the synthetic dataset contains digits randomly placed at different locations. 
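For reference, the six parameters analyzed in the histograms of Section 6.7 are simply the entries of the 2x3 affine matrix fed to the spatial transformer. One plausible way to assemble them is sketched below; the exact convention used in the released code is an assumption here.

    import torch

    def make_theta(sx, sy, tx, ty, rx, ry):
        # Assemble (N, 2, 3) affine matrices: scalings on the diagonal,
        # rotation/shear terms off-diagonal, translations in the last column.
        row0 = torch.stack([sx, rx, tx], dim=-1)
        row1 = torch.stack([ry, sy, ty], dim=-1)
        return torch.stack([row0, row1], dim=-2)

    # e.g. scale-only transforms at the minimal allowed scale of 1.2
    n = 4
    theta = make_theta(torch.full((n,), 1.2), torch.full((n,), 1.2),
                       torch.zeros(n), torch.zeros(n), torch.zeros(n), torch.zeros(n))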
For the other threedatasets, the scalings converge to single value since the object sizes do not vary much, and the vari-ations on rotation and translation suffice to generate realistic images. Specifically, we can find thegenerator largely relies on the translation on x coordinate for generating CUB-200. This makessense since birds in the images have similar scales, orientations but various horizontal locations. ForCIFAR-10, since there are 10 different object categories, the configurations are more diverse, hencethe generator uses all parameters for generation except for the scaling. For LFW, since faces havesimilar configurations, the learned transformations have less fluctuation as well. As a result, we cansee that LR-GAN indeed models the transformations on the foreground to generate images.6.8 C ONDITIONAL IMAGE GENERATIONConsidering our model can generate object-like masks (shapes) for images, we conducted an ex-periment to evaluate whether our model can be potentially used for image segmentation and objectdetection. We make some changes to the model. For the background generator, the input is a realimage instead of a random vector. Then the image is passed through an encoder to extract the hid-den features, which replaces the random vector z0and are fed to the background generator. For theforeground generator, we subtract the image generated by the background generator from the inputimage to obtain a residual image. Then this residual image is fed to the same encoder to get thehidden features, which are used as the input for foreground generator. In our conditional model,we want to reconstruct the image, so we add a reconstruction loss along with the adversarial loss.We train this conditional model on CIFAR-10. The (intermediate) outputs of the model is shownin Fig. 21. Interestingly, the model successfully learned to decompose the input images into back-ground and foreground. The background generator tends to do an image inpainting by generating acomplete background without object, while the foreground generator works as a segmentation modelto get object mask from the input image.Similarly, we also run the conditional LR-GAN on LFW dataset. As we can see in Fig. 22, the fore-ground generator automatically and consistently learned to generate the face regions, even thoughthere are large portion of background in the input images. In other words, the conditional LR-GANsuccessfully learned to detection faces in images. We suspect this success is due to that it has lowcost for the generator to generate similar images, and thus converge to the case that the first generatorgenerate background, and the second generator generate face images.Based on these experiments, we argue that our model can be possibly used for image segmentationand object detection in a generative and unsupervised manner. One future work would be verifyingthis by applying it to high-resolution and more complicate datasets.19Published as a conference paper at ICLR 2017Figure 20: Histograms of transformation parameters learnt in our model for different datasets. Fromleft to right, the datasets are: MNIST-ONE, CUB-200, CIFAR-10 and LFW. From top to bottom,they are scaling sx,sy, translation tx,ty, and rotation rx,ryinxandycoordinate, respectively.20Published as a conference paper at ICLR 2017Figure 21: Conditional generation results of our model on CIFAR-10. 
From left to right, the blocksare: real images, generated background images, foreground images, foreground masks, foregroundimages carved out by masks, carved foreground images after spatial transformation, and final com-posite (reconstructed) images.Figure 22: Conditional generation results of our model on LFW, displayed with the same layout toFig. 21.21
SJOMYqRNg
HJ1kmv9xx
ICLR.cc/2017/conference/-/paper373/official/review
{"title": "review", "rating": "6: Marginally above acceptance threshold", "review": "The paper presents an interesting framework for image generation, which stitches the foreground and background to form an image. This is obviously a reasonable approach there is clearly a foreground object. However, real world images are often quite complicated, which may contain multiple layers of composition, instead of a simple foreground-background layer. How would the proposed method deal with such situations?\n\nOverall, this is a reasonable work that approaches an important problem from a new angle. Yet, I think sizable efforts remain needed to make it a generic methodology. "}
review
2017
ICLR.cc/2017/conference
LR-GAN: Layered Recursive Generative Adversarial Networks for Image Generation
["Jianwei Yang", "Anitha Kannan", "Dhruv Batra", "Devi Parikh"]
We present LR-GAN: an adversarial image generation model which takes scene structure and context into account. Unlike previous generative adversarial networks (GANs), the proposed GAN learns to generate image background and foregrounds separately and recursively, and stitch the foregrounds on the background in a contextually relevant manner to produce a complete natural image. For each foreground, the model learns to generate its appearance, shape and pose. The whole model is unsupervised, and is trained in an end-to-end manner with conventional gradient descent methods. The experiments demonstrate that LR-GAN can generate more natural images with objects that are more human recognizable than baseline GANs.
["Computer vision", "Deep learning", "Unsupervised Learning"]
https://openreview.net/forum?id=HJ1kmv9xx
https://openreview.net/pdf?id=HJ1kmv9xx
https://openreview.net/forum?id=HJ1kmv9xx&noteId=SJOMYqRNg
Published as a conference paper at ICLR 2017LR-GAN: L AYERED RECURSIVE GENERATIVE AD-VERSARIAL NETWORKS FOR IMAGE GENERATIONJianwei YangVirginia TechBlacksburg, V Ajw2yang@vt.eduAnitha KannanFacebook AI ResearchMenlo Park, CAakannan@fb.comDhruv Batraand Devi ParikhGeorgia Institute of TechnologyAtlanta, GAfdbatra, parikh g@gatech.eduABSTRACTWe present LR-GAN: an adversarial image generation model which takes scenestructure and context into account. Unlike previous generative adversarial net-works (GANs), the proposed GAN learns to generate image background and fore-grounds separately and recursively, and stitch the foregrounds on the backgroundin a contextually relevant manner to produce a complete natural image. For eachforeground, the model learns to generate its appearance, shape and pose. Thewhole model is unsupervised, and is trained in an end-to-end manner with gra-dient descent methods. The experiments demonstrate that LR-GAN can generatemore natural images with objects that are more human recognizable than DCGAN.1 I NTRODUCTIONGenerative adversarial networks (GANs) (Goodfellow et al., 2014) have shown significant promiseas generative models for natural images. A flurry of recent work has proposed improvements overthe original GAN work for image generation (Radford et al., 2015; Denton et al., 2015; Salimanset al., 2016; Chen et al., 2016; Zhu et al., 2016; Zhao et al., 2016), multi-stage image generationincluding part-based models (Im et al., 2016; Kwak & Zhang, 2016), image generation conditionedon input text or attributes (Mansimov et al., 2015; Reed et al., 2016b;a), image generation based on3D structure (Wang & Gupta, 2016), and even video generation (V ondrick et al., 2016).While the holistic ‘gist’ of images generated by these approaches is beginning to look natural, thereis clearly a long way to go. For instance, the foreground objects in these images tend to be deformed,blended into the background, and not look realistic or recognizable.One fundamental limitation of these methods is that they attempt to generate images without takinginto account that images are 2D projections of a 3D visual world, which has a lot of structures in it.This manifests as structure in the 2D images that capture this world. One example of this structureis that images tend to have a background, and foreground objects are placed in this background incontextually relevant ways.We develop a GAN model that explicitly encodes this structure. Our proposed model generates im-ages in a recursive fashion: it first generates a background, and then conditioned on the backgroundgenerates a foreground along with a shape (mask) and a pose (affine transformation) that togetherdefine how the background and foreground should be composed to obtain a complete image. Condi-tioned on this composite image, a second foreground and an associated shape and pose are generated,and so on. As a byproduct in the course of recursive image generation, our approach generates someobject-shape foreground-background masks in a completely unsupervised way, without access toanyobject masks for training. Note that decomposing a scene into foreground-background layers isa classical ill-posed problem in computer vision. By explicitly factorizing appearance and transfor-mation, LR-GAN encodes natural priors about the images that the same foreground can be ‘pasted’to the different backgrounds, under different affine transformations. 
According to the experiments,the absence of these priors result in degenerate foreground-background decompositions, and thusalso degenerate final composite images.Work was done while visiting Facebook AI Research.1Published as a conference paper at ICLR 2017Figure 1: Generation results of our model on CUB-200 (Welinder et al., 2010). It generates imagesin two timesteps. At the first timestep, it generates background images, while generates foregroundimages, masks and transformations at the second timestep. Then, they are composed to obtain thefinal images. From top left to bottom right (row major), the blocks are real images, generatedbackground images, foreground images, foreground masks, carved foreground images, carved andtransformed foreground images, final composite images, and their nearest neighbor real images inthe training set. Note that the model is trained in a completely unsupervised manner.We mainly evaluate our approach on four datasets: MNIST-ONE (one digit) and MNIST-TWO (twodigits) synthesized from MNIST (LeCun et al., 1998), CIFAR-10 (Krizhevsky & Hinton, 2009) andCUB-200 (Welinder et al., 2010). We show qualitatively (via samples) and quantitatively (via evalu-ation metrics and human studies on Amazon Mechanical Turk) that LR-GAN generates images thatglobally look natural andcontain clear background and object structures in them that are realisticand recognizable by humans as semantic entities. An experimental snapshot on CUB-200 is shownin Fig. 1. We also find that LR-GAN generates foreground objects that are contextually relevant tothe backgrounds (e.g., horses on grass, airplanes in skies, ships in water, cars on streets, etc.). Forquantitative comparison, besides existing metrics in the literature, we propose two new quantitativemetrics to evaluate the quality of generated images. The proposed metrics are derived from the suffi-cient conditions for the closeness between generated image distribution and real image distribution,and thus supplement existing metrics.2 R ELATED WORKEarly work in parametric texture synthesis was based on a set of hand-crafted features (Portilla &Simoncelli, 2000). Recent improvements in image generation using deep neural networks mainlyfall into one of the two stochastic models: variational autoencoders (V AEs) (Kingma et al., 2016)and generative adversarial networks (GANs) (Goodfellow et al., 2014). V AEs pair a top-down prob-abilistic generative network with a bottom up recognition network for amortized probabilistic infer-ence. Two networks are jointly trained to maximize a variational lower bound on the data likelihood.GANs consist of a generator and a discriminator in a minmax game with the generator aiming tofool the discriminator with its samples with the latter aiming to not get fooled.Sequential models have been pivotal for improved image generation using variational autoencoders:DRAW (Gregor et al., 2015) uses attention based recurrence conditioning on the canvas drawn sofar. In Eslami et al. (2016), a recurrent generative model that draws one object at a time to thecanvas was used as the decoder in V AE. These methods are yet to show scalability to natural images.Early compelling results using GANs used sequential coarse-to-fine multiscale generation and class-conditioning (Denton et al., 2015). Since then, improved training schemes (Salimans et al., 2016)and better convolutional structure (Radford et al., 2015) have improved the generation results using2Published as a conference paper at ICLR 2017GANs. 
PixelRNN (van den Oord et al., 2016) is also recently proposed to sequentially generates apixel at a time, along the two spatial dimensions.In this paper, we combine the merits of sequential generation with the flexibility of GANs. Ourmodel for sequential generation imbibes a recursive structure that more naturally mimics imagecomposition by inferring three components: appearance, shape, and pose. One closely related workcombining recursive structure with GAN is that of Im et al. (2016) but it does not explicitly modelobject composition and follows a similar paradigm as by Gregor et al. (2015). Another closely re-lated work is that of Kwak & Zhang (2016). It combines recursive structure and alpha blending.However, our work differs in three main ways: (1) We explicitly use a generator for modeling theforeground poses. That provides significant advantage for natural images, as shown by our ablationstudies; (2) Our shape generator is separate from the appearance generator. This factored repre-sentation allows more flexibility in the generated scenes; (3) Our recursive framework generatessubsequent objects conditioned on the current and previous hidden vectors, andpreviously gener-ated object. This allows for explicit contextual modeling among generated elements in the scene.See Fig. 17 for contextually relevant foregrounds generated for the same background, or Fig. 6 formeaningful placement of two MNIST digits relative to each.Models that provide supervision to image generation using conditioning variables have also beenproposed: Style/Structure GANs (Wang & Gupta, 2016) learns separate generative models for styleand structure that are then composed to obtain final images. In Reed et al. (2016a), GAN basedimage generation is conditioned on text and the region in the image where the text manifests, spec-ified during training via keypoints or bounding boxes. While not the focus of our work, the modelproposed in this paper can be easily extended to take into account these forms of supervision.3 P RELIMINARIES3.1 G ENERATIVE ADVERSARIAL NETWORKSGenerative Adversarial Networks (GANs) consist of a generator Gand a discriminator Dthat aresimultaneously trained with competing goals: The generator Gis trained to generate samples thatcan ‘fool’ a discriminator D, while the discriminator is trained to classify its inputs as either real(coming from the training dataset) or fake (coming from the samples of G). This competition leadsto a minmax formulation with a value function:minGmaxDExpdata (x)[log(D(x;D))] + E zpz(z)[log(1D(G(z;G);D))]; (1)where zis a random vector from a standard multivariate Gaussian or a uniform distribution pz(z),G(z;G)mapszto the data space, D(x)is the probability that xis real estimated by D. Theadvantage of the GANs formulation is that it lacks an explicit loss function and instead uses thediscriminator to optimize the generative model. The discriminator, in turn, only cares whether thesample it receives is on the data manifold, and not whether it exactly matches a particular trainingexample (as opposed to losses such as MSE). Hence, the discriminator provides a gradient signalonly when the generated samples do not lie on the data manifold so that the generator can readjustits parameters accordingly. 
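As a concrete reference for the value function in Eqn. (1), a minimal alternating update with the common non-saturating generator loss might look as follows. G, D and their optimizers are assumed to exist, and D is assumed to output a probability of shape (N, 1); this is a sketch of the standard GAN step, not the paper's training code.

    import torch
    import torch.nn.functional as F

    def gan_step(G, D, opt_g, opt_d, x_real, z_dim=100):
        n = x_real.size(0)
        # discriminator: maximize log D(x) + log(1 - D(G(z)))
        z = torch.randn(n, z_dim)
        x_fake = G(z).detach()
        d_loss = F.binary_cross_entropy(D(x_real), torch.ones(n, 1)) + \
                 F.binary_cross_entropy(D(x_fake), torch.zeros(n, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # generator: maximize log D(G(z)) (non-saturating heuristic)
        z = torch.randn(n, z_dim)
        g_loss = F.binary_cross_entropy(D(G(z)), torch.ones(n, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        return d_loss.item(), g_loss.item()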
This form of training enables learning the data manifold of the trainingset and not just optimizing to reconstruct the dataset, as in autoencoder and its variants.While the GANs framework is largely agnostic to the choice of GandD, it is clear that generativemodels with the ‘right’ inductive biases will be more effective in learning from the gradient infor-mation (Denton et al., 2015; Im et al., 2016; Gregor et al., 2015; Reed et al., 2016a; Yan et al., 2015).With this motivation, we propose a generator that models image generation via a recurrent process– in each time step of the recurrence, an object with its own appearance and shape is generated andwarped according to a generated pose to compose an image in layers.3.2 L AYERED STRUCTURE OF IMAGEAn image taken of our 3D world typically contains a layered structure. One way of representing animage layer is by its appearance and shape. As an example, an image xwith two layers, foregroundfand background bmay be factorized as:x=fm+b(1m); (2)3Published as a conference paper at ICLR 2017where mis the mask depicting the shapes of image layers, and the element wise multiplicationoperator. Some existing methods assume the access to the shape of the object either during training(Isola & Liu, 2013) or both at train and test time (Reed et al., 2016a; Yan et al., 2015). Representingimages in layered structure is even straightforward for video with moving objects (Darrell & Pent-land, 1991; Wang & Adelson, 1994; Kannan et al., 2005). V ondrick et al. (2016) generates videosby separately generating a fixed background and moving foregrounds. A similar way of generatingsingle image can be found in Kwak & Zhang (2016).Another way is modeling the layered structure with object appearance and pose as:x=ST(f;a) +b; (3)where fandbare foreground and background, respectively; ais the affine transformation; STisthe spatial transformation operator. Several works fall into this group (Roux et al., 2011; Huang &Murphy, 2015; Eslami et al., 2016). In Huang & Murphy (2015), images are decomposed into layersof objects with specific poses in a variational autoencoder framework, while the number of objects(i.e., layers) is adaptively estimated in Eslami et al. (2016).To contrast with these works, LR-GAN uses a layered composition, and the foreground layers si-multaneously model all three dominant factors of variation: appearance f, shape mand pose a. Wewill elaborate it in the following section.4 L AYERED RECURSIVE GAN (LR-GAN)The basic structure of LR-GAN is similar to GAN: it consists of a discriminator and a generator thatare simultaneously trained using the minmax formulation of GAN, as described in x.3.1. The keyinnovation of our work is the layered recursive generator, which is what we describe in this section.The generator in LR-GAN is recursive in that the image is constructed recursively using a recurrentnetwork. Layered in that each recursive step composes an object layer that is ‘pasted’ on the imagegenerated so far. Object layer at timestep tis parameterized by the following three constituents –‘canonical’ appearance ft, shape (or mask) mt, and pose (or affine transformation) atfor warpingthe object before pasting in the image composition.Fig. 2 shows the architecture of the LR-GAN with the generator architecture unrolled for generatingbackground x0(:=xb) and foreground x1andx2. 
At each time step t, the generator composes thenext image xtvia the following recursive computation:xt=ST(mt;at)|{z}affine transformed maskST(ft;at)|{z}affine transformed appearance+ (1ST(mt;at))xt1|{z}pasting on image composed so far;8t2[1;T](4)whereST(;at)is a spatial transformation operator that outputs the affine transformed version ofwithatindicating parameters of the affine transformation.Since our proposed model has an explicit transformation variable atthat is used to warp the object,it can learn a canonical object representation that can be re-used to generate scenes where the ob-ject occurs as mere transformations of it, such as different scales or rotations. By factorizing theappearance, shape and pose, the object generator can focus on separately capturing regularities inthese three factors that constitute an object. We will demonstrate in our experiments that removingthese factorizations from the model leads to its spending capacity in variability that may not solelybe about the object in Section 5.5 and 5.6.4.1 D ETAILS OF GENERATOR ARCHITECTUREFig. 2 shows our LR-GAN architecture in detail – we use different shapes to indicate different kindsof layers (convolutional, fractional convolutional, (non)linear, etc), as indicated by the legend. Ourmodel consists of two main pieces – a background generator Gband a foreground generator Gf.GbandGfdo not share parameters with each other. Gbcomputation happens only once, while Gfisrecurrent over time, i.e., all object generators share the same parameters. In the following, we willintroduce each module and connections between them.Temporal Connections . LR-GAN has two kinds of temporal connections – informally speaking,one on ‘top’ and one on ‘bottom’. The ‘top’ connections perform the act of sequentially ‘pasting’4Published as a conference paper at ICLR 2017G"#G$LSTMLSTMG"%G"&G'T"G"#LSTMG"%G"&G'T"DP"#E"#E",CCSSFractionalConvolutionalLayersConvolutionalLayers(Non)linearembeddinglayersandothersx$f12f2m2m42m45f15f5m5SpatialSamplerCompositorx2x6Realsamplez8z2z5Figure 2: LR-GAN architecture unfolded to three timesteps. It mainly consists of one backgroundgenerator, one foreground generator, temporal connections and one discriminator. The meaning ofeach component is explained in the legend.object layers (Eqn. 4). The ‘bottom’ connections are constructed by a LSTM on the noise vectorsz0;z1;z2. Intuitively, this noise-vector-LSTM provides information to the foreground generatorabout what else has been generated in past. Besides, when generating multiple objects, we use apooling layer Pcfand a fully-connected layer Ecfto extract the information from previous generatedobject response map. By this way, the model is able to ‘see’ previously generated objects.Background Generator . The background generator Gbis purposely kept simple. It takes the hiddenstate of noise-vector-LSTM h0las the input and passes it to a number of fractional convolutionallayers (also called ‘deconvolution’ layer in some papers) to generate images at its end. The outputof background generator xbwill be used as the canvas for the following generated foregrounds.Foreground Generator . The foreground generator Gfis used to generate an object with appearanceand shape. Correspondingly, Gfconsists of three sub-modules, Gcf, which is a common ‘trunk’whose outputs are shared by GifandGmf.Gifis used to generate the foreground appearance ft,whileGmfgenerates the mask mtfor the foreground. 
All three sub-modules consist of one or more fractional convolutional layers combined with batch-normalization and nonlinear layers. The generated foreground appearance and mask have the same spatial size as the background. The top of $G^m_f$ is a sigmoid layer in order to generate a one-channel mask whose values range in $(0, 1)$.

Spatial Transformer. To spatially transform foreground objects, we need to estimate the transformation matrix. As in Jaderberg et al. (2015), we predict the affine transformation matrix with a linear layer $T_f$ that has six-dimensional outputs. Then, based on the predicted transformation matrix, we use a grid generator $G_g$ to generate the corresponding sampling coordinates in the input for each location of the output. The generated foreground appearance and mask share the same transformation matrix, and thus the same sampling grid. Given the grid, the sampler $S$ will simultaneously sample $f_t$ and $m_t$ to obtain $\hat{f}_t$ and $\hat{m}_t$, respectively. Different from Jaderberg et al. (2015), our sampler here normally performs downsampling, since the foreground typically has a smaller size than the background. Pixels in $\hat{f}_t$ and $\hat{m}_t$ that come from outside the extent of $f_t$ and $m_t$ are set to zero. Finally, $\hat{f}_t$ and $\hat{m}_t$ are sent to the compositor $C$, which combines the canvas $x_{t-1}$ and $\hat{f}_t$ through layered composition with blending weights given by $\hat{m}_t$ (Eqn. 4).

Pseudo-code for our approach and detailed model configuration are provided in the Appendix.

4.2 NEW EVALUATION METRICS

Several metrics have been proposed to evaluate GANs, such as the Gaussian Parzen window (Goodfellow et al., 2014), the Generative Adversarial Metric (GAM) (Im et al., 2016) and the Inception Score (Salimans et al., 2016). The common goal is to measure the similarity between the generated data distribution $P_g(x)$, i.e., the distribution of $x = G(z; \theta_G)$, and the real data distribution $P(x)$. Most recently, the Inception Score has been used in several works (Salimans et al., 2016; Zhao et al., 2016). However, it is an asymmetric metric and could be easily fooled by generating centers of data modes.

In addition to these metrics, we present two new metrics based on the following intuition – a sufficient (but not necessary) condition for the closeness of $P_g(x)$ and $P(x)$ is the closeness of $P_g(x|y)$ and $P(x|y)$, i.e., the distributions of generated data and real data conditioned on all possible variables of interest $y$, e.g., category label. One way to obtain this variable of interest $y$ is via human annotation. Specifically, given the data sampled from $P_g(x)$ and $P(x)$, we ask people to label the category of the samples according to some rules. Note that such human annotation is often easier than comparing samples from the two distributions (e.g., because there is no 1:1 correspondence between samples to conduct forced-choice tests).

After the annotations, we need to verify whether the two distributions are similar in each category. Clearly, directly comparing the distributions $P_g(x|y)$ and $P(x|y)$ may be as difficult as comparing $P_g(x)$ and $P(x)$. Fortunately, we can use Bayes' rule and alternatively compare $P_g(y|x)$ and $P(y|x)$, which is a much easier task. In this case, we can simply train a discriminative model on the samples from $P_g(x)$ and $P(x)$ together with the human annotations about the categories of these samples. With a slight abuse of notation, we use $P_g(y|x)$ and $P(y|x)$ to denote the probability outputs from these two classifiers (trained on generated samples vs. trained on real samples).
We can thenuse these two classifiers to compute the following two evaluation metrics:Adversarial Accuracy: Computes the classification accuracies achieved by these two classifiers ona validation set, which can be the training set or another set of real images sampled from P(x). IfPg(x)is close toP(x), we expect to see similar accuracies.Adversarial Divergence: Computes the KL divergence between Pg(yjx)andP(yjx). The lowerthe adversarial divergence, the closer two distributions are. The low bound for this metric is exactlyzero, which means Pg(yjx) =P(yjx)for all samples in the validation set.As discussed above, we need human efforts to label the real and generated samples. Fortunately, wecan further simplify this. Based on the labels given on training data, we split the training data intocategories, and train one generator for each category. With all these generators, we generate samplesof all categories. This strategy will be used in our experiments on the datasets with labels given.5 E XPERIMENTWe conduct qualitative and quantitative evaluations on three datasets: 1) MNIST (LeCun et al.,1998); 2) CIFAR-10 (Krizhevsky & Hinton, 2009); 3) CUB-200 (Welinder et al., 2010). To addvariability to the MNIST images, we randomly scale (factor of 0.8 to 1.2) and rotate ( 4to4) thedigits and then stitch them to 4848uniform backgrounds with random grayscale value between[0, 200]. Images are then rescaled back to 3232. Each image thus has a different backgroundgrayscale value and a different transformed digit as foreground. We rename this sythensized datasetasMNIST-ONE (single digit on a gray background). We also synthesize a dataset MNIST-TWOcontaining two digits on a grayscale background. We randomly select two images of digits andperform similar transformations as described above, and put one on the left and the other on theright side of a 7878gray background. We resize the whole image to 6464.We develop LR-GAN based on open source code1. We assume the number of objects is known.Therefore, for MNIST-ONE, MNIST-TWO, CIFAR-10, and CUB-200, our model has two, three,two, and two timesteps, respectively. Since the size of foreground object should be smaller thanthat of canvas, we set the minimal allowed scale2in affine transforamtion to be 1.2 for all datasetsexcept for MNIST-TWO, which is set to 2 (objects are smaller in MNIST-TWO). In LR-GAN, the1https://github.com/soumith/dcgan.torch2Scale corresponds to the size of the target canvas with respect to the object – the larger the scale, the largerthe canvas, and the smaller the relative size of the object in the canvas. 1 means the same size as the canvas.6Published as a conference paper at ICLR 2017Figure 3: Generated images on CIFAR-10 based on our model.Figure 4: Generated images on CUB-200 based on our model.background generator and foreground generator have similar architectures. One difference is thatthe number of channels in the background generator is half of the one in the foreground generator.We compare our results to that of DCGAN (Radford et al., 2015). Note that LR-GAN withoutLSTM at the first timestep corresponds exactly to the DCGAN. This allows us to run controlledexperiments. In both generator and discriminator, all the (fractional) convolutional layers have 44filter size with stride 2. As a result, the number of layers in the generator and discriminatorautomatically adapt to the size of training images. Please see the Appendix (Section 6.2) for detailsabout the configurations. 
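Given per-sample class probabilities from the two classifiers, the two proposed metrics might be computed along the following lines. The direction of the KL divergence and the averaging over the validation set are assumptions on my part; the text does not spell them out.

```python
import numpy as np

def adversarial_accuracy(probs_real_clf, probs_gen_clf, labels):
    """Accuracy on a labeled validation set for the classifier trained on real
    samples (P(y|x)) and the one trained on generated samples (P_g(y|x)).
    probs_*: (N, K) softmax outputs; labels: (N,) ground-truth class ids."""
    acc_real = float(np.mean(probs_real_clf.argmax(1) == labels))
    acc_gen = float(np.mean(probs_gen_clf.argmax(1) == labels))
    return acc_real, acc_gen   # similar numbers suggest P_g(x) is close to P(x)

def adversarial_divergence(probs_real_clf, probs_gen_clf, eps=1e-12):
    """Mean KL(P(y|x) || P_g(y|x)) over the validation set (direction and
    averaging are assumed); zero means the two conditionals agree everywhere."""
    p = np.clip(probs_real_clf, eps, 1.0)
    q = np.clip(probs_gen_clf, eps, 1.0)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=1)))
```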
We use three metrics for quantitative evaluation, including Inception Score(Salimans et al., 2016) and the proposed Adversarial Accuracy, Adversarial Divergence. Note thatwe report two versions of Inception Score. One is based on the pre-trained Inception net, and theother one is based on the pre-trained classifier on the target datasets.5.1 Q UALITATIVE RESULTSIn Fig. 3 and 4, we show the generated samples for CIFAR-10 and CUB-200, respectively. MNISTresults are shown in the next subsection. As we can see from the images, the compositional natureof our model results in the images being free of blending artifacts between backgrounds and fore-grounds. For CIFAR-10, we can see the horses and cars with clear shapes. For CUB-200, the birdshapes tend to be even sharper.5.2 MNIST-ONE AND MNIST-TWOWe now report the results on MNIST-ONE and MNIST-TWO. Fig. 5 shows the generation results ofour model on MNIST-ONE. As we can see, our model generates the background and the foregroundin separate timestep, and can disentagle the foreground digits from background nearly perfectly.Though initial values of the mask randomly distribute in the range of (0, 1), after training, the masksare nearly binary and accurately carve out the digits from the generated foreground. More results onMNIST-ONE (including human studies) can be found in the Appendix (Section 6.3).Fig. 6 shows the generation results for MNIST-TWO. Similarly, the model is also able to generatebackground and the two foreground objects separately. The foreground generator tends to generatea single digit at each timestep. Meanwhile, it captures the context information from the previoustime steps. When the first digit is placed to the left side, the second one tends to be placed on theright side, and vice versa.7Published as a conference paper at ICLR 2017Figure 5: Generation results of our model on MNIST-ONE. From left to right, the image blocks arereal images, generated background images, generated foreground images, generated masks and finalcomposite images, respectively.Figure 6: Generation results of our model on MNIST-TWO. From top left to bottom right (rowmajor), the image blocks are real images, generated background images, foreground images andmasks at the second timestep, composite images at the second time step, generated foregroundimages and masks at the third timestep and the final composite images, respectively.5.3 CUB-200We study the effectiveness of our model trained on the CUB-200 bird dataset. In Fig. 1, we haveshown a random set of generated images, along with the intermediate generation results of the model.While being completely unsupervised , the model, for a large fraction of the samples, is able toFigure 7: Matched pairs of generated images based on DCGAN and LR-GAN, respectivedly. Theodd columns are generated by DCGAN, and the even columns are generated by LR-GAN. Theseare paired according to the perfect matching based on Hungarian algorithm.8Published as a conference paper at ICLR 2017Figure 8: Qualitative comparison on CIFAR-10. Top three rows are images generated by DCGAN;Bottom three rows are by LR-GAN. From left to right, the blocks display generated images withincreasing quality level as determined by human studies.successfully disentangle the foreground and the background. This is evident from the generatedbird-like masks.We do a comparative study based on Amazon Mechanical Turk (AMT) between DCGAN and LR-GAN to quantify relative visual quality of the generated images. 
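For reference, the Inception Score reported above (in both its standard and "contextual" variants) follows Salimans et al. (2016): the exponential of the average KL divergence between the conditional label distribution p(y|x) and the marginal p(y). A hedged sketch, with the usual 10-split averaging assumed:

```python
import numpy as np

def inception_score(probs, n_splits=10, eps=1e-12):
    """probs: (N, K) class probabilities p(y|x) from some classifier
    (the Inception net, or a classifier trained on the target dataset).
    Returns mean and std of exp(E_x KL(p(y|x) || p(y))) over n_splits splits."""
    scores = []
    for chunk in np.array_split(probs, n_splits):
        p_y = chunk.mean(axis=0, keepdims=True)            # marginal label distribution
        kl = np.sum(chunk * (np.log(chunk + eps) - np.log(p_y + eps)), axis=1)
        scores.append(np.exp(kl.mean()))
    return float(np.mean(scores)), float(np.std(scores))
```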
We first generated 1000 samplesfrom both the models. Then, we performed perfect matching between the two image sets usingthe Hungarian algorithm on L2norm distance in the pixel space. This resulted in 1000 imagepairs. Some examplar pairs are shown in Fig. 7. For each image pair, 9 judges are asked to choosethe one that is more realistic. Based on majority voting, we find that our generated images areselected 68.4% times, compared with 31.6% times for DCGAN. This demonstrates that our modelhas generated more realistic images than DCGAN. We can attribute this difference to our model’sability to generate foreground separately from the background, enabling stronger edge cues.5.4 CIFAR-10We now qualitatively and quantitatively evaluate our model on CIFAR-10, which contains multipleobject categories and also various backgrounds.Comparison of image generation quality: We conduct AMT studies to compare the fidelity ofimage generation. Towards this goal, we generate 1000 images from DCGAN and LR-GAN, re-spectively. We ask 5 judges to label each image to either belong to one of the 10 categories or as‘non recognizable’ or ‘recognizable but not belonging to the listed categories’. We then assign eachimage a quality level between [0,5] that captures the number of judges that agree with the majoritychoice. Fig. 8 shows the images generated by both approaches, ordered by increasing quality level.We merge images at quality level 0 (all judges said non-recognizable) and 1 together, and similarlyimages at level 4 and 5. Visually, the generated samples by our model have clearer boundaries andobject structures. We also computed the fraction of non-recognizable images: Our model had a 10%absolute drop in non-recognizability rate (67.3% for ours vs. 77.7% for DCGAN). For reference,11.4% of real CIFAR images were categorized as non-recognizable. Fig. 9 shows more generated(intermediate) results of our model.Quantitative evaluation on generators: We evaluate the generators based on three metrics: 1)Inception Score; 2) Adversarial Accuracy; 3) Adversarial Divergence. To obtain a classifier modelfor evaluation, we remove the top layer in the discriminator used in our model, and then appendtwo fully connected layers on the top of it. We train this classifier using the training samples ofCIFAR-10 based on the annotations. Following Salimans et al. (2016), we generated 50,000 imagesTable 1: Quantitative comparison between DCGAN and LR-GAN on CIFAR-10.Training Data Real Images DCGAN OursInception Scorey11.180.18 6.64 0.14 7.17 0.07Inception Scoreyy7.230.09 5.69 0.07 6.11 0.06Adversarial Accuracy 83.33 0.08 37.81 0.02 44.22 0.08Adversarial Divergence 0 7.58 0.04 5.57 0.06yEvaluate using the pre-trained Inception net as Salimans et al. (2016)yyEvaluate using the supervisedly trained classifier based on the discriminator in LR-GAN.9Published as a conference paper at ICLR 2017Figure 9: Generation results of our model on CIFAR-10. From left to right, the blocks are: gener-ated background images, foreground images, foreground masks, foreground images carved out bymasks, carved foregrounds after spatial transformation, final composite images and nearest neighbortraining images to the generated images.Figure 10: Category specific generation results of our model on CIFAR-10 categories of horse, frog,and cat (top to bottom). 
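The perfect matching used to build the AMT image pairs could be reproduced with SciPy's Hungarian solver; the function below is a sketch under the assumption that both sets have equal size and that raw pixel-space L2 distance is the cost.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_sample_sets(images_a, images_b):
    """Pair two equally sized sets of images (N, H, W, C) by minimizing total
    L2 distance in pixel space with the Hungarian algorithm."""
    a = images_a.reshape(len(images_a), -1).astype(np.float64)
    b = images_b.reshape(len(images_b), -1).astype(np.float64)
    # Pairwise squared L2 distances via ||a||^2 + ||b||^2 - 2 a.b
    cost = (a * a).sum(1)[:, None] + (b * b).sum(1)[None, :] - 2.0 * a @ b.T
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))   # index pairs shown side by side to judges
```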
The blocks from left to right are: generated background images, foregroundimages, foreground masks, foreground images carved out by masks, carved foregrounds after spatialtransformation and final composite images.based on DCGAN and LR-GAN, repsectively. We compute two types of Inception Scores. Thestandard Inception Score is based on the Inception net as in Salimans et al. (2016), and the contex-tual Inception Score is based on our trained classifier model. To distinguish, we denote the standardone as ‘Inception Scorey’, and the contextual one as ‘Inception Scoreyy’. To obtain the AdversarialAccuracy and Adversarial Divergence scores, we train one generator on each of 10 categories forDCGAN and LR-GAN, respectively. Then, we use these generators to generate samples of differentcategories. Given these generated samples, we train the classifiers for DCGAN and LR-GAN sepa-rately. Along with the classifier trained on the real samples, we compute the Adversarial Accuracy10Published as a conference paper at ICLR 2017and Adversarial Divergence on the real training samples. In Table 1, we report the Inception Scores,Adversarial Accuracy and Adversarial Divergence for comparison. We can see that our model out-performs DCGAN across the board. To point out, we obtan different Inception Scores based ondifferent classifier models, which indicates that the Inception Score varies with different models.Quantitative evaluation on discriminators: We evaluate the discriminator as an extractor for deeprepresentations. Specifically, we use the output of the last convolutional layer in the discriminatoras features. We perform a 1-NN classification on the test set given the full training set. Cosinesimilarity is used as the metric. On the test set, our model achieves 62.09% 0.01% compared toDCGAN’s 56.05% 0.02%.Contextual generation: We also show the efficacy of our approach to generate diverse foregroundsconditioned on fixed background. The results in Fig. 17 in Appendix showcase that the foregroundgenerator generates objects that are compatible with the background. This indicates that the modelhas captured contextual dependencies between the image layers.Category specific models: The objects in CIFAR-10 exhibit huge variability in shapes. That canpartly explain why some of the generated shapes are not as compelling in Fig. 9. To test this hy-pothesis, we reuse the generators trained for each of 10 categories used in our metrics to obtain thegeneration results. Fig. 10 shows results for categories ‘horse’, ‘frog’ and ‘cat’. We can see that themodel is now able to generate object-specific appearances and shapes, similar in vein to our resultson the CUB-200 dataset.5.5 I MPORTANCE OF TRANSFORMATIONSFigure 11: Generation results from an ablated LR-GAN model without affine transformations. Fromtop to bottom, the block rows correspond to different datasets: MNIST-ONE, CUB-200, CIFAR-10.From left to right, the blocks show generated background images, foreground images, foregroundmasks, and final composite images. For comparison, the rightmost column block shows final gener-ated images from a non-ablated model with affine transformations.Fig. 11 shows results from an ablated model without affine transformations in the foreground layers,and compares the results with the full model that does include these transformations. 
We note thatone significant problem emerges that the decompositions are degenerate, in the sense that the modelis unable to break the symmetry between foreground and background layers, often generating objectappearances in the model’s background layer and vice versa. For CUB-200, the final generated im-ages have some blendings between foregrounds and backgrounds. This is particularly the case for11Published as a conference paper at ICLR 2017Figure 12: Generation results from an ablated LR-GAN model without mask generator. The blockrows correspond to different datasets (from top to bottom: MNIST-ONE, CUB-200, CIFAR-10).From left to right, the blocks show generated background images, foreground images, transformedforeground images, and final composite images. For comparison, the rightmost column block showsfinal generated images from a non-ablated model with mask generator.those images without bird-shape masks. For CIFAR-10, a number of generated masks are inverted.In this case, the background images are carved out as the foreground objects. The foreground gener-ator takes almost all the duty to generate the final images, which make it harder to generate imagesas clear as the model with transformation. From these comparisons, we qualitatively demonstratethe importance of modeling transformations in the foreground generation process. Another merit ofusing transformation is that the intermediate outputs of the model are more interpretable and faciliateto the downstreaming tasks, such as scene paring, which is demonstrated in Section 6.8.5.6 I MPORTANCE OF SHAPESWe perform another ablation study by removing the mask generator to understand the importanceof modeling object shapes. In this case, the generated foreground is simply pasted on top of thegenerated background after being transformed. There is no alpha blending between the foregroundsand backgrounds. The generation results for three datasets, MNIST-ONE, CUB-200, CIFAR-10 areshown in Fig. 12. As we can see, though the model works well for the generation of MNIST-ONE, itfails to generate reasonable images across the other datasets. Particularly, the training does not evenconverge for CUB-200. Based on these results, we qualitatively demonstrate that mask generator inour model is fairly important to obtain plausible results, especially for realistic images.REFERENCESXi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Info-gan: Interpretable representation learning by information maximizing generative adversarial nets.arXiv preprint arXiv:1606.03657 , 2016.Trevor Darrell and Alex Pentland. Robust estimation of a multi-layered motion representation. IEEEWorkshop on Visual Motion , 1991.12Published as a conference paper at ICLR 2017Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using alaplacian pyramid of adversarial networks. In Advances in neural information processing systems ,pp. 1486–1494, 2015.S. M. Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, Koray Kavukcuoglu, and Geof-frey E. Hinton. Attend, infer, repeat: Fast scene understanding with generative models. CoRR ,abs/1603.08575, 2016.Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Infor-mation Processing Systems , pp. 2672–2680, 2014.Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. Draw: Arecurrent neural network for image generation. 
arXiv preprint arXiv:1502.046239 , 2015.Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled faces in the wild:A database for studying face recognition in unconstrained environments. Technical Report 07-49,University of Massachusetts, Amherst, October 2007.Jonathan Huang and Kevin Murphy. Efficient inference in occlusion-aware generative models ofimages. CoRR , abs/1511.06362, 2015.Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating imageswith recurrent adversarial networks. arXiv preprint arXiv:1602.05110 , 2016.Phillip Isola and Ce Liu. Scene collaging: Analysis and synthesis of natural images with semanticlayers. In IEEE International Conference on Computer Vision , pp. 3048–3055, 2013.Max Jaderberg, Karen Simonyan, Andrew Zisserman, and koray kavukcuoglu. Spatial transformernetworks. In Advances in Neural Information Processing Systems 28 , pp. 2017–2025, 2015.Anitha Kannan, Nebojsa Jojic, and Brendan Frey. Generative model for layers of appearance anddeformation. AISTATS , 2005.Diederik P Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverseautoregressive flow. arXiv preprint arXiv:1606.04934 , 2016.Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.Hanock Kwak and Byoung-Tak Zhang. Generating images part by part with composite generativeadversarial networks. arXiv preprint arXiv:1607.05387 , 2016.Yann LeCun, L ́eon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied todocument recognition. Proceedings of the IEEE , 86(11):2278–2324, 1998.Elman Mansimov, Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Generating imagesfrom captions with attention. arXiv preprint arXiv:1511.02793 , 2015.Javier Portilla and Eero P Simoncelli. A parametric texture model based on joint statistics of com-plex wavelet coefficients. International journal of computer vision , 40(1):49–70, 2000.Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deepconvolutional generative adversarial networks. arXiv preprint arXiv:1511.06434 , 2015.Scott Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, and Honglak Lee. Learn-ing what and where to draw. arXiv preprint arXiv:1610.02454 , 2016a.Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee.Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396 , 2016b.Nicolas Le Roux, Nicolas Heess, Jamie Shotton, and John Winn. Learning a generative model ofimages by factoring appearance and shape. Neural Computation , 23:593–650, 2011.Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.Improved techniques for training gans. arXiv preprint arXiv:1606.03498 , 2016.13Published as a conference paper at ICLR 2017A ̈aron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks.CoRR , abs/1601.06759, 2016.Carl V ondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics.arXiv preprint arXiv:1609.02612 , 2016.John Wang and Edward Adelson. Representing moving images with layers. IEEE Transactions onImage Processing , 1994.Xiaolong Wang and Abhinav Gupta. Generative image modeling using style and structure adversar-ial networks. arXiv preprint arXiv:1603.05631 , 2016.P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSDBirds 200. 
Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. Attribute2image: Conditional imagegeneration from visual attributes. CoRR , abs/1512.00570, 2015.Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network.arXiv preprint arXiv:1609.03126 , 2016.Jun-Yan Zhu, Philipp Kr ̈ahenb ̈uhl, Eli Shechtman, and Alexei A Efros. Generative visual manipu-lation on the natural image manifold. In European Conference on Computer Vision , pp. 597–613.Springer, 2016.6 A PPENDIX6.1 A LGORITHMAlgo. 1 illustrates the generative process in our model. g(?)evaluates the function gat?.is acomposition operator that composes its operands so that fg(?) =f(g(?)).Algorithm 1 Stochastic Layered Recursive Image Generation1:z0N(0;I)2:x0=Gb(z0) .background generator3:h0l 04:c0l 05:fort2[1T]do6: ztN(0;I)7: htl,ctl LSTM([ zt,ht1l,ct1l]) .pass through LSTM8: ift = 1 then9: yt htl10: else11: yt Elf([htlht1f]) .pass through non-linear embedding layers Elf12: end if13: st Gcf(yt) .predict shared cube for GifandGmf14:at Tf(yt) .object transformation15: ft Gif(st) .generate object appearance16: mt Gmf(st) .generate object shape17: htf EcfPcf(st) .predict shared represenation embedding18: xt ST(mt;at)ST(ft;at) + (1ST(mt;at))xt119:end for6.2 M ODEL CONFIGURATIONSTable 2 lists the information and model configuration for different datasets. The dimensions ofrandom vectors and hidden vectors are all set to 100. We also compare the number of parameters inDCGAN and LR-GAN. The numbers before ‘/’ are our model, after ‘/’ are DCGAN. Based on thesame notation used in (Zhao et al., 2016), the architectures for the different datasets are:14Published as a conference paper at ICLR 2017Table 2: Information and model configurations on different datasets.Dataset MNIST-ONE MNIST-TWO CIFAR-10 CUB-200Image Size 32 64 32 64#Images 60,000 60,000 50,000 5,994#Timesteps 2 3 2 2#Parameters 5.25M/4.11M 7.53M/6.33M 5.26M/4.11M 27.3M/6.34MMNIST-ONE: Gb: (256)4c-(128)4c2s-(64)4c2s-(3)4c2s; Gcf: (512)4c-(256)4c2s-(128)4c2s; Gif: (3)4c2s; Gmf: (1)4c2s;D: (64)4c2s-(128)4c2s-(256)4c2s-(256)4p4s-1MNIST-TWO: Gb: (256)4c-(128)4c2s-(64)4c2s-(32)4c2s-(3)4c2s; Gcf: (512)4c-(256)4c2s-(128)4c2s-(64)4c2s; Gif: (3)4c2s; Gmf: (1)4c2s;D: (64)4c2s-(128)4c2s-(256)4c2s-(512)4c2s-(512)4p4s-1CUB-200: Gb: (512)4c-(256)4c2s-(128)4c2s-(64)4c2s-(3)4c2s; Gcf: (1024)4c-(512)4c2s-(256)4c2s-(128)4c2s; Gif: (3)4c2s; Gmf: (1)4c2s;D: (128)4c2s-(256)4c2s-(512)4c2s-(1024)4c2s-(1024)4p4s-1CIFAR-10: Gb: (256)4c-(128)4c2s-(64)4c2s-(3)4c2s; Gcf: (512)4c-(256)4c2s-(128)4c2s;Gif: (3)4c2s; Gmf: (1)4c2sD: (64)4c2s-(128)4c2s-(256)4c2s-(256)4p4s-16.3 R ESULTS ON MNIST-ONEWe conduct human studies on generation results on MNIST-ONE. Specifically, we generate 1,000images using both LR-GAN and DCGAN. As references, we also include 1000 real images. Thenwe ask the users on AMT to label each image to be one of the digits (0-9). We also provide theman option ‘non recognizable’ in case the generated image does not seem to contain a digit. Eachimage was judged by 5 unique workers. Similar to CIFAR-10, if an image is recognized to be thesame digit by all 5 users, it is assigned to quality level 5. If it is not recognizable according to allusers, it is assigned to quality level 0. Fig. 13 (left) shows the number of images assigned to all sixquality levels. Compared to DCGAN, our model generated more samples with high quality levels.As expected, the real images have many samples with high quality levels. 
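Algorithm 1 in Section 6.1 above can also be rendered as a PyTorch sketch of the generation loop. The sub-generators here are toy dense stand-ins (the real ones are stacks of fractional convolutions per Table 2), and the cross-timestep embeddings E_lf, P_cf and E_cf are omitted for brevity, so this only illustrates the control flow, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyLRGANGenerator(nn.Module):
    """Minimal stand-in for Algorithm 1's control flow: background first,
    then one foreground object composed per subsequent timestep (Eqn. 4)."""
    def __init__(self, nz=100, nh=100, img=32):
        super().__init__()
        self.img = img
        self.lstm = nn.LSTMCell(nz, nh)                                    # LSTM over noise vectors
        self.G_b = nn.Sequential(nn.Linear(nh, 3 * img * img), nn.Tanh())  # background generator
        self.G_cf = nn.Sequential(nn.Linear(nh, 128), nn.ReLU())           # shared foreground trunk
        self.G_if = nn.Sequential(nn.Linear(128, 3 * img * img), nn.Tanh())  # appearance head
        self.G_mf = nn.Sequential(nn.Linear(128, img * img), nn.Sigmoid())   # mask head
        self.T_f = nn.Linear(nh, 6)                                        # affine parameters
        with torch.no_grad():                                              # start near the identity
            self.T_f.weight.zero_()
            self.T_f.bias.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, z_list):
        B, img = z_list[0].size(0), self.img
        h = torch.zeros(B, self.lstm.hidden_size)
        c = torch.zeros_like(h)
        h, c = self.lstm(z_list[0], (h, c))
        x = self.G_b(h).view(B, 3, img, img)                   # t = 0: background canvas
        for z_t in z_list[1:]:                                  # t >= 1: one object per step
            h, c = self.lstm(z_t, (h, c))
            s = self.G_cf(h)
            f_t = self.G_if(s).view(B, 3, img, img)
            m_t = self.G_mf(s).view(B, 1, img, img)
            a_t = self.T_f(h).view(B, 2, 3)
            grid = F.affine_grid(a_t, (B, 3, img, img), align_corners=False)
            f_w = F.grid_sample(f_t, grid, padding_mode="zeros", align_corners=False)
            m_w = F.grid_sample(m_t, grid, padding_mode="zeros", align_corners=False)
            x = m_w * f_w + (1.0 - m_w) * x                     # Eqn. (4)
        return x

gen = ToyLRGANGenerator()
out = gen([torch.randn(4, 100) for _ in range(3)])              # background + two objects
print(out.shape)                                                # torch.Size([4, 3, 32, 32])
```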
In Fig. 13 (right), we showthe number of images that are recognized to each digit category (0-9). For qualitative comparison,we show examplar images at each quality level in Fig. 14. From left to right, the quality levelincreases from 0 to 5. As expected, the images with higher quality level are more clear.For quantitative evaluation, we use the same way as for CIFAR-10. The classifier model used forcontextual Inception Score is trained based on the training set. We generate 60,000 samples basedon DCGAN and LR-GAN for evaluation, respectively. To obtain the Adversarial Accuracy andAdversarial Divergence, we first train 10 generators for 10 digit categories separately, and then usethe generated samples to train the classifier. As shown in Table 3, our model has higher scores thanDCGAN on both standard and contextual Inception Score. Also, our model has a slightly higherFigure 13: Statistics of annotations in human studies on MNIST-ONE. Left: distribution of qualitylevel; Right: distribution of recognized digit categories.15Published as a conference paper at ICLR 2017Figure 14: Qualitative comparison on MNIST-ONE. Top three rows are samples generated by DC-GAN. Bottom three rows are samples generated by LR-GAN. The quality level increases from leftto right as determined via human studies.Table 3: Quantitative comparison on MNIST-ONE.Training Data Real Images DCGAN OursInception Scorey1.830.01 2.03 0.01 2.06 0.01Inception Scoreyy9.150.04 6.42 0.03 7.15 0.04Adversarial Accuracy 95.22 0.25 26.12 0.07 26.61 0.06Adversarial Divergence Score 0 8.47 0.03 8.39 0.04yEvaluate using the pre-trained Inception net as Salimans et al. (2016)yyEvaluate using the supervisedly trained classifier based on the discriminator in LR-GAN.adversarial accuracy, and lower adversarial divergence than DCGAN. We find that the all threeimage sets have low standard Inception Scores. This is mainly because the Inception net is trainedon ImageNet, which has a very different data distribution from the MNIST dataset. Based on this,we argue that the standard Inception Score is not suitable for some image datasets.6.4 M ORE RESULTS ON CUB-200In this experiment, we reduce the minimal allowed object scale to 1.1, which allows the model togenerate larger foreground objects. The results are shown in Fig. 15. Similar to the results when theconstraint is 1.2, the crisp bird-like masks are generated automatically by our model.Figure 15: Generation results of our model on CUB-200 when setting minimal allowed scale to1.1. From left to right, the blocks show the generated background images, foreground images,foreground masks, foreground images carved out by masks, carved foreground images after spatialtransformation. The sixth and seventh blocks are final composite images and the nearest neighborreal images.16Published as a conference paper at ICLR 20176.5 M ORE RESULTS ON CIFAR-106.5.1 Q UALITATIVE RESULTSIn Fig. 16, we show more results on CIFAR-10 when setting minimal allowed object scale to 1.1.The rightmost column block also shows the training images that are closest to the generated images(cosine similarity in pixel space). We can see our model does not memorize the training data.Figure 16: Generation results of our model on CIFAR-10 with minimal allowed scale be 1.1, Fromleft to right, the layout is same to Fig. 15.6.5.2 W ALKING IN THE LATENT SPACESimilar to DCGAN, we also show results by walking in the latent space. Note that our model hastwo or more inputs. So we can walk along any of them or their combination. In Fig. 
17, we generatemultiple foregrounds for the same fixed generated background. We find that our model consistentlygenerates contextually compatible foregrounds. For example, for the grass-like backgrounds, theforeground generator generates horses and deer, and airplane-like objects for the blue sky.6.5.3 W ORD CLOUD BASED ON HUMAN STUDYAs we mentioned above, we conducted human studies on CIFAR-10. Besides asking people to selecta name from a list for an image, we also conducted another human study where we ask people to useone word (free-form) to describe the main object in the image. Each image was ‘named’ by 5 uniquepeople. We generate word clouds for real images, images generated by DCGAN and LR-GAN, asshown in Fig. 18.6.6 R ESULTS ON LFW FACE DATASETWe conduct experiment on face images in LFW dataset (Huang et al., 2007). Different from previousworks which work on cropped and aligned faces, we directly generate the original images whichcontains a large portion of backgrounds. This configuration helps to verify the efficiency of LR-GANto model the object appearance, shape and pose. In Fig. 19, we show the (intermediate) generationresults of LR-GAN. Surprisingly, without any supervisions, the model generated background andfaces in separate steps, and the generated masks accurately depict face shapes. Moreover, the model17Published as a conference paper at ICLR 2017Figure 17: Walking in the latent foreground space by fixing backgrounds in our model on CIFAR-10. From left to right, the blocks are: generated background images, foreground images, foregroundmasks, foreground images carved out by masks, carved out foreground images after spatial transfor-mation, and final composite images. Each row has the same background, but different foregrounds.Figure 18: Statistics of annotations in human studies on CIFAR-10. Left to right: word cloud forreal images, images generated by DCGAN, images generated by LR-GAN.Figure 19: Generation results of our model on LFW. From left to right, the blocks are: generatedbackground images, foreground images, foreground masks, carved out foreground images after spa-tial transformation, and final composite images.18Published as a conference paper at ICLR 2017learns where to place the generated faces so that the whole image looks natural. For comparison,please refer to (Kwak & Zhang, 2016) which does not model the transformation. We can find thegeneration results degrade much.6.7 S TATISTICS ON TRANSFORMATION MATRICESIn this part, we analyze the statistics on the transformation matrices generated by our model fordifferent datasets, including MNIST-ONE, CUB-200, CIFAR-10 and LFW. We used affine transfor-mation in our model. So there are 6 parameters, scaling in the x coordinate ( sx), scaling in the ycoordinate (sy), translation in the x coordinate ( tx), translation in the y coordinate ( ty), rotation inthe x coordinate ( rx) and rotation in the y coordinate ( ry). In Fig. 20, we show the histograms on dif-ferent parameters for different datasets.These histograms show that the model produces non-trivialvaried scaling, translation and rotation on all datasets. For different datasets, the learned transfor-mation have different patterns. We hypothesize that this is mainly determined by the configurationsof objects in the images. For example, on MNIST-ONE, all six parameters have some fluctuationssince the synthetic dataset contains digits randomly placed at different locations. 
For the other threedatasets, the scalings converge to single value since the object sizes do not vary much, and the vari-ations on rotation and translation suffice to generate realistic images. Specifically, we can find thegenerator largely relies on the translation on x coordinate for generating CUB-200. This makessense since birds in the images have similar scales, orientations but various horizontal locations. ForCIFAR-10, since there are 10 different object categories, the configurations are more diverse, hencethe generator uses all parameters for generation except for the scaling. For LFW, since faces havesimilar configurations, the learned transformations have less fluctuation as well. As a result, we cansee that LR-GAN indeed models the transformations on the foreground to generate images.6.8 C ONDITIONAL IMAGE GENERATIONConsidering our model can generate object-like masks (shapes) for images, we conducted an ex-periment to evaluate whether our model can be potentially used for image segmentation and objectdetection. We make some changes to the model. For the background generator, the input is a realimage instead of a random vector. Then the image is passed through an encoder to extract the hid-den features, which replaces the random vector z0and are fed to the background generator. For theforeground generator, we subtract the image generated by the background generator from the inputimage to obtain a residual image. Then this residual image is fed to the same encoder to get thehidden features, which are used as the input for foreground generator. In our conditional model,we want to reconstruct the image, so we add a reconstruction loss along with the adversarial loss.We train this conditional model on CIFAR-10. The (intermediate) outputs of the model is shownin Fig. 21. Interestingly, the model successfully learned to decompose the input images into back-ground and foreground. The background generator tends to do an image inpainting by generating acomplete background without object, while the foreground generator works as a segmentation modelto get object mask from the input image.Similarly, we also run the conditional LR-GAN on LFW dataset. As we can see in Fig. 22, the fore-ground generator automatically and consistently learned to generate the face regions, even thoughthere are large portion of background in the input images. In other words, the conditional LR-GANsuccessfully learned to detection faces in images. We suspect this success is due to that it has lowcost for the generator to generate similar images, and thus converge to the case that the first generatorgenerate background, and the second generator generate face images.Based on these experiments, we argue that our model can be possibly used for image segmentationand object detection in a generative and unsupervised manner. One future work would be verifyingthis by applying it to high-resolution and more complicate datasets.19Published as a conference paper at ICLR 2017Figure 20: Histograms of transformation parameters learnt in our model for different datasets. Fromleft to right, the datasets are: MNIST-ONE, CUB-200, CIFAR-10 and LFW. From top to bottom,they are scaling sx,sy, translation tx,ty, and rotation rx,ryinxandycoordinate, respectively.20Published as a conference paper at ICLR 2017Figure 21: Conditional generation results of our model on CIFAR-10. 
From left to right, the blocks are: real images, generated background images, foreground images, foreground masks, foreground images carved out by masks, carved foreground images after spatial transformation, and final composite (reconstructed) images.
Figure 22: Conditional generation results of our model on LFW, displayed with the same layout as Fig. 21.
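The conditional variant of Section 6.8 (encoder features replacing z_0, the residual image driving the foreground generator, and a reconstruction term added to the adversarial loss) might look roughly like the sketch below; the encoder, the generator interfaces, the loss form and its weighting are all assumptions rather than details given in the paper.

```python
import torch
import torch.nn.functional as F

def conditional_step(x_real, encoder, G_b, G_f, lam=1.0):
    """Hedged sketch of the conditional LR-GAN forward pass (Sec. 6.8)."""
    z_b = encoder(x_real)                  # encoder features replace the noise z_0
    x_bg = G_b(z_b)                        # background / "inpainted" canvas
    z_f = encoder(x_real - x_bg)           # residual image encodes the foreground
    x_rec = G_f(z_f, x_bg)                 # compose foreground onto the canvas (Eqn. 4 inside)
    rec_loss = F.mse_loss(x_rec, x_real)   # reconstruction term (exact form assumed)
    return x_rec, lam * rec_loss           # to be combined with the usual adversarial loss

# Illustrative call with trivial stand-ins for the networks.
enc = lambda x: x.mean(dim=(2, 3))
G_b = lambda z: torch.zeros(z.size(0), 3, 32, 32)
G_f = lambda z, bg: bg
x = torch.rand(2, 3, 32, 32)
_, loss = conditional_step(x, enc, G_b, G_f)
```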
ByZ4ZTVNe
HJ1kmv9xx
ICLR.cc/2017/conference/-/paper373/official/review
{"title": "Layerwise image generation.", "rating": "7: Good paper, accept", "review": "The authors propose a method that generates naturally looking images by first generating the background and then conditioned on the previous layer one or multiple foreground objects. Additionally they add a image transformer layer that allows the model to more easily model different appearances.\n\nI would like to see some discussion about the choice of foreground+mask rather than just predicting foreground directly. For MNIST, for example the foreground seems completely irrelevant. For CUB and CIFAR of course the fg adds the texture and color while the masks ensures a crisp boundary. \n- Is the mask a binary mask or a alpha blending mask?\n- I find the fact that the model learns to decompose images this nicely and learns to produce crisp foreground masks w/o too much spurious elements (though there are some in CIFAR) pretty fascinating.\n\nThe proposed evaluation metric makes sense and seems reasonable. However, AFAICT, theoretically it would be possible to get a high score even though the GAN produces images not recognizable to humans, but only to the classifier network that produces P_g. E.g. if the Generator encodes the class in some subtle way (though this shouldn't happen given the training with an adversarial network).\n\nFig 3 shows indeed nicely that the decomposition is much nicer when spatial transformers are used. However, it also seems to indicate that the foreground prediction and the foreground mask are largely redundant. For the final results the \"niceness\" of the decomposition appears to be largely irrelevant.\n\nFurthermore, the transformation layer seems to have a small effect, judging from the transformed masked foreground objects. They are mainly scaled down.\n\n- What is the 3rd & 6th column in Fig 9? It is not clear if the final composed images are really as bad as \"advertised\".\n\nRegarding the eval experiment using AMT it is not clear why it is better to provide the users with L2 minimized NN matches rather than random pairs.\n\nI assume that Tab 1 Adversarial Divergence for Real images was not actually evaluated? It would be interesting to see how close to 0 multiple differently initialized networks actually are. Also please mention how the confidences/std where generated, i.e. different training sets, initialisations, eval sets, and how many runs.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
LR-GAN: Layered Recursive Generative Adversarial Networks for Image Generation
["Jianwei Yang", "Anitha Kannan", "Dhruv Batra", "Devi Parikh"]
We present LR-GAN: an adversarial image generation model which takes scene structure and context into account. Unlike previous generative adversarial networks (GANs), the proposed GAN learns to generate image background and foregrounds separately and recursively, and stitch the foregrounds on the background in a contextually relevant manner to produce a complete natural image. For each foreground, the model learns to generate its appearance, shape and pose. The whole model is unsupervised, and is trained in an end-to-end manner with conventional gradient descent methods. The experiments demonstrate that LR-GAN can generate more natural images with objects that are more human recognizable than baseline GANs.
["Computer vision", "Deep learning", "Unsupervised Learning"]
https://openreview.net/forum?id=HJ1kmv9xx
https://openreview.net/pdf?id=HJ1kmv9xx
https://openreview.net/forum?id=HJ1kmv9xx&noteId=ByZ4ZTVNe
Published as a conference paper at ICLR 2017LR-GAN: L AYERED RECURSIVE GENERATIVE AD-VERSARIAL NETWORKS FOR IMAGE GENERATIONJianwei YangVirginia TechBlacksburg, V Ajw2yang@vt.eduAnitha KannanFacebook AI ResearchMenlo Park, CAakannan@fb.comDhruv Batraand Devi ParikhGeorgia Institute of TechnologyAtlanta, GAfdbatra, parikh g@gatech.eduABSTRACTWe present LR-GAN: an adversarial image generation model which takes scenestructure and context into account. Unlike previous generative adversarial net-works (GANs), the proposed GAN learns to generate image background and fore-grounds separately and recursively, and stitch the foregrounds on the backgroundin a contextually relevant manner to produce a complete natural image. For eachforeground, the model learns to generate its appearance, shape and pose. Thewhole model is unsupervised, and is trained in an end-to-end manner with gra-dient descent methods. The experiments demonstrate that LR-GAN can generatemore natural images with objects that are more human recognizable than DCGAN.1 I NTRODUCTIONGenerative adversarial networks (GANs) (Goodfellow et al., 2014) have shown significant promiseas generative models for natural images. A flurry of recent work has proposed improvements overthe original GAN work for image generation (Radford et al., 2015; Denton et al., 2015; Salimanset al., 2016; Chen et al., 2016; Zhu et al., 2016; Zhao et al., 2016), multi-stage image generationincluding part-based models (Im et al., 2016; Kwak & Zhang, 2016), image generation conditionedon input text or attributes (Mansimov et al., 2015; Reed et al., 2016b;a), image generation based on3D structure (Wang & Gupta, 2016), and even video generation (V ondrick et al., 2016).While the holistic ‘gist’ of images generated by these approaches is beginning to look natural, thereis clearly a long way to go. For instance, the foreground objects in these images tend to be deformed,blended into the background, and not look realistic or recognizable.One fundamental limitation of these methods is that they attempt to generate images without takinginto account that images are 2D projections of a 3D visual world, which has a lot of structures in it.This manifests as structure in the 2D images that capture this world. One example of this structureis that images tend to have a background, and foreground objects are placed in this background incontextually relevant ways.We develop a GAN model that explicitly encodes this structure. Our proposed model generates im-ages in a recursive fashion: it first generates a background, and then conditioned on the backgroundgenerates a foreground along with a shape (mask) and a pose (affine transformation) that togetherdefine how the background and foreground should be composed to obtain a complete image. Condi-tioned on this composite image, a second foreground and an associated shape and pose are generated,and so on. As a byproduct in the course of recursive image generation, our approach generates someobject-shape foreground-background masks in a completely unsupervised way, without access toanyobject masks for training. Note that decomposing a scene into foreground-background layers isa classical ill-posed problem in computer vision. By explicitly factorizing appearance and transfor-mation, LR-GAN encodes natural priors about the images that the same foreground can be ‘pasted’to the different backgrounds, under different affine transformations. 
According to the experiments,the absence of these priors result in degenerate foreground-background decompositions, and thusalso degenerate final composite images.Work was done while visiting Facebook AI Research.1Published as a conference paper at ICLR 2017Figure 1: Generation results of our model on CUB-200 (Welinder et al., 2010). It generates imagesin two timesteps. At the first timestep, it generates background images, while generates foregroundimages, masks and transformations at the second timestep. Then, they are composed to obtain thefinal images. From top left to bottom right (row major), the blocks are real images, generatedbackground images, foreground images, foreground masks, carved foreground images, carved andtransformed foreground images, final composite images, and their nearest neighbor real images inthe training set. Note that the model is trained in a completely unsupervised manner.We mainly evaluate our approach on four datasets: MNIST-ONE (one digit) and MNIST-TWO (twodigits) synthesized from MNIST (LeCun et al., 1998), CIFAR-10 (Krizhevsky & Hinton, 2009) andCUB-200 (Welinder et al., 2010). We show qualitatively (via samples) and quantitatively (via evalu-ation metrics and human studies on Amazon Mechanical Turk) that LR-GAN generates images thatglobally look natural andcontain clear background and object structures in them that are realisticand recognizable by humans as semantic entities. An experimental snapshot on CUB-200 is shownin Fig. 1. We also find that LR-GAN generates foreground objects that are contextually relevant tothe backgrounds (e.g., horses on grass, airplanes in skies, ships in water, cars on streets, etc.). Forquantitative comparison, besides existing metrics in the literature, we propose two new quantitativemetrics to evaluate the quality of generated images. The proposed metrics are derived from the suffi-cient conditions for the closeness between generated image distribution and real image distribution,and thus supplement existing metrics.2 R ELATED WORKEarly work in parametric texture synthesis was based on a set of hand-crafted features (Portilla &Simoncelli, 2000). Recent improvements in image generation using deep neural networks mainlyfall into one of the two stochastic models: variational autoencoders (V AEs) (Kingma et al., 2016)and generative adversarial networks (GANs) (Goodfellow et al., 2014). V AEs pair a top-down prob-abilistic generative network with a bottom up recognition network for amortized probabilistic infer-ence. Two networks are jointly trained to maximize a variational lower bound on the data likelihood.GANs consist of a generator and a discriminator in a minmax game with the generator aiming tofool the discriminator with its samples with the latter aiming to not get fooled.Sequential models have been pivotal for improved image generation using variational autoencoders:DRAW (Gregor et al., 2015) uses attention based recurrence conditioning on the canvas drawn sofar. In Eslami et al. (2016), a recurrent generative model that draws one object at a time to thecanvas was used as the decoder in V AE. These methods are yet to show scalability to natural images.Early compelling results using GANs used sequential coarse-to-fine multiscale generation and class-conditioning (Denton et al., 2015). Since then, improved training schemes (Salimans et al., 2016)and better convolutional structure (Radford et al., 2015) have improved the generation results using2Published as a conference paper at ICLR 2017GANs. 
PixelRNN (van den Oord et al., 2016) is also recently proposed to sequentially generates apixel at a time, along the two spatial dimensions.In this paper, we combine the merits of sequential generation with the flexibility of GANs. Ourmodel for sequential generation imbibes a recursive structure that more naturally mimics imagecomposition by inferring three components: appearance, shape, and pose. One closely related workcombining recursive structure with GAN is that of Im et al. (2016) but it does not explicitly modelobject composition and follows a similar paradigm as by Gregor et al. (2015). Another closely re-lated work is that of Kwak & Zhang (2016). It combines recursive structure and alpha blending.However, our work differs in three main ways: (1) We explicitly use a generator for modeling theforeground poses. That provides significant advantage for natural images, as shown by our ablationstudies; (2) Our shape generator is separate from the appearance generator. This factored repre-sentation allows more flexibility in the generated scenes; (3) Our recursive framework generatessubsequent objects conditioned on the current and previous hidden vectors, andpreviously gener-ated object. This allows for explicit contextual modeling among generated elements in the scene.See Fig. 17 for contextually relevant foregrounds generated for the same background, or Fig. 6 formeaningful placement of two MNIST digits relative to each.Models that provide supervision to image generation using conditioning variables have also beenproposed: Style/Structure GANs (Wang & Gupta, 2016) learns separate generative models for styleand structure that are then composed to obtain final images. In Reed et al. (2016a), GAN basedimage generation is conditioned on text and the region in the image where the text manifests, spec-ified during training via keypoints or bounding boxes. While not the focus of our work, the modelproposed in this paper can be easily extended to take into account these forms of supervision.3 P RELIMINARIES3.1 G ENERATIVE ADVERSARIAL NETWORKSGenerative Adversarial Networks (GANs) consist of a generator Gand a discriminator Dthat aresimultaneously trained with competing goals: The generator Gis trained to generate samples thatcan ‘fool’ a discriminator D, while the discriminator is trained to classify its inputs as either real(coming from the training dataset) or fake (coming from the samples of G). This competition leadsto a minmax formulation with a value function:minGmaxDExpdata (x)[log(D(x;D))] + E zpz(z)[log(1D(G(z;G);D))]; (1)where zis a random vector from a standard multivariate Gaussian or a uniform distribution pz(z),G(z;G)mapszto the data space, D(x)is the probability that xis real estimated by D. Theadvantage of the GANs formulation is that it lacks an explicit loss function and instead uses thediscriminator to optimize the generative model. The discriminator, in turn, only cares whether thesample it receives is on the data manifold, and not whether it exactly matches a particular trainingexample (as opposed to losses such as MSE). Hence, the discriminator provides a gradient signalonly when the generated samples do not lie on the data manifold so that the generator can readjustits parameters accordingly. 
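The minmax value function in Eqn. (1) corresponds, per batch, to the following discriminator and generator losses. The non-saturating generator objective shown here is the variant commonly used in practice; it is an assumption of this sketch, not something stated in the paper.

```python
import torch

def discriminator_loss(d_real, d_fake, eps=1e-8):
    """d_real = D(x) on real images, d_fake = D(G(z)) on generated images; both in (0, 1).
    Monte-Carlo estimate of the value function in Eqn. (1), maximized by D (so we negate)."""
    return -(torch.log(d_real + eps).mean() + torch.log(1.0 - d_fake + eps).mean())

def generator_loss(d_fake, eps=1e-8):
    """Non-saturating generator objective: maximize log D(G(z)) rather than
    minimizing log(1 - D(G(z)))."""
    return -torch.log(d_fake + eps).mean()

# Illustrative call with random "probabilities".
d_r, d_f = torch.rand(8).clamp(0.01, 0.99), torch.rand(8).clamp(0.01, 0.99)
print(discriminator_loss(d_r, d_f).item(), generator_loss(d_f).item())
```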
This form of training enables learning the data manifold of the trainingset and not just optimizing to reconstruct the dataset, as in autoencoder and its variants.While the GANs framework is largely agnostic to the choice of GandD, it is clear that generativemodels with the ‘right’ inductive biases will be more effective in learning from the gradient infor-mation (Denton et al., 2015; Im et al., 2016; Gregor et al., 2015; Reed et al., 2016a; Yan et al., 2015).With this motivation, we propose a generator that models image generation via a recurrent process– in each time step of the recurrence, an object with its own appearance and shape is generated andwarped according to a generated pose to compose an image in layers.3.2 L AYERED STRUCTURE OF IMAGEAn image taken of our 3D world typically contains a layered structure. One way of representing animage layer is by its appearance and shape. As an example, an image xwith two layers, foregroundfand background bmay be factorized as:x=fm+b(1m); (2)3Published as a conference paper at ICLR 2017where mis the mask depicting the shapes of image layers, and the element wise multiplicationoperator. Some existing methods assume the access to the shape of the object either during training(Isola & Liu, 2013) or both at train and test time (Reed et al., 2016a; Yan et al., 2015). Representingimages in layered structure is even straightforward for video with moving objects (Darrell & Pent-land, 1991; Wang & Adelson, 1994; Kannan et al., 2005). V ondrick et al. (2016) generates videosby separately generating a fixed background and moving foregrounds. A similar way of generatingsingle image can be found in Kwak & Zhang (2016).Another way is modeling the layered structure with object appearance and pose as:x=ST(f;a) +b; (3)where fandbare foreground and background, respectively; ais the affine transformation; STisthe spatial transformation operator. Several works fall into this group (Roux et al., 2011; Huang &Murphy, 2015; Eslami et al., 2016). In Huang & Murphy (2015), images are decomposed into layersof objects with specific poses in a variational autoencoder framework, while the number of objects(i.e., layers) is adaptively estimated in Eslami et al. (2016).To contrast with these works, LR-GAN uses a layered composition, and the foreground layers si-multaneously model all three dominant factors of variation: appearance f, shape mand pose a. Wewill elaborate it in the following section.4 L AYERED RECURSIVE GAN (LR-GAN)The basic structure of LR-GAN is similar to GAN: it consists of a discriminator and a generator thatare simultaneously trained using the minmax formulation of GAN, as described in x.3.1. The keyinnovation of our work is the layered recursive generator, which is what we describe in this section.The generator in LR-GAN is recursive in that the image is constructed recursively using a recurrentnetwork. Layered in that each recursive step composes an object layer that is ‘pasted’ on the imagegenerated so far. Object layer at timestep tis parameterized by the following three constituents –‘canonical’ appearance ft, shape (or mask) mt, and pose (or affine transformation) atfor warpingthe object before pasting in the image composition.Fig. 2 shows the architecture of the LR-GAN with the generator architecture unrolled for generatingbackground x0(:=xb) and foreground x1andx2. 
At each time step t, the generator composes thenext image xtvia the following recursive computation:xt=ST(mt;at)|{z}affine transformed maskST(ft;at)|{z}affine transformed appearance+ (1ST(mt;at))xt1|{z}pasting on image composed so far;8t2[1;T](4)whereST(;at)is a spatial transformation operator that outputs the affine transformed version ofwithatindicating parameters of the affine transformation.Since our proposed model has an explicit transformation variable atthat is used to warp the object,it can learn a canonical object representation that can be re-used to generate scenes where the ob-ject occurs as mere transformations of it, such as different scales or rotations. By factorizing theappearance, shape and pose, the object generator can focus on separately capturing regularities inthese three factors that constitute an object. We will demonstrate in our experiments that removingthese factorizations from the model leads to its spending capacity in variability that may not solelybe about the object in Section 5.5 and 5.6.4.1 D ETAILS OF GENERATOR ARCHITECTUREFig. 2 shows our LR-GAN architecture in detail – we use different shapes to indicate different kindsof layers (convolutional, fractional convolutional, (non)linear, etc), as indicated by the legend. Ourmodel consists of two main pieces – a background generator Gband a foreground generator Gf.GbandGfdo not share parameters with each other. Gbcomputation happens only once, while Gfisrecurrent over time, i.e., all object generators share the same parameters. In the following, we willintroduce each module and connections between them.Temporal Connections . LR-GAN has two kinds of temporal connections – informally speaking,one on ‘top’ and one on ‘bottom’. The ‘top’ connections perform the act of sequentially ‘pasting’4Published as a conference paper at ICLR 2017G"#G$LSTMLSTMG"%G"&G'T"G"#LSTMG"%G"&G'T"DP"#E"#E",CCSSFractionalConvolutionalLayersConvolutionalLayers(Non)linearembeddinglayersandothersx$f12f2m2m42m45f15f5m5SpatialSamplerCompositorx2x6Realsamplez8z2z5Figure 2: LR-GAN architecture unfolded to three timesteps. It mainly consists of one backgroundgenerator, one foreground generator, temporal connections and one discriminator. The meaning ofeach component is explained in the legend.object layers (Eqn. 4). The ‘bottom’ connections are constructed by a LSTM on the noise vectorsz0;z1;z2. Intuitively, this noise-vector-LSTM provides information to the foreground generatorabout what else has been generated in past. Besides, when generating multiple objects, we use apooling layer Pcfand a fully-connected layer Ecfto extract the information from previous generatedobject response map. By this way, the model is able to ‘see’ previously generated objects.Background Generator . The background generator Gbis purposely kept simple. It takes the hiddenstate of noise-vector-LSTM h0las the input and passes it to a number of fractional convolutionallayers (also called ‘deconvolution’ layer in some papers) to generate images at its end. The outputof background generator xbwill be used as the canvas for the following generated foregrounds.Foreground Generator . The foreground generator Gfis used to generate an object with appearanceand shape. Correspondingly, Gfconsists of three sub-modules, Gcf, which is a common ‘trunk’whose outputs are shared by GifandGmf.Gifis used to generate the foreground appearance ft,whileGmfgenerates the mask mtfor the foreground. 
All three sub-modules consists of one ormore fractional convolutional layers combined with batch-normalization and nonlinear layers. Thegenerated foreground appearance and mask have the same spatial size as the background. The topofGmfis a sigmoid layer in order to generate one channel mask whose values range in (0;1).Spatial Transformer . To spatially transform foreground objects, we need to estimate the trans-formation matrix. As in Jaderberg et al. (2015), we predict the affine transformation matrix with alinear layerTfthat has six-dimensional outputs. Then based on the predicted transformation matrix,we use a grid generator Ggto generate the corresponding sampling coordinates in the input for eachlocation at the output. The generated foreground appearance and mask share the same transforma-tion matrix, and thus the same sampling grid. Given the grid, the sampler Swill simultaneouslysample the ftandmtto obtain ^ftand^mt, respectively. Different from Jaderberg et al. (2015),our sampler here normally performs downsampling, since the the foreground typically has smallersize than the background. Pixels in ^ftand^mtthat are from outside the extent of ftandmtare setto zero. Finally, ^ftand^mtare sent to the compositor Cwhich combines the canvas xt1and^ftthrough layered composition with blending weights given by ^mt(Eqn. 4).Pseudo-code for our approach and detailed model configuration are provided in the Appendix.5Published as a conference paper at ICLR 20174.2 N EWEVALUATION METRICSSeveral metrics have been proposed to evaluate GANs, such as Gaussian parzen window (Good-fellow et al., 2014), Generative Adversarial Metric (GAM) (Im et al., 2016) and Inception Score(Salimans et al., 2016). The common goal is to measure the similarity between the generated datadistributionPg(x) =G(z;z)and the real data distribution P(x). Most recently, Inception Scorehas been used in several works (Salimans et al., 2016; Zhao et al., 2016). However, it is an assymetricmetric and could be easily fooled by generating centers of data modes.In addition to these metrics, we present two new metrics based on the following intuition – a suf-ficient (but not necessary) condition for closeness of Pg(x)andP(x)is closeness of Pg(xjy)andP(xjy), i.e., distributions of generated data and real data conditioned on all possible variables ofinteresty, e.g., category label. One way to obtain this variable of interest yis via human annotation.Specifically, given the data sampled from Pg(x)andP(x), we ask people to label the category of thesamples according to some rules. Note that such human annotation is often easier than comparingsamples from the two distributions (e.g., because there is no 1:1 correspondence between samplesto conduct forced-choice tests).After the annotations, we need to verify whether the two distributions are similar in each category.Clearly, directly comparing the distributions Pg(xjy)andP(xjy)may be as difficult as compar-ingPg(x)andP(x). Fortunately, we can use Bayes rule and alternatively compare Pg(yjx)andP(yjx), which is a much easier task. In this case, we can simply train a discriminative model onthe samples from Pg(x)andP(x)together with the human annotations about categories of thesesamples. With a slight abuse of notation, we use Pg(yjx)andP(yjx)to denote probability outputsfrom these two classifiers (trained on generated samples vs trained on real samples). 
We can thenuse these two classifiers to compute the following two evaluation metrics:Adversarial Accuracy: Computes the classification accuracies achieved by these two classifiers ona validation set, which can be the training set or another set of real images sampled from P(x). IfPg(x)is close toP(x), we expect to see similar accuracies.Adversarial Divergence: Computes the KL divergence between Pg(yjx)andP(yjx). The lowerthe adversarial divergence, the closer two distributions are. The low bound for this metric is exactlyzero, which means Pg(yjx) =P(yjx)for all samples in the validation set.As discussed above, we need human efforts to label the real and generated samples. Fortunately, wecan further simplify this. Based on the labels given on training data, we split the training data intocategories, and train one generator for each category. With all these generators, we generate samplesof all categories. This strategy will be used in our experiments on the datasets with labels given.5 E XPERIMENTWe conduct qualitative and quantitative evaluations on three datasets: 1) MNIST (LeCun et al.,1998); 2) CIFAR-10 (Krizhevsky & Hinton, 2009); 3) CUB-200 (Welinder et al., 2010). To addvariability to the MNIST images, we randomly scale (factor of 0.8 to 1.2) and rotate ( 4to4) thedigits and then stitch them to 4848uniform backgrounds with random grayscale value between[0, 200]. Images are then rescaled back to 3232. Each image thus has a different backgroundgrayscale value and a different transformed digit as foreground. We rename this sythensized datasetasMNIST-ONE (single digit on a gray background). We also synthesize a dataset MNIST-TWOcontaining two digits on a grayscale background. We randomly select two images of digits andperform similar transformations as described above, and put one on the left and the other on theright side of a 7878gray background. We resize the whole image to 6464.We develop LR-GAN based on open source code1. We assume the number of objects is known.Therefore, for MNIST-ONE, MNIST-TWO, CIFAR-10, and CUB-200, our model has two, three,two, and two timesteps, respectively. Since the size of foreground object should be smaller thanthat of canvas, we set the minimal allowed scale2in affine transforamtion to be 1.2 for all datasetsexcept for MNIST-TWO, which is set to 2 (objects are smaller in MNIST-TWO). In LR-GAN, the1https://github.com/soumith/dcgan.torch2Scale corresponds to the size of the target canvas with respect to the object – the larger the scale, the largerthe canvas, and the smaller the relative size of the object in the canvas. 1 means the same size as the canvas.6Published as a conference paper at ICLR 2017Figure 3: Generated images on CIFAR-10 based on our model.Figure 4: Generated images on CUB-200 based on our model.background generator and foreground generator have similar architectures. One difference is thatthe number of channels in the background generator is half of the one in the foreground generator.We compare our results to that of DCGAN (Radford et al., 2015). Note that LR-GAN withoutLSTM at the first timestep corresponds exactly to the DCGAN. This allows us to run controlledexperiments. In both generator and discriminator, all the (fractional) convolutional layers have 44filter size with stride 2. As a result, the number of layers in the generator and discriminatorautomatically adapt to the size of training images. Please see the Appendix (Section 6.2) for detailsabout the configurations. 
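To make the two proposed metrics concrete, here is a small NumPy sketch. It assumes we already have per-example class-probability outputs from the two classifiers (one trained on real samples, one on generated samples) evaluated on the same validation set; the KL direction and the epsilon smoothing are our assumptions, since the text does not pin them down.

```python
import numpy as np

def adversarial_accuracy(p_real, p_gen, y_true):
    """Accuracies of the real-data and generated-data classifiers on one validation set.
    p_real, p_gen: (num_examples x num_classes) class probabilities; y_true: integer labels.
    If P_g(x) is close to P(x), the two accuracies should be similar."""
    acc_real = float(np.mean(np.argmax(p_real, axis=1) == y_true))
    acc_gen = float(np.mean(np.argmax(p_gen, axis=1) == y_true))
    return acc_real, acc_gen

def adversarial_divergence(p_real, p_gen, eps=1e-12):
    """Mean KL(P(y|x) || P_g(y|x)) over validation examples; 0 iff the posteriors agree."""
    kl = np.sum(p_real * (np.log(p_real + eps) - np.log(p_gen + eps)), axis=1)
    return float(np.mean(kl))
```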
We use three metrics for quantitative evaluation, including Inception Score(Salimans et al., 2016) and the proposed Adversarial Accuracy, Adversarial Divergence. Note thatwe report two versions of Inception Score. One is based on the pre-trained Inception net, and theother one is based on the pre-trained classifier on the target datasets.5.1 Q UALITATIVE RESULTSIn Fig. 3 and 4, we show the generated samples for CIFAR-10 and CUB-200, respectively. MNISTresults are shown in the next subsection. As we can see from the images, the compositional natureof our model results in the images being free of blending artifacts between backgrounds and fore-grounds. For CIFAR-10, we can see the horses and cars with clear shapes. For CUB-200, the birdshapes tend to be even sharper.5.2 MNIST-ONE AND MNIST-TWOWe now report the results on MNIST-ONE and MNIST-TWO. Fig. 5 shows the generation results ofour model on MNIST-ONE. As we can see, our model generates the background and the foregroundin separate timestep, and can disentagle the foreground digits from background nearly perfectly.Though initial values of the mask randomly distribute in the range of (0, 1), after training, the masksare nearly binary and accurately carve out the digits from the generated foreground. More results onMNIST-ONE (including human studies) can be found in the Appendix (Section 6.3).Fig. 6 shows the generation results for MNIST-TWO. Similarly, the model is also able to generatebackground and the two foreground objects separately. The foreground generator tends to generatea single digit at each timestep. Meanwhile, it captures the context information from the previoustime steps. When the first digit is placed to the left side, the second one tends to be placed on theright side, and vice versa.7Published as a conference paper at ICLR 2017Figure 5: Generation results of our model on MNIST-ONE. From left to right, the image blocks arereal images, generated background images, generated foreground images, generated masks and finalcomposite images, respectively.Figure 6: Generation results of our model on MNIST-TWO. From top left to bottom right (rowmajor), the image blocks are real images, generated background images, foreground images andmasks at the second timestep, composite images at the second time step, generated foregroundimages and masks at the third timestep and the final composite images, respectively.5.3 CUB-200We study the effectiveness of our model trained on the CUB-200 bird dataset. In Fig. 1, we haveshown a random set of generated images, along with the intermediate generation results of the model.While being completely unsupervised , the model, for a large fraction of the samples, is able toFigure 7: Matched pairs of generated images based on DCGAN and LR-GAN, respectivedly. Theodd columns are generated by DCGAN, and the even columns are generated by LR-GAN. Theseare paired according to the perfect matching based on Hungarian algorithm.8Published as a conference paper at ICLR 2017Figure 8: Qualitative comparison on CIFAR-10. Top three rows are images generated by DCGAN;Bottom three rows are by LR-GAN. From left to right, the blocks display generated images withincreasing quality level as determined by human studies.successfully disentangle the foreground and the background. This is evident from the generatedbird-like masks.We do a comparative study based on Amazon Mechanical Turk (AMT) between DCGAN and LR-GAN to quantify relative visual quality of the generated images. 
We first generated 1000 samplesfrom both the models. Then, we performed perfect matching between the two image sets usingthe Hungarian algorithm on L2norm distance in the pixel space. This resulted in 1000 imagepairs. Some examplar pairs are shown in Fig. 7. For each image pair, 9 judges are asked to choosethe one that is more realistic. Based on majority voting, we find that our generated images areselected 68.4% times, compared with 31.6% times for DCGAN. This demonstrates that our modelhas generated more realistic images than DCGAN. We can attribute this difference to our model’sability to generate foreground separately from the background, enabling stronger edge cues.5.4 CIFAR-10We now qualitatively and quantitatively evaluate our model on CIFAR-10, which contains multipleobject categories and also various backgrounds.Comparison of image generation quality: We conduct AMT studies to compare the fidelity ofimage generation. Towards this goal, we generate 1000 images from DCGAN and LR-GAN, re-spectively. We ask 5 judges to label each image to either belong to one of the 10 categories or as‘non recognizable’ or ‘recognizable but not belonging to the listed categories’. We then assign eachimage a quality level between [0,5] that captures the number of judges that agree with the majoritychoice. Fig. 8 shows the images generated by both approaches, ordered by increasing quality level.We merge images at quality level 0 (all judges said non-recognizable) and 1 together, and similarlyimages at level 4 and 5. Visually, the generated samples by our model have clearer boundaries andobject structures. We also computed the fraction of non-recognizable images: Our model had a 10%absolute drop in non-recognizability rate (67.3% for ours vs. 77.7% for DCGAN). For reference,11.4% of real CIFAR images were categorized as non-recognizable. Fig. 9 shows more generated(intermediate) results of our model.Quantitative evaluation on generators: We evaluate the generators based on three metrics: 1)Inception Score; 2) Adversarial Accuracy; 3) Adversarial Divergence. To obtain a classifier modelfor evaluation, we remove the top layer in the discriminator used in our model, and then appendtwo fully connected layers on the top of it. We train this classifier using the training samples ofCIFAR-10 based on the annotations. Following Salimans et al. (2016), we generated 50,000 imagesTable 1: Quantitative comparison between DCGAN and LR-GAN on CIFAR-10.Training Data Real Images DCGAN OursInception Scorey11.180.18 6.64 0.14 7.17 0.07Inception Scoreyy7.230.09 5.69 0.07 6.11 0.06Adversarial Accuracy 83.33 0.08 37.81 0.02 44.22 0.08Adversarial Divergence 0 7.58 0.04 5.57 0.06yEvaluate using the pre-trained Inception net as Salimans et al. (2016)yyEvaluate using the supervisedly trained classifier based on the discriminator in LR-GAN.9Published as a conference paper at ICLR 2017Figure 9: Generation results of our model on CIFAR-10. From left to right, the blocks are: gener-ated background images, foreground images, foreground masks, foreground images carved out bymasks, carved foregrounds after spatial transformation, final composite images and nearest neighbortraining images to the generated images.Figure 10: Category specific generation results of our model on CIFAR-10 categories of horse, frog,and cat (top to bottom). 
The blocks from left to right are: generated background images, foregroundimages, foreground masks, foreground images carved out by masks, carved foregrounds after spatialtransformation and final composite images.based on DCGAN and LR-GAN, repsectively. We compute two types of Inception Scores. Thestandard Inception Score is based on the Inception net as in Salimans et al. (2016), and the contex-tual Inception Score is based on our trained classifier model. To distinguish, we denote the standardone as ‘Inception Scorey’, and the contextual one as ‘Inception Scoreyy’. To obtain the AdversarialAccuracy and Adversarial Divergence scores, we train one generator on each of 10 categories forDCGAN and LR-GAN, respectively. Then, we use these generators to generate samples of differentcategories. Given these generated samples, we train the classifiers for DCGAN and LR-GAN sepa-rately. Along with the classifier trained on the real samples, we compute the Adversarial Accuracy10Published as a conference paper at ICLR 2017and Adversarial Divergence on the real training samples. In Table 1, we report the Inception Scores,Adversarial Accuracy and Adversarial Divergence for comparison. We can see that our model out-performs DCGAN across the board. To point out, we obtan different Inception Scores based ondifferent classifier models, which indicates that the Inception Score varies with different models.Quantitative evaluation on discriminators: We evaluate the discriminator as an extractor for deeprepresentations. Specifically, we use the output of the last convolutional layer in the discriminatoras features. We perform a 1-NN classification on the test set given the full training set. Cosinesimilarity is used as the metric. On the test set, our model achieves 62.09% 0.01% compared toDCGAN’s 56.05% 0.02%.Contextual generation: We also show the efficacy of our approach to generate diverse foregroundsconditioned on fixed background. The results in Fig. 17 in Appendix showcase that the foregroundgenerator generates objects that are compatible with the background. This indicates that the modelhas captured contextual dependencies between the image layers.Category specific models: The objects in CIFAR-10 exhibit huge variability in shapes. That canpartly explain why some of the generated shapes are not as compelling in Fig. 9. To test this hy-pothesis, we reuse the generators trained for each of 10 categories used in our metrics to obtain thegeneration results. Fig. 10 shows results for categories ‘horse’, ‘frog’ and ‘cat’. We can see that themodel is now able to generate object-specific appearances and shapes, similar in vein to our resultson the CUB-200 dataset.5.5 I MPORTANCE OF TRANSFORMATIONSFigure 11: Generation results from an ablated LR-GAN model without affine transformations. Fromtop to bottom, the block rows correspond to different datasets: MNIST-ONE, CUB-200, CIFAR-10.From left to right, the blocks show generated background images, foreground images, foregroundmasks, and final composite images. For comparison, the rightmost column block shows final gener-ated images from a non-ablated model with affine transformations.Fig. 11 shows results from an ablated model without affine transformations in the foreground layers,and compares the results with the full model that does include these transformations. 
We note thatone significant problem emerges that the decompositions are degenerate, in the sense that the modelis unable to break the symmetry between foreground and background layers, often generating objectappearances in the model’s background layer and vice versa. For CUB-200, the final generated im-ages have some blendings between foregrounds and backgrounds. This is particularly the case for11Published as a conference paper at ICLR 2017Figure 12: Generation results from an ablated LR-GAN model without mask generator. The blockrows correspond to different datasets (from top to bottom: MNIST-ONE, CUB-200, CIFAR-10).From left to right, the blocks show generated background images, foreground images, transformedforeground images, and final composite images. For comparison, the rightmost column block showsfinal generated images from a non-ablated model with mask generator.those images without bird-shape masks. For CIFAR-10, a number of generated masks are inverted.In this case, the background images are carved out as the foreground objects. The foreground gener-ator takes almost all the duty to generate the final images, which make it harder to generate imagesas clear as the model with transformation. From these comparisons, we qualitatively demonstratethe importance of modeling transformations in the foreground generation process. Another merit ofusing transformation is that the intermediate outputs of the model are more interpretable and faciliateto the downstreaming tasks, such as scene paring, which is demonstrated in Section 6.8.5.6 I MPORTANCE OF SHAPESWe perform another ablation study by removing the mask generator to understand the importanceof modeling object shapes. In this case, the generated foreground is simply pasted on top of thegenerated background after being transformed. There is no alpha blending between the foregroundsand backgrounds. The generation results for three datasets, MNIST-ONE, CUB-200, CIFAR-10 areshown in Fig. 12. As we can see, though the model works well for the generation of MNIST-ONE, itfails to generate reasonable images across the other datasets. Particularly, the training does not evenconverge for CUB-200. Based on these results, we qualitatively demonstrate that mask generator inour model is fairly important to obtain plausible results, especially for realistic images.REFERENCESXi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Info-gan: Interpretable representation learning by information maximizing generative adversarial nets.arXiv preprint arXiv:1606.03657 , 2016.Trevor Darrell and Alex Pentland. Robust estimation of a multi-layered motion representation. IEEEWorkshop on Visual Motion , 1991.12Published as a conference paper at ICLR 2017Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using alaplacian pyramid of adversarial networks. In Advances in neural information processing systems ,pp. 1486–1494, 2015.S. M. Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, Koray Kavukcuoglu, and Geof-frey E. Hinton. Attend, infer, repeat: Fast scene understanding with generative models. CoRR ,abs/1603.08575, 2016.Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Infor-mation Processing Systems , pp. 2672–2680, 2014.Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. Draw: Arecurrent neural network for image generation. 
arXiv preprint arXiv:1502.046239 , 2015.Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled faces in the wild:A database for studying face recognition in unconstrained environments. Technical Report 07-49,University of Massachusetts, Amherst, October 2007.Jonathan Huang and Kevin Murphy. Efficient inference in occlusion-aware generative models ofimages. CoRR , abs/1511.06362, 2015.Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating imageswith recurrent adversarial networks. arXiv preprint arXiv:1602.05110 , 2016.Phillip Isola and Ce Liu. Scene collaging: Analysis and synthesis of natural images with semanticlayers. In IEEE International Conference on Computer Vision , pp. 3048–3055, 2013.Max Jaderberg, Karen Simonyan, Andrew Zisserman, and koray kavukcuoglu. Spatial transformernetworks. In Advances in Neural Information Processing Systems 28 , pp. 2017–2025, 2015.Anitha Kannan, Nebojsa Jojic, and Brendan Frey. Generative model for layers of appearance anddeformation. AISTATS , 2005.Diederik P Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverseautoregressive flow. arXiv preprint arXiv:1606.04934 , 2016.Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.Hanock Kwak and Byoung-Tak Zhang. Generating images part by part with composite generativeadversarial networks. arXiv preprint arXiv:1607.05387 , 2016.Yann LeCun, L ́eon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied todocument recognition. Proceedings of the IEEE , 86(11):2278–2324, 1998.Elman Mansimov, Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Generating imagesfrom captions with attention. arXiv preprint arXiv:1511.02793 , 2015.Javier Portilla and Eero P Simoncelli. A parametric texture model based on joint statistics of com-plex wavelet coefficients. International journal of computer vision , 40(1):49–70, 2000.Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deepconvolutional generative adversarial networks. arXiv preprint arXiv:1511.06434 , 2015.Scott Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, and Honglak Lee. Learn-ing what and where to draw. arXiv preprint arXiv:1610.02454 , 2016a.Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee.Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396 , 2016b.Nicolas Le Roux, Nicolas Heess, Jamie Shotton, and John Winn. Learning a generative model ofimages by factoring appearance and shape. Neural Computation , 23:593–650, 2011.Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.Improved techniques for training gans. arXiv preprint arXiv:1606.03498 , 2016.13Published as a conference paper at ICLR 2017A ̈aron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks.CoRR , abs/1601.06759, 2016.Carl V ondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics.arXiv preprint arXiv:1609.02612 , 2016.John Wang and Edward Adelson. Representing moving images with layers. IEEE Transactions onImage Processing , 1994.Xiaolong Wang and Abhinav Gupta. Generative image modeling using style and structure adversar-ial networks. arXiv preprint arXiv:1603.05631 , 2016.P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSDBirds 200. 
Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. Attribute2image: Conditional imagegeneration from visual attributes. CoRR , abs/1512.00570, 2015.Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network.arXiv preprint arXiv:1609.03126 , 2016.Jun-Yan Zhu, Philipp Kr ̈ahenb ̈uhl, Eli Shechtman, and Alexei A Efros. Generative visual manipu-lation on the natural image manifold. In European Conference on Computer Vision , pp. 597–613.Springer, 2016.6 A PPENDIX6.1 A LGORITHMAlgo. 1 illustrates the generative process in our model. g(?)evaluates the function gat?.is acomposition operator that composes its operands so that fg(?) =f(g(?)).Algorithm 1 Stochastic Layered Recursive Image Generation1:z0N(0;I)2:x0=Gb(z0) .background generator3:h0l 04:c0l 05:fort2[1T]do6: ztN(0;I)7: htl,ctl LSTM([ zt,ht1l,ct1l]) .pass through LSTM8: ift = 1 then9: yt htl10: else11: yt Elf([htlht1f]) .pass through non-linear embedding layers Elf12: end if13: st Gcf(yt) .predict shared cube for GifandGmf14:at Tf(yt) .object transformation15: ft Gif(st) .generate object appearance16: mt Gmf(st) .generate object shape17: htf EcfPcf(st) .predict shared represenation embedding18: xt ST(mt;at)ST(ft;at) + (1ST(mt;at))xt119:end for6.2 M ODEL CONFIGURATIONSTable 2 lists the information and model configuration for different datasets. The dimensions ofrandom vectors and hidden vectors are all set to 100. We also compare the number of parameters inDCGAN and LR-GAN. The numbers before ‘/’ are our model, after ‘/’ are DCGAN. Based on thesame notation used in (Zhao et al., 2016), the architectures for the different datasets are:14Published as a conference paper at ICLR 2017Table 2: Information and model configurations on different datasets.Dataset MNIST-ONE MNIST-TWO CIFAR-10 CUB-200Image Size 32 64 32 64#Images 60,000 60,000 50,000 5,994#Timesteps 2 3 2 2#Parameters 5.25M/4.11M 7.53M/6.33M 5.26M/4.11M 27.3M/6.34MMNIST-ONE: Gb: (256)4c-(128)4c2s-(64)4c2s-(3)4c2s; Gcf: (512)4c-(256)4c2s-(128)4c2s; Gif: (3)4c2s; Gmf: (1)4c2s;D: (64)4c2s-(128)4c2s-(256)4c2s-(256)4p4s-1MNIST-TWO: Gb: (256)4c-(128)4c2s-(64)4c2s-(32)4c2s-(3)4c2s; Gcf: (512)4c-(256)4c2s-(128)4c2s-(64)4c2s; Gif: (3)4c2s; Gmf: (1)4c2s;D: (64)4c2s-(128)4c2s-(256)4c2s-(512)4c2s-(512)4p4s-1CUB-200: Gb: (512)4c-(256)4c2s-(128)4c2s-(64)4c2s-(3)4c2s; Gcf: (1024)4c-(512)4c2s-(256)4c2s-(128)4c2s; Gif: (3)4c2s; Gmf: (1)4c2s;D: (128)4c2s-(256)4c2s-(512)4c2s-(1024)4c2s-(1024)4p4s-1CIFAR-10: Gb: (256)4c-(128)4c2s-(64)4c2s-(3)4c2s; Gcf: (512)4c-(256)4c2s-(128)4c2s;Gif: (3)4c2s; Gmf: (1)4c2sD: (64)4c2s-(128)4c2s-(256)4c2s-(256)4p4s-16.3 R ESULTS ON MNIST-ONEWe conduct human studies on generation results on MNIST-ONE. Specifically, we generate 1,000images using both LR-GAN and DCGAN. As references, we also include 1000 real images. Thenwe ask the users on AMT to label each image to be one of the digits (0-9). We also provide theman option ‘non recognizable’ in case the generated image does not seem to contain a digit. Eachimage was judged by 5 unique workers. Similar to CIFAR-10, if an image is recognized to be thesame digit by all 5 users, it is assigned to quality level 5. If it is not recognizable according to allusers, it is assigned to quality level 0. Fig. 13 (left) shows the number of images assigned to all sixquality levels. Compared to DCGAN, our model generated more samples with high quality levels.As expected, the real images have many samples with high quality levels. 
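The quality-level assignment used in these human studies (the number of judges agreeing with the majority choice, with level 0 reserved for images no judge could recognize) is easy to reproduce. The sketch below is our reading of that protocol; the label encoding and tie-breaking are assumptions.

```python
from collections import Counter

def quality_level(judgments):
    """Number of judges who agree with the majority label for one image.
    `judgments` is a list of per-judge labels (e.g., digit strings or
    'non-recognizable'); with 5 judges the result is in {1, ..., 5}, and the
    all-'non-recognizable' case is mapped to level 0 as described in the text."""
    label, count = Counter(judgments).most_common(1)[0]
    if label == 'non-recognizable' and count == len(judgments):
        return 0
    return count

# Example: three judges read the image as a '7', two could not recognize it.
print(quality_level(['7', '7', 'non-recognizable', '7', 'non-recognizable']))  # -> 3
```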
In Fig. 13 (right), we showthe number of images that are recognized to each digit category (0-9). For qualitative comparison,we show examplar images at each quality level in Fig. 14. From left to right, the quality levelincreases from 0 to 5. As expected, the images with higher quality level are more clear.For quantitative evaluation, we use the same way as for CIFAR-10. The classifier model used forcontextual Inception Score is trained based on the training set. We generate 60,000 samples basedon DCGAN and LR-GAN for evaluation, respectively. To obtain the Adversarial Accuracy andAdversarial Divergence, we first train 10 generators for 10 digit categories separately, and then usethe generated samples to train the classifier. As shown in Table 3, our model has higher scores thanDCGAN on both standard and contextual Inception Score. Also, our model has a slightly higherFigure 13: Statistics of annotations in human studies on MNIST-ONE. Left: distribution of qualitylevel; Right: distribution of recognized digit categories.15Published as a conference paper at ICLR 2017Figure 14: Qualitative comparison on MNIST-ONE. Top three rows are samples generated by DC-GAN. Bottom three rows are samples generated by LR-GAN. The quality level increases from leftto right as determined via human studies.Table 3: Quantitative comparison on MNIST-ONE.Training Data Real Images DCGAN OursInception Scorey1.830.01 2.03 0.01 2.06 0.01Inception Scoreyy9.150.04 6.42 0.03 7.15 0.04Adversarial Accuracy 95.22 0.25 26.12 0.07 26.61 0.06Adversarial Divergence Score 0 8.47 0.03 8.39 0.04yEvaluate using the pre-trained Inception net as Salimans et al. (2016)yyEvaluate using the supervisedly trained classifier based on the discriminator in LR-GAN.adversarial accuracy, and lower adversarial divergence than DCGAN. We find that the all threeimage sets have low standard Inception Scores. This is mainly because the Inception net is trainedon ImageNet, which has a very different data distribution from the MNIST dataset. Based on this,we argue that the standard Inception Score is not suitable for some image datasets.6.4 M ORE RESULTS ON CUB-200In this experiment, we reduce the minimal allowed object scale to 1.1, which allows the model togenerate larger foreground objects. The results are shown in Fig. 15. Similar to the results when theconstraint is 1.2, the crisp bird-like masks are generated automatically by our model.Figure 15: Generation results of our model on CUB-200 when setting minimal allowed scale to1.1. From left to right, the blocks show the generated background images, foreground images,foreground masks, foreground images carved out by masks, carved foreground images after spatialtransformation. The sixth and seventh blocks are final composite images and the nearest neighborreal images.16Published as a conference paper at ICLR 20176.5 M ORE RESULTS ON CIFAR-106.5.1 Q UALITATIVE RESULTSIn Fig. 16, we show more results on CIFAR-10 when setting minimal allowed object scale to 1.1.The rightmost column block also shows the training images that are closest to the generated images(cosine similarity in pixel space). We can see our model does not memorize the training data.Figure 16: Generation results of our model on CIFAR-10 with minimal allowed scale be 1.1, Fromleft to right, the layout is same to Fig. 15.6.5.2 W ALKING IN THE LATENT SPACESimilar to DCGAN, we also show results by walking in the latent space. Note that our model hastwo or more inputs. So we can walk along any of them or their combination. In Fig. 
17, we generatemultiple foregrounds for the same fixed generated background. We find that our model consistentlygenerates contextually compatible foregrounds. For example, for the grass-like backgrounds, theforeground generator generates horses and deer, and airplane-like objects for the blue sky.6.5.3 W ORD CLOUD BASED ON HUMAN STUDYAs we mentioned above, we conducted human studies on CIFAR-10. Besides asking people to selecta name from a list for an image, we also conducted another human study where we ask people to useone word (free-form) to describe the main object in the image. Each image was ‘named’ by 5 uniquepeople. We generate word clouds for real images, images generated by DCGAN and LR-GAN, asshown in Fig. 18.6.6 R ESULTS ON LFW FACE DATASETWe conduct experiment on face images in LFW dataset (Huang et al., 2007). Different from previousworks which work on cropped and aligned faces, we directly generate the original images whichcontains a large portion of backgrounds. This configuration helps to verify the efficiency of LR-GANto model the object appearance, shape and pose. In Fig. 19, we show the (intermediate) generationresults of LR-GAN. Surprisingly, without any supervisions, the model generated background andfaces in separate steps, and the generated masks accurately depict face shapes. Moreover, the model17Published as a conference paper at ICLR 2017Figure 17: Walking in the latent foreground space by fixing backgrounds in our model on CIFAR-10. From left to right, the blocks are: generated background images, foreground images, foregroundmasks, foreground images carved out by masks, carved out foreground images after spatial transfor-mation, and final composite images. Each row has the same background, but different foregrounds.Figure 18: Statistics of annotations in human studies on CIFAR-10. Left to right: word cloud forreal images, images generated by DCGAN, images generated by LR-GAN.Figure 19: Generation results of our model on LFW. From left to right, the blocks are: generatedbackground images, foreground images, foreground masks, carved out foreground images after spa-tial transformation, and final composite images.18Published as a conference paper at ICLR 2017learns where to place the generated faces so that the whole image looks natural. For comparison,please refer to (Kwak & Zhang, 2016) which does not model the transformation. We can find thegeneration results degrade much.6.7 S TATISTICS ON TRANSFORMATION MATRICESIn this part, we analyze the statistics on the transformation matrices generated by our model fordifferent datasets, including MNIST-ONE, CUB-200, CIFAR-10 and LFW. We used affine transfor-mation in our model. So there are 6 parameters, scaling in the x coordinate ( sx), scaling in the ycoordinate (sy), translation in the x coordinate ( tx), translation in the y coordinate ( ty), rotation inthe x coordinate ( rx) and rotation in the y coordinate ( ry). In Fig. 20, we show the histograms on dif-ferent parameters for different datasets.These histograms show that the model produces non-trivialvaried scaling, translation and rotation on all datasets. For different datasets, the learned transfor-mation have different patterns. We hypothesize that this is mainly determined by the configurationsof objects in the images. For example, on MNIST-ONE, all six parameters have some fluctuationssince the synthetic dataset contains digits randomly placed at different locations. 
For the other threedatasets, the scalings converge to single value since the object sizes do not vary much, and the vari-ations on rotation and translation suffice to generate realistic images. Specifically, we can find thegenerator largely relies on the translation on x coordinate for generating CUB-200. This makessense since birds in the images have similar scales, orientations but various horizontal locations. ForCIFAR-10, since there are 10 different object categories, the configurations are more diverse, hencethe generator uses all parameters for generation except for the scaling. For LFW, since faces havesimilar configurations, the learned transformations have less fluctuation as well. As a result, we cansee that LR-GAN indeed models the transformations on the foreground to generate images.6.8 C ONDITIONAL IMAGE GENERATIONConsidering our model can generate object-like masks (shapes) for images, we conducted an ex-periment to evaluate whether our model can be potentially used for image segmentation and objectdetection. We make some changes to the model. For the background generator, the input is a realimage instead of a random vector. Then the image is passed through an encoder to extract the hid-den features, which replaces the random vector z0and are fed to the background generator. For theforeground generator, we subtract the image generated by the background generator from the inputimage to obtain a residual image. Then this residual image is fed to the same encoder to get thehidden features, which are used as the input for foreground generator. In our conditional model,we want to reconstruct the image, so we add a reconstruction loss along with the adversarial loss.We train this conditional model on CIFAR-10. The (intermediate) outputs of the model is shownin Fig. 21. Interestingly, the model successfully learned to decompose the input images into back-ground and foreground. The background generator tends to do an image inpainting by generating acomplete background without object, while the foreground generator works as a segmentation modelto get object mask from the input image.Similarly, we also run the conditional LR-GAN on LFW dataset. As we can see in Fig. 22, the fore-ground generator automatically and consistently learned to generate the face regions, even thoughthere are large portion of background in the input images. In other words, the conditional LR-GANsuccessfully learned to detection faces in images. We suspect this success is due to that it has lowcost for the generator to generate similar images, and thus converge to the case that the first generatorgenerate background, and the second generator generate face images.Based on these experiments, we argue that our model can be possibly used for image segmentationand object detection in a generative and unsupervised manner. One future work would be verifyingthis by applying it to high-resolution and more complicate datasets.19Published as a conference paper at ICLR 2017Figure 20: Histograms of transformation parameters learnt in our model for different datasets. Fromleft to right, the datasets are: MNIST-ONE, CUB-200, CIFAR-10 and LFW. From top to bottom,they are scaling sx,sy, translation tx,ty, and rotation rx,ryinxandycoordinate, respectively.20Published as a conference paper at ICLR 2017Figure 21: Conditional generation results of our model on CIFAR-10. 
From left to right, the blocksare: real images, generated background images, foreground images, foreground masks, foregroundimages carved out by masks, carved foreground images after spatial transformation, and final com-posite (reconstructed) images.Figure 22: Conditional generation results of our model on LFW, displayed with the same layout toFig. 21.21
S1Q8CV7Nl
rJxDkvqee
ICLR.cc/2017/conference/-/paper347/official/review
{"title": "", "rating": "6: Marginally above acceptance threshold", "review": "This paper proposes an approach to learning word vector representations for character sequences and acoustic spans jointly. The paper is clearly written and both the approach and experiments seem reasonable in terms of execution. The motivation and tasks feel a bit synthetic as it requires acoustics spans for words that have already been segmented from continuous speech - - a major assumption. The evaluation tasks feel a bit synthetic overall and in particular when evaluating character based comparisons it seems there should also be phoneme based comparisons.\n\nThere's a lot of discussion of character edit distance relative to acoustic span similarity. It seems very natural to also include phoneme string edit distance in this discussion and experiments. This is especially true of the word similarity test. Rather than only looking at levenshtein edit distance of characters you should evaluate edit distance of the phone strings relative to the acoustic embedding distances. Beyond the evaluation task the paper would be more interesting if you compared character embeddings with phone string embeddings. I believe the last function could remain identical it's just swapping out characters for phones as the symbol set. finally in this topic the discussion and experiments should look at homophones As if not obvious what the network would learn to handle these.\n\n the vocabulary size and training data amount make this really a toy problem. although there are many pairs constructed most of those pairs will be easy distinctions. the experiments and conclusions would be far stronger with a larger vocabulary and word segment data set with subsampling all pairs perhaps biased towards more difficult or similar pairs.\n\n it seems this approach is unable to address the task of keyword spotting in longer spoken utterances. If that's the case please add some discussion as to why you are solving the problem of word embeddings given existing word segmentations. The motivating example of using this approach to retrieve words seems flawed if a recognizer must be used to segment words beforehand ", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Multi-view Recurrent Neural Acoustic Word Embeddings
["Wanjia He", "Weiran Wang", "Karen Livescu"]
Recent work has begun exploring neural acoustic word embeddings–fixed dimensional vector representations of arbitrary-length speech segments corresponding to words. Such embeddings are applicable to speech retrieval and recognition tasks, where reasoning about whole words may make it possible to avoid ambiguous sub-word representations. The main idea is to map acoustic sequences to fixed-dimensional vectors such that examples of the same word are mapped to similar vectors, while different-word examples are mapped to very different vectors. In this work we take a multi-view approach to learning acoustic word embeddings, in which we jointly learn to embed acoustic sequences and their corresponding character sequences. We use deep bidirectional LSTM embedding models and multi-view contrastive losses. We study the effect of different loss variants, including fixed-margin and cost-sensitive losses. Our acoustic word embeddings improve over previous approaches for the task of word discrimination. We also present results on other tasks that are enabled by the multi-view approach, including cross-view word discrimination and word similarity.
["acoustic sequences", "examples", "acoustic word embeddings", "word discrimination", "neural acoustic word", "dimensional vector representations", "speech segments", "words"]
https://openreview.net/forum?id=rJxDkvqee
https://openreview.net/pdf?id=rJxDkvqee
https://openreview.net/forum?id=rJxDkvqee&noteId=S1Q8CV7Nl
Published as a conference paper at ICLR 2017MULTI-VIEW RECURRENT NEURALACOUSTIC WORD EMBEDDINGSWanjia HeDepartment of Computer ScienceUniversity of ChicagoChicago, IL 60637, USAwanjia@ttic.eduWeiran Wang & Karen LivescuToyota Technological Institute at ChicagoChicago, IL 60637, USAfweiranwang,klivescu g@ttic.eduABSTRACTRecent work has begun exploring neural acoustic word embeddings—fixed-dimensional vector representations of arbitrary-length speech segments corre-sponding to words. Such embeddings are applicable to speech retrieval and recog-nition tasks, where reasoning about whole words may make it possible to avoidambiguous sub-word representations. The main idea is to map acoustic sequencesto fixed-dimensional vectors such that examples of the same word are mappedto similar vectors, while different-word examples are mapped to very differentvectors. In this work we take a multi-view approach to learning acoustic wordembeddings, in which we jointly learn to embed acoustic sequences and their cor-responding character sequences. We use deep bidirectional LSTM embeddingmodels and multi-view contrastive losses. We study the effect of different lossvariants, including fixed-margin and cost-sensitive losses. Our acoustic word em-beddings improve over previous approaches for the task of word discrimination.We also present results on other tasks that are enabled by the multi-view approach,including cross-view word discrimination and word similarity.1 I NTRODUCTIONWord embeddings—continuous-valued vector representations of words—are an almost ubiquitouscomponent of recent natural language processing (NLP) research. Word embeddings can be learnedusing spectral methods (Deerwester et al., 1990) or, more commonly in recent work, via neuralnetworks (Bengio et al., 2003; Mnih & Hinton, 2007; Mikolov et al., 2013; Pennington et al.,2014). Word embeddings can also be composed to form embeddings of phrases, sentences, ordocuments (Socher et al., 2014; Kiros et al., 2015; Wieting et al., 2016; Iyyer et al., 2015).In typical NLP applications, such embeddings are intended to represent the semantics of the cor-responding words/sequences. In contrast, embeddings that represent the way a word or sequencesounds are rarely considered. In this work we address this problem, starting with embeddings of in-dividual words. Such embeddings could be useful for tasks like spoken term detection (Fiscus et al.,2007), spoken query-by-example search (Anguera et al., 2014), or even speech recognition usinga whole-word approach (Gemmeke et al., 2011; Bengio & Heigold, 2014). In tasks that involvecomparing speech segments to each other, vector embeddings can allow more efficient and more ac-curate distance computation than sequence-based approaches such as dynamic time warping (Levinet al., 2013, 2015; Kamper et al., 2016; Settle & Livescu, 2016; Chung et al., 2016).We consider the problem of learning vector representations of acoustic sequences and orthographic(character) sequences corresponding to single words, such that the learned embeddings representthe way the word sounds. We take a multi-view approach, where we jointly learn the embeddingsfor character and acoustic sequences. We consider several contrastive losses, based on learningfrom pairs of matched acoustic-orthographic examples and randomly drawn mismatched pairs. 
Thelosses correspond to different goals for learning such embeddings; for example, we might want theembeddings of two waveforms to be close when they correspond to the same word and far when theycorrespond to different ones, or we might want the distances between embeddings to correspond tosome ground-truth orthographic edit distance.1Published as a conference paper at ICLR 2017One of the useful properties of this multi-view approach is that, unlike earlier work on acoustic wordembeddings, it produces both acoustic and orthographic embeddings that can be directly compared.This makes it possible to use the same learned embeddings for multiple single-view and cross-viewtasks. Our multi-view embeddings produce improved results over earlier work on acoustic worddiscrimination, as well as encouraging results on cross-view discrimination and word similarity.12 O UR APPROACHIn this section, we first introduce our approach for learning acoustic word embeddings in a multi-view setting, after briefly reviewing related approaches to put ours in context. We then discussthe particular neural network architecture we use, based on bidirectional long short-term memory(LSTM) networks (Hochreiter & Schmidhuber, 1997).2.1 M ULTI -VIEW LEARNING OF ACOUSTIC WORD EMBEDDINGSPrevious approaches have focused on learning acoustic word embeddings in a “single-view” setting.In the simplest approach, one uses supervision of the form “acoustic segment xis an instance ofthe word y”, and trains the embedding to be discriminative of the word identity. Formally, given adataset of paired acoustic segments and word labels f(xi;yi)gNi=1, this approach solves the follow-ing optimization:minf;hobjclassify :=1NNXi`(h(f(xi));yi); (1)where network fmaps an acoustic segment into a fixed-dimensional feature vector/embedding, hisa classifier that predicts the corresponding word label from the label set of the training data, and theloss`measures the discrepancy between the prediction and ground-truth word label (one can useany multi-class classification loss here, and a typical choice is the cross-entropy loss where hhas asoftmax top layer). The two networks fandhare trained jointly. Equivalently, one could considerthe composition h(f(x))as a classifier network, and use any intermediate layer’s activations as thefeatures. We refer to the objective in (1) as the “classifier network” objective, which has been used inseveral prior studies on acoustic word embeddings (Bengio & Heigold, 2014; Kamper et al., 2016;Settle & Livescu, 2016).This objective, however, is not ideal for learning acoustic word embeddings. This is because theset of possible word labels is huge, and we may not have enough instances of each label to traina good classifier. In downstream tasks, we may encounter acoustic segments of words that did notappear in the embedding training set, and it is not clear that the classifier-based embeddings willhave reasonable behavior on previously unseen words.An alternative approach, based on Siamese networks (Bromley et al., 1993), uses supervision of theform “segment x1is similar to segment x2, and is not similar to segment x3”, where two segmentsare considered similar if they have the same word label and dissimilar otherwise. 
Models based on Siamese networks have been used for a variety of representation learning problems in NLP (Hu et al., 2014; Wieting et al., 2016), vision (Hadsell et al., 2006), and speech (Synnaeve et al., 2014; Kamper et al., 2015), including acoustic word embeddings (Kamper et al., 2016; Settle & Livescu, 2016). A typical objective in this category enforces that the distance between (x^1, x^3) is larger than the distance between (x^1, x^2) by some margin:

\min_f \; obj_{siamese} := \frac{1}{N} \sum_{i=1}^{N} \max\left(0,\; m + dis\left(f(x_i^1), f(x_i^2)\right) - dis\left(f(x_i^1), f(x_i^3)\right)\right),    (2)

where the network f extracts the fixed-dimensional embedding, the distance function dis(\cdot,\cdot) measures the distance between the two embedding vectors, and m > 0 is the margin parameter. The term "Siamese" (Bromley et al., 1993; Chopra et al., 2005) refers to the fact that the triplet (x^1, x^2, x^3) share the same embedding network f.

Unlike the classification-based loss, the Siamese network loss does not enforce hard decisions on the label of each segment. Instead it tries to learn embeddings that respect distances between word pairs, which can be helpful for dealing with unseen words. The Siamese network approach also uses more examples in training, as one can easily generate many more triplets than (segment, label) pairs, and it is not limited to those labels that occur a sufficient number of times in the training set. (Footnote 1: Our tensorflow implementation is available at https://github.com/opheadacheh/Multi-view-neural-acoustic-words-embeddings)

The above approaches treat the word labels as discrete classes, which ignores the similarity between different words, and does not take advantage of the more complex information contained in the character sequences corresponding to word labels. The orthography naturally reflects some aspects of similarity between the words' pronunciations, which should also be reflected in the acoustic embeddings. One way to learn features from multiple sources of complementary information is using a multi-view representation learning setting. We take this approach, and consider the acoustic segment and the character sequence to be two different views of the pronunciation of the word.

While many deep multi-view learning objectives are applicable (Ngiam et al., 2011; Srivastava & Salakhutdinov, 2014; Sohn et al., 2014; Wang et al., 2015), we consider the multi-view contrastive loss objective of Hermann & Blunsom (2014), which is simple to optimize and implement and performs well in practice. In this algorithm, we embed acoustic segments x by a network f and character label sequences c by another network g into a common space, and use weak supervision of the form "for a paired segment x^+ and its character label sequence c^+, the distance between their embeddings is much smaller than the distance between the embeddings of x^+ and an unmatched character label sequence c^-". Formally, we optimize the following objective with such supervision:

\min_{f,g} \; obj_0 := \frac{1}{N} \sum_{i=1}^{N} \max\left(0,\; m + dis\left(f(x_i^+), g(c_i^+)\right) - dis\left(f(x_i^+), g(c_i^-)\right)\right),    (3)

where c_i^- is a negative character label sequence of x_i^+ to be contrasted with the positive/correct character sequence c_i^+, and m is the margin parameter. In this paper we use the cosine distance, dis(a, b) = 1 - \langle a/\|a\|, \; b/\|b\| \rangle.

Note that in the multi-view setting, we have multiple ways of generating triplets that contain one positive pair and one negative pair each.
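A minimal PyTorch sketch of the multi-view contrastive objective obj_0 in Eq. (3), assuming the two embedding networks f and g already return fixed-dimensional vectors; the function and variable names are ours, not from the released tensorflow code.

```python
import torch
import torch.nn.functional as F

def cosine_distance(a, b):
    """dis(a, b) = 1 - <a/||a||, b/||b||> for batches of embeddings (N x d)."""
    return 1.0 - F.cosine_similarity(a, b, dim=1)

def obj0_loss(f_x_pos, g_c_pos, g_c_neg, margin=0.5):
    """Eq. (3): hinge on the margin between the matched acoustic/character pair
    and the same acoustic segment paired with a sampled negative character sequence."""
    pos = cosine_distance(f_x_pos, g_c_pos)
    neg = cosine_distance(f_x_pos, g_c_neg)
    return torch.clamp(margin + pos - neg, min=0.0).mean()

# Illustrative call with random embeddings (batch of 20, embedding dimension 1024).
f_x = torch.randn(20, 1024)   # acoustic-view embeddings f(x+)
g_cp = torch.randn(20, 1024)  # matched character-view embeddings g(c+)
g_cn = torch.randn(20, 1024)  # negative character-view embeddings g(c-)
loss = obj0_loss(f_x, g_cp, g_cn, margin=0.5)
```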
Below are the other three objectives we explore in thispaper:minf;gobj1:=1NNXimax0; m+disf(x+i); g(c+i)disg(c+i); g(ci); (4)minf;gobj2:=1NNXimax0; m+disf(x+i); g(c+i)disf(xi); g(c+i); (5)minf;gobj3:=1NNXimax0; m+disf(x+i); g(c+i)disf(x+i); f(xi): (6)xiin (5) and (6) refers to a negative acoustic feature sequence, that is one with a different labelfrom x+i. We note that obj1andobj3contain distances between same-view embeddings, and areless thoroughly explored in the literature. We will also consider combinations of obj0through obj3.Finally, thus far we have considered losses that do not explicitly take into account the degree ofdifference between the positive and negative pairs (although the learned embeddings may implicitlylearn this through the relationship between sequences in the two views). We also consider a cost-sensitive objective designed to explicitly arrange the embedding space such that word similarity isrespected. In (3), instead of a fixed margin m, we use:m(c+;c) :=mmaxmin (tmax; editdis (c+;c))tmax; (7)wheretmax>0is a threshold for edit distances (all edit distances above tmaxare considered equallybad), andmmax is the maximum margin we impose. The margin is set to mmax if the edit distancebetween two character sequences is above tmax; otherwise it scales linearly with the edit distanceeditdis (c+;c)). We use the Levenshtein distance as the edit distance. Here we explore the cost-sensitive margin with obj0, but it could in principle be used with other objectives as well.2In experiments, we use the unit-length vectorakakas the embedding. It tends to perform better than f(x)and more directly reflects the cosine similarity. This is equivalent to adding a nonlinear normalization layer ontop of f.3Published as a conference paper at ICLR 2017LSTMcellrecurrentconnectionsinputacousticfeaturesLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellstackedlayersx"xLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellf$xf%xLSTMcellLSTMcellLSTMcellLSTMcellfx=[f%xf'x]g(c)=[g%cg'c]f(x-.)g(c-.)g(c-/)outputacousticembeddingoutputcharactersembeddingLSTMcellrecurrentconnectionsinputcharactersequencesLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellc"cLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellg$cg%cowrd1000001001000001Figure 1: Illustration of our embedding architecture and contrastive multi-view approach.2.2 R ECURRENT NEURAL NETWORK ARCHITECTURESince the inputs of both views have a sequential structure, we implement both fandgwith recur-rent neural networks and in particular long-short term memory networks (LSTMs). Recurrent neu-ral networks are the state-of-the-art models for a number of speech tasks including speech recogni-tion Graves et al. (2013), and LSTM-based acoustic word embeddings have produced the best resultson one of the tasks in our experiments (Settle & Livescu, 2016).As shown in Figure 1, our fandgare produced by multi-layer (stacked) bidirectional LSTMs.The inputs can be any frame-level acoustic feature representation and vector representation of thecharacters in the orthographic input. At each layer, two LSTM cells process the input sequence fromleft to right and from right to left respectively. At intermediate layers, the outputs of the two LSTMsat each time step are concatenated to form the input sequence to the next layer. 
At the top layer, thelast time step outputs of the two LSTMs are concatenated to form a fixed-dimensional embeddingof the view, and the embeddings are then used to calculate the cosine distances in our objectives.3 R ELATED WORKWe are aware of no prior work on multi-view learning of acoustic and character-based word embed-dings. However, acoustic word embeddings learned in other ways have recently begun to be studied.Levin et al. (2013) proposed an approach for embedding an arbitrary-length segment of speech asa fixed-dimensional vector, based on representing each word as a vector of dynamic time warping(DTW) distances to a set of template words. This approach produced improved performance on aword discrimination task compared to using raw DTW distances, and was later also applied success-fully for a query-by-example task (Levin et al., 2015). One disadvantage of this approach is that,while DTW handles the issue of variable sequence lengths, it is computationally costly and involvesa number of DTW parameters that are not learned.Kamper et al. (2016) and Settle & Livescu (2016) later improved on Levin et al. ’s word discrimi-nation results using convolutional neural networks (CNNs) and recurrent neural networks (RNNs)trained with either a classification or contrastive loss. Bengio & Heigold (2014) trained convolu-tional neural network (CNN)-based acoustic word embeddings for rescoring the outputs of a speechrecognizer, using a loss combining classification and ranking criteria. Maas et al. (2012) traineda CNN to predict a semantic word embedding from an acoustic segment, and used the resultingembeddings as features in a segmental word-level speech recognizer. Harwath and Glass Harwath& Glass (2015); Harwath et al. (2016); Harwath & Glass (2017) jointly trained CNN embeddingsof images and spoken captions, and showed that word-like unit embeddings can be extracted fromthe speech model. CNNs require normalizing the duration of the input sequences, which has typ-ically been done via padding. RNNs, on the other hand, are more flexible in dealing with verydifferent-length sequences. Chen et al. (2015) used long short-term memory (LSTM) networks witha classification loss to embed acoustic words for a simple (single-query) query-by-example searchtask. Chung et al. (2016) learned acoustic word embeddings based on recurrent neural network(RNN) autoencoders, and found that they improve over DTW for a word discrimination task similarto that of Levin et al. (2013). Audhkhasi et al. (2017) learned autoencoders for acoustic and writtenwords, as well as a model for comparing the two, and applied these to a keyword search task.4Published as a conference paper at ICLR 2017Evaluation of acoustic word embeddings in downstream tasks such as speech recognition and searchcan be costly, and can obscure details of embedding models and training approaches. Most eval-uations have been based on word discrimination – the task of determining whether two speechsegments correspond to the same word or not – which can be seen as a proxy for query-by-examplesearch (Levin et al., 2013; Kamper et al., 2016; Settle & Livescu, 2016; Chung et al., 2016). Onedifference between word discrimination and search/recognition tasks is that in word discriminationthe word boundaries are given. However, prior work has been able to apply results from word dis-crimination Levin et al. (2013) to improve a query-by-example system without known word bound-aries Levin et al. 
(2015), by simply applying their embeddings to non-word segments as well.The only prior work focused on vector embeddings of character sequences explicitly aimed at repre-senting their acoustic similarity is that of Ghannay et al. (2016), who proposed evaluations based onnearest-neighbor retrieval, phonetic/orthographic similarity measures, and homophone disambigua-tion. We use related tasks here, as well as acoustic word discrimination for comparison with priorwork on acoustic embeddings.4 E XPERIMENTS AND RESULTSThe ultimate goal is to gain improvements in speech systems where word-level discrimination isneeded, such as speech recognition and query-by-example search. However, in order to focus on thecontent of the embeddings themselves and to more quickly compare a variety of models, it is desir-able to have surrogate tasks that serve as intrinsic measures of performance. Here we consider threeforms of evaluation, all based on measuring whether cosine distances between learned embeddingscorrespond well to desired properties.In the first task, acoustic word discrimination , we are given a pair of acoustic sequences andmust decide whether they correspond to the same word or to different words. This task has beenused in several prior papers on acoustic word embeddings Kamper et al. (2015, 2016); Chung et al.(2016); Settle & Livescu (2016) and is a proxy for query-by-example search. For each given spokenword pair, we calculate the cosine distance between their embeddings. If the cosine distance isbelow a threshold, we output “yes” (same word), otherwise we output “no” (different words). Theperformance measure is the average precision (AP), which is the area under the precision-recallcurve generated by varying the threshold and has a maximum value of 1.In our multi-view setup, we embed not only the acoustic words but also the character sequences.This allows us to use our embeddings also for tasks involving comparisons between written andspoken words. For example, the standard task of spoken term detection (Fiscus et al., 2007) involvessearching for examples of a given text query in spoken documents. This task is identical to query-by-example except that the query is given as text. In order to explore the potential of multi-viewembeddings for such tasks, we design another proxy task, cross-view word discrimination . Herewe are given a pair of inputs, one a written word and one an acoustic word segment, and our taskis to determine if the acoustic signal is an example of the written word. The evalution proceedsanalogously to the acoustic word discrimination task: We output “yes” if the cosine distance be-tween the embeddings of the written and spoken sequences are below some threshold, and measureperformance as the average precision (AP) over all thresholds.Finally, we also would like to obtain a more fine-grained measure of whether the learned embeddingscapture our intuitive sense of similarity between words. Being able to capture word similarity mayalso be useful in building query or recognition systems that fail gracefully and produce human-like errors. For this purpose we measure the rank correlation between embedding distances andcharacter edit distances. This is analogous to the evaluation of semantic word embeddings via therank correlation between embedding distances and human similarity judgments (Finkelstein et al.,2001; Hill et al., 2015). In our case, however, we do not use human judgments since the ground-truthedit distances themselves provide a good measure. 
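The two kinds of evaluation just described (average precision over same/different-word pairs, and rank correlation against edit distances) can be computed directly from pairwise cosine distances. The sketch below is illustrative only: the function names are hypothetical and the use of SciPy for Spearman's rank correlation is an assumption, not part of the original evaluation code.

```python
import numpy as np
from scipy.stats import spearmanr

def average_precision(distances, same_word):
    # distances: cosine distance for every evaluation pair
    # same_word: 1 if the two items share a word label, 0 otherwise
    # AP is the area under the precision-recall curve obtained by sweeping the
    # decision threshold, i.e. the mean precision at the rank of each positive pair.
    order = np.argsort(distances)                      # most similar pairs first
    labels = np.asarray(same_word)[order].astype(bool)
    precision_at_k = np.cumsum(labels) / (np.arange(labels.size) + 1)
    return float(precision_at_k[labels].mean())

def word_similarity_score(embedding_distances, edit_distances):
    # Rank correlation (Spearman's rho) between embedding distances and
    # orthographic (Levenshtein) edit distances over a set of word pairs.
    return spearmanr(embedding_distances, edit_distances).correlation
```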
We refer to this as the word similarity task,and we apply this measure to both pairs of acoustic embeddings and pairs of character sequenceembeddings. Similar measures have been proposed by Ghannay et al. (2016) to evaluate acousticword embeddings, although they considered only near neighbors of each word whereas we considerthe correlation across the full range of word pairs.5Published as a conference paper at ICLR 2017In the experiments described below, we first focus on the acoustic word discrimination task for pur-poses of initial exploration and hyperparameter search, and then largely fix the models for evaluationusing the cross-view word discrimination and word similarity measures.4.1 D ATAWe use the same experimental setup and data as in Kamper et al. (2015, 2016); Settle & Livescu(2016). The task and setup were first developed by (Carlin et al., 2011). The data is drawn fromthe Switchboard English conversational speech corpus (Godfrey et al., 1992). The spoken wordsegments range in duration from 50 to 200 frames (0.5 - 2 seconds). The train/dev/test splitscontain 9971/10966/11024 pairs of acoustic segments and character sequences, corresponding to1687/3918/3390 unique words. In computing the AP for the dev or test set, all pairs in the set areused, yielding approximately 60 million word pairs.The input to the embedding model in the acoustic view is a sequence of 39-dimensional vectors(one per frame) of standard mel frequency cepstral coefficients (MFCCs) and their first and secondderivatives. The input to the character sequence embedding model is a sequence of 26-dimensionalone-hot vectors indicating each character of the word’s orthography.4.2 M ODEL DETAILS AND HYPERPARAMETER TUNINGWe experiment with different neural network architectures for each view, varying the number ofstacked LSTM layers, the number of hidden units for each layer, and the use of single- or bidirec-tional LSTM cells. A coarse grid search shows that 2-layer bidirectional LSTMs with 512 hiddenunits per direction per layer perform well on the acoustic word discrimination task, and we keepthis structure fixed for subsequent experiments (see Appendix A for more details). We use the out-puts of the top-layer LSTMs as the learned embedding for each view, which is 1024-dimensional ifbidirectional LSTMs are used.In training, we use dropout on the inputs of the acoustic view and between stacked layers for bothviews. The architecture is illustrated in Figure 1. For each training example, our contrastive lossesrequire a corresponding negative example. We generate a negative character label sequence by uni-formly sampling a word label from the training set that is different from the positive label. Weperform a new negative label sampling at the beginning of each epoch. Similarly, negative acousticfeature sequences are uniformly sampled from all of the differently labeled acoustic feature se-quences in the training set.The network weights are initialized with values sampled uniformly from the range [0:05;0:05].We use the Adam optimizer (Kingma & Ba, 2015) for updating the weights using mini-batches of20 acoustic segments, with an initial learning rate tuned over f0:0001;0:001g. Dropout is used ateach layer, with the rate tuned over f0;0:2;0:4;0:5g, in which 0:4usually outperformed others.The margin in our basic contrastive objectives 0-3 is tuned over f0:3;0:4;0:5;0:6;0:7g, out ofwhich 0:4and0:5typically yield best results. 
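The 2-layer bidirectional LSTM view embedders described above can be sketched roughly as follows. This is an illustrative PyTorch re-implementation under stated assumptions (the paper's released code is in TensorFlow); the class and variable names are hypothetical, and the padding/packing needed for variable-length minibatches is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewEmbedder(nn.Module):
    """Stacked bidirectional LSTM; the embedding concatenates the final hidden
    states of the forward and backward top-layer LSTMs and is length-normalized."""

    def __init__(self, input_dim, hidden_dim=512, num_layers=2, dropout=0.4):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers=num_layers,
                            batch_first=True, bidirectional=True, dropout=dropout)

    def forward(self, x):
        # x: (batch, time, input_dim)
        _, (h_n, _) = self.lstm(x)
        # h_n: (num_layers * 2, batch, hidden_dim); the last two entries are the
        # forward and backward directions of the top layer.
        emb = torch.cat([h_n[-2], h_n[-1]], dim=1)      # (batch, 2 * hidden_dim)
        return F.normalize(emb, dim=1)                  # unit length (cf. footnote 2)

f_net = ViewEmbedder(input_dim=39)   # acoustic view: 39-dim MFCCs plus deltas
g_net = ViewEmbedder(input_dim=26)   # character view: 26-dim one-hot vectors
```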
For obj_0 with the cost-sensitive margin, we tune the maximum margin m_max over {0.5, 0.6, 0.7} and the threshold t_max over {9, 11, 13}. We train each model for up to 1000 epochs. The model that gives the best AP on the development set is used for evaluation on the test set.

4.3 EFFECTS OF DIFFERENT OBJECTIVES

We presented four contrastive losses (3)–(6) and potential combinations in Section 2.1. We now explore the effects of these different objectives on the word discrimination tasks.

Table 1 shows the development set AP for acoustic and cross-view word discrimination achieved using the various objectives. We tuned the objectives for the acoustic discrimination task, and then used the corresponding converged models for the cross-view task. Of the simple contrastive objectives, obj_0 and obj_2 (which involve only cross-view distances) slightly outperform the other two on the acoustic word discrimination task. The best-performing objective is the "symmetrized" objective obj_0 + obj_2, which significantly outperforms all individual objectives (and the combination of the four). Finally, the cost-sensitive objective is very competitive as well, while falling slightly short of the best performance. We note that a similar objective to our obj_0 + obj_2 was used by Vendrov et al. (2016) for the task of caption-image retrieval, where the authors essentially use all non-paired examples from the other view in the minibatch as negative examples (instead of random sampling one negative example as we do) to be contrasted with one paired example.

Figure 2: Development set AP for several objectives on acoustic word discrimination. [Plot: average precision on the development set vs. training epochs (0–1000) for obj_0, obj_2, and obj_0 + obj_2.]

Table 1: Word discrimination performance with different objectives.
  Objective                          Dev AP (acoustic)   Dev AP (cross-view)
  obj_0                              0.659               0.791
  obj_1                              0.654               0.807
  obj_2                              0.675               0.788
  obj_3                              0.640               0.782
  obj_0 + obj_2                      0.702               0.814
  obj_0 + obj_1 + obj_2 + obj_3      0.672               0.804
  cost-sensitive                     0.671               0.802

Table 2: Final test set AP for different word discrimination approaches. The first line is a baseline using no word embeddings, but rather applying dynamic time warping (DTW) to the input MFCC features. The second and third lines are prior results using no word embeddings (but rather using DTW with learned correspondence autoencoder-based or phone posterior features, trained on larger external (in-domain) data). The remaining prior work corresponds to using cosine similarity between acoustic word embeddings.
  Method                                                    Test AP (acoustic)   Test AP (cross-view)
  MFCCs + DTW (Kamper et al., 2016)                         0.214                —
  Correspondence autoencoder + DTW (Kamper et al., 2015)    0.469                —
  Phone posteriors + DTW (Carlin et al., 2011)              0.497                —
  Siamese CNN (Kamper et al., 2016)                         0.549                —
  Siamese LSTM (Settle & Livescu, 2016)                     0.671                —
  Our multi-view LSTM, obj_0 + obj_2                        0.806                0.892

Figure 2 shows the progression of the development set AP for acoustic word discrimination over 1000 training epochs, using several of the objectives, where AP is evaluated every 5 epochs. We observe that even after 1000 epochs, the development set AP has not quite saturated, indicating that it may be possible to further improve performance.

Overall, our best-performing objective is the combined obj_0 + obj_2, and we use it for reporting final test-set results. Table 2 shows the test set AP for both the acoustic and cross-view tasks using our final model ("multi-view LSTM"). For comparison, we also include acoustic word discrimination results reported previously by Kamper et al. (2016); Settle & Livescu (2016).
Previous approaches have not addressed the problem of learning embeddings jointly with the text view, so they can not be evaluated on the cross-view task.

4.4 WORD SIMILARITY TASKS

Table 3 gives our results on the word similarity tasks, that is the rank correlation (Spearman's ρ) between embedding distances and orthographic edit distance (Levenshtein distance between character sequences). We measure this correlation for both our acoustic word embeddings and for our text embeddings. In the case of the text embeddings, we could of course directly measure the Levenshtein distance between the inputs; here we are simply measuring how much of this information the text embeddings are able to retain.

Table 3: Word similarity results using fixed-margin and cost-sensitive objectives, given as rank correlation (Spearman's ρ) between embedding distances and orthographic edit distances.
  Objective                       ρ (acoustic embedding)   ρ (text embedding)
  fixed-margin (obj_0)            0.179                    0.207
  cost-sensitive margin (obj_0)   0.240                    0.270

Interestingly, while the cost-sensitive objective did not produce substantial gains on the word discrimination tasks above, it does greatly improve the performance on this word similarity measure. This is a satisfying observation, since the cost-sensitive loss is trying to improve precisely this relationship between distances in the embedding space and the orthographic edit distance.

Although we have trained our embeddings using orthographic labels, it is also interesting to consider how closely aligned the embeddings are with the corresponding phonetic pronunciations. For comparison, the rank correlation between our acoustic embeddings and phonetic edit distances is 0.226, and for our text embeddings it is 0.241, which are relatively close to the rank correlations with orthographic edit distance. A future direction is to directly train embeddings with phonetic sequence supervision rather than orthography; this setting involves somewhat stronger supervision, but it is easy to obtain in many cases.

Another interesting point is that the performance is not a great deal better for the text embeddings than for the acoustic embeddings, even though the text embeddings have at their disposal the text input itself. We believe this has to do with the distribution of words in our data: While the data includes a large variety of words, it does not include many very similar pairs. In fact, of all possible pairs of unique training set words, fewer than 2% have an edit distance below 5 characters. Therefore, there may not be sufficient information to learn to distinguish detailed differences among character sequences, and the cost-sensitive loss ultimately does not learn much more than to separate different words. In future work it would be interesting to experiment with data sets that have a larger variety of similar words.

4.5 VISUALIZATION OF LEARNED EMBEDDINGS

Figure 3 gives a 2-dimensional t-SNE (van der Maaten & Hinton, 2008) visualization of selected acoustic and character sequences from the development set, including some that were seen in the training set and some previously unseen words. The previously seen words in this figure were selected uniformly at random among those that appear at least 15 times in the development set (the unseen words are the only six that appear at least 15 times in the development set).
This visualization demonstrates that the acoustic embeddings cluster very tightly and are very close to the text embeddings, and that unseen words cluster nearly as well as previously seen ones.

While Figure 3 shows the relationship among the multiple acoustic embeddings and the text embeddings, the words are all very different so we cannot draw conclusions about the relationships between words. Figure 4 provides another visualization, this time exploring the relationship among the text embeddings of a number of closely related words, namely all development set words ending in "-ly", "-ing", and "-tion". This visualization confirms that related words are embedded close together, with the words sharing a suffix forming fairly well-defined clusters.

5 CONCLUSION

We have presented an approach for jointly learning acoustic word embeddings and their orthographic counterparts. This multi-view approach produces improved acoustic word embedding performance over previous approaches, and also has the benefit that the same embeddings can be applied for both spoken and written query tasks. We have explored a variety of contrastive objectives: ones with a fixed margin that aim to separate same and different word pairs, as well as a cost-sensitive loss that aims to capture orthographic edit distances. While the losses generally perform similarly for word discrimination tasks, the cost-sensitive loss improves the correlation between embedding distances and orthographic distances. One interesting direction for future work is to directly use knowledge about phonetic pronunciations, in both evaluation and training. Another direction is to extend our approach to directly train on both word and non-word segments.

Figure 3: Visualization via t-SNE of acoustic word embeddings (colored markers) and corresponding character sequence embeddings (text), for a set of development set words with at least 15 acoustic tokens. Words seen in training are in lower-case; unseen words are in upper-case. [Scatter plot; words shown: something, business, program, decided, goodness, service (seen) and CAMPING, RESTAURANTS, COLORADO, ATMOSPHERE, RANGERS, MOUNTAINS (unseen).]
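Visualizations along the lines of Figures 3 and 4 can be produced with an off-the-shelf t-SNE implementation. The sketch below assumes scikit-learn and matplotlib and uses hypothetical variable names; it is not the authors' plotting code.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_embeddings_2d(acoustic_emb, text_emb, words):
    # acoustic_emb: (n_tokens, d) acoustic embeddings; text_emb: (n_words, d)
    # character-sequence embeddings; words: the n_words labels for text_emb.
    points = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(
        np.vstack([acoustic_emb, text_emb]))
    n = len(acoustic_emb)
    plt.scatter(points[:n, 0], points[:n, 1], s=10, alpha=0.5)   # acoustic tokens
    for (x, y), w in zip(points[n:], words):                     # text embeddings as labels
        plt.annotate(w, (x, y))
    plt.show()
```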
Figure 4: Visualization via t-SNE of character sequence embeddings for words with the suffixes "-ly" (blue), "-ing" (red), and "-tion" (green). [Scatter plot of all development set words with these suffixes; individual word labels omitted.]

ACKNOWLEDGMENTS

This research was supported by a Google Faculty Award and by NSF grant IIS-1321015. The opinions expressed in this work are those of the authors and do not necessarily reflect the views of the funding agency. This research used GPUs donated by NVIDIA Corporation. We thank Herman Kamper and Shane Settle for their assistance with the data and experimental setup.

REFERENCES

Xavier Anguera, Luis Javier Rodriguez-Fuentes, Igor Szöke, Andi Buzo, and Florian Metze. Query by example search on speech at MediaEval 2014. In MediaEval, 2014.

Kartik Audhkhasi, Andrew Rosenberg, Abhinav Sethy, Bhuvana Ramabhadran, and Brian Kingsbury. End-to-end ASR-free keyword search from speech. arXiv preprint arXiv:1701.04313, 2017.

Samy Bengio and Georg Heigold. Word embeddings for speech recognition. In IEEE Int. Conf. Acoustics, Speech and Sig. Proc., 2014.

Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of Machine Learning Research, 3(Feb):1137–1155, 2003.

Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Säckinger, and Roopak Shah. Signature verification using a siamese time delay neural network. In Advances in Neural Information Processing Systems (NIPS), pp. 737–744, 1993.

Michael A Carlin, Samuel Thomas, Aren Jansen, and Hynek Hermansky. Rapid evaluation of speech representations for spoken term discovery. In Proc. Interspeech, 2011.

Guoguo Chen, Carolina Parada, and Tara N Sainath. Query-by-example keyword spotting using long short-term memory networks. In Proc. ICASSP, 2015.

Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In IEEE Computer Society Conf. Computer Vision and Pattern Recognition, pp.
539–546, 2005.Yu-An Chung, Chao-Chung Wu, Chia-Hao Shen, and Hung-Yi Lee. Unsupervised learning of audiosegment representations using sequence-to-sequence recurrent neural networks. In Proc. Inter-speech , 2016.Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman.Indexing by latent semantic analysis. Journal of the American society for information science , 41(6):391, 1990.Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, andEytan Ruppin. Placing search in context: The concept revisited. In Proceedings of the 10thinternational conference on World Wide Web , 2001.Jonathan G Fiscus, Jerome Ajot, John S Garofolo, and George Doddingtion. Results of the 2006spoken term detection evaluation. In Proc. SIGIR , volume 7, pp. 51–57. Citeseer, 2007.Jort F Gemmeke, Tuomas Virtanen, and Antti Hurmalainen. Exemplar-based sparse representationsfor noise robust automatic speech recognition. IEEE Transactions on Acoustics, Speech, andLanguage Processing , 19(7):2067–2080, 2011.Sahar Ghannay, Yannick Esteve, Nathalie Camelin, and Paul Deleglise. Evaluation of acoustic wordembeddings. In Proc. ACL Workshop on Evaluating Vector-Space Representations for NLP , 2016.John J Godfrey, Edward C Holliman, and Jane McDaniel. SWITCHBOARD: Telephone speechcorpus for research and development. In IEEE Int. Conf. Acoustics, Speech and Sig. Proc. , 1992.Alex Graves, Abdel rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recur-rent neural networks. In IEEE Int. Conf. Acoustics, Speech and Sig. Proc. , 2013.Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariantmapping. In IEEE Computer Society Conf. Computer Vision and Pattern Recognition , 2006.David Harwath and James Glass. Deep multimodal semantic embeddings for speech and images. InProc. IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU) , 2015.David Harwath and James R Glass. Learning word-like units from joint audio-visual analysis. arXivpreprint arXiv:1701.07481 , 2017.10Published as a conference paper at ICLR 2017David Harwath, Antonio Torralba, and James Glass. Unsupervised learning of spoken language withvisual context. In Advances in Neural Information Processing Systems (NIPS) , 2016.Karl Moritz Hermann and Phil Blunsom. Multilingual distributed representations without wordalignment. In Int. Conf. Learning Representations , 2014. arXiv:1312.6173 [cs.CL].Felix Hill, Roi Reichart, and Anna Korhonen. SimLex-999: Evaluating semantic models with (gen-uine) similarity estimation. Computational Linguistics , 41(4), 2015.Sepp Hochreiter and J ̈urgen Schmidhuber. Long short-term memory. Neural Computation , 9(8):1735–1780, 1997.Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. Convolutional neural network architecturesfor matching natural language sentences. In Advances in Neural Information Processing Systems(NIPS) , 2014.Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum ́e III. Deep unordered com-position rivals syntactic methods for text classification. In Proc. Association for ComputationalLinguistics , 2015.Herman Kamper, Micah Elsner, Aren Jansen, and Sharon J. Goldwater. Unsupervised neural net-work based feature extraction using weak top-down constraints. In IEEE Int. Conf. Acoustics,Speech and Sig. Proc. , 2015.Herman Kamper, Weiran Wang, and Karen Livescu. Deep convolutional acoustic word embeddingsusing word-pair side information. In IEEE Int. Conf. Acoustics, Speech and Sig. Proc. 
, 2016.Diederik Kingma and Jimmy Ba. ADAM: A method for stochastic optimization. In Int. Conf.Learning Representations , 2015.Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Tor-ralba, and Sanja Fidler. Skip-thought vectors. In Advances in Neural Information ProcessingSystems (NIPS) , 2015.Keith Levin, Katharine Henry, Aren Jansen, and Karen Livescu. Fixed-dimensional acoustic embed-dings of variable-length segments in low-resource settings. In Proc. IEEE Workshop on AutomaticSpeech Recognition and Understanding (ASRU) , 2013.Keith Levin, Aren Jansen, and Benjamin Van Durme. Segmental acoustic indexing for zero resourcekeyword search. In IEEE Int. Conf. Acoustics, Speech and Sig. Proc. , 2015.Andrew L Maas, Stephen D Miller, Tyler M O’neil, Andrew Y Ng, and Patrick Nguyen. Word-levelacoustic modeling with convolutional vector regression. In Proc. ICML Workshop on Represen-tation Learning , 2012.Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed represen-tations of words and phrases and their compositionality. In Advances in Neural InformationProcessing Systems (NIPS) , 2013.Andriy Mnih and Geoffrey Hinton. Three new graphical models for statistical language modelling.InICML , 2007.Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Ng. Multimodaldeep learning. In ICML , pp. 689–696, 2011.Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for wordrepresentation. In Proc. Conference on Empirical Methods in Natural Language Processing ,2014.Shane Settle and Karen Livescu. Discriminative acoustic word embeddings: Recurrent neuralnetwork-based approaches. In Proc. IEEE Workshop on Spoken Language Technology (SLT) ,2016.11Published as a conference paper at ICLR 2017Richard Socher, Andrej Karpathy, Quoc V Le, Christopher D Manning, and Andrew Y Ng.Grounded compositional semantics for finding and describing images with sentences. Trans-actions of the Association for Computational Linguistics , 2:207–218, 2014.Kihyuk Sohn, Wenling Shang, and Honglak Lee. Improved multimodal deep learning with variationof information. In Advances in Neural Information Processing Systems (NIPS) , pp. 2141–2149,2014.Nitish Srivastava and Ruslan Salakhutdinov. Multimodal learning with deep boltzmann machines.Journal of Machine Learing Research , pp. 2949–2980, 2014.Gabriel Synnaeve, Thomas Schatz, and Emmanuel Dupoux. Phonetics embedding learning withside information. In Proc. IEEE Workshop on Spoken Language Technology (SLT) , 2014.Laurens J. P. van der Maaten and Geoffrey E. Hinton. Visualizing data using t-SNE. Journal ofMachine Learing Research , 9:2579–2605, November 2008.Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. Order-embeddings of images andlanguage. In Int. Conf. Learning Representations , 2016.Weiran Wang, Raman Arora, Karen Livescu, and Jeff Bilmes. On deep multi-view representationlearning. In ICML , pp. 1083–1092, 2015.John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. Towards universal paraphrasticsentence embeddings. In Int. Conf. Learning Representations , 2016.12Published as a conference paper at ICLR 2017A A DDITIONAL ANALYSISWe first explore the effect of network architectures for our embedding models. We learn embeddingsusing objective obj0and evaluate them on the acoustic and cross-view word discrimination tasks.The resulting average precisions on the development set are given in Table 4. 
All of the models were trained for 1000 epochs, except for the 1-layer unidirectional models which converged after 500 epochs. It is clear that bidirectional LSTMs are more successful than unidirectional LSTMs for these tasks, and two layers of LSTMs are much better than a single layer of LSTMs. We did not observe significant further improvement by using more than two layers of LSTMs. For all other experiments, we fix the architecture to 2-layer bidirectional LSTMs for each view.

Table 4: Average precision (AP) for acoustic and cross-view word discrimination tasks on the development set, using embeddings learned with objective obj_0 and different LSTM architectures.
  Architecture             Dev AP (acoustic word discrimination)   Dev AP (cross-view word discrimination)
  1-layer unidirectional   0.379                                   0.616
  1-layer bidirectional    0.466                                   0.690
  2-layer bidirectional    0.659                                   0.791

Figure 5: Precision-recall curve (left: two-layer bidirectional LSTM trained with obj_0 + obj_2 for the word discrimination task) and scatter plot of embedding distances vs. orthographic distances (right: cost-sensitive margin model for the word similarity task), for our best embedding models. [Left panel axes: recall vs. precision; right panel axes: orthographic edit distances vs. embedding cosine distances.]

In Figure 5 we also give the precision-recall curve for our best models, as well as the scatter plot of cosine distances between acoustic embeddings vs. orthographic edit distances.
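The precision-recall curve in Figure 5 can be traced by sweeping the decision threshold over the observed pairwise distances. The sketch below is illustrative only (hypothetical names, quadratic time for clarity) and is not the authors' evaluation code.

```python
import numpy as np

def precision_recall_points(distances, same_word):
    # A pair is predicted "same word" when its cosine distance is at most the
    # threshold; sweep the threshold over every observed distance value.
    distances = np.asarray(distances)
    labels = np.asarray(same_word, dtype=bool)   # assumes at least one positive pair
    precisions, recalls = [], []
    for t in np.sort(distances):
        predicted = distances <= t
        tp = np.sum(predicted & labels)
        precisions.append(tp / predicted.sum())
        recalls.append(tp / labels.sum())
    return np.array(precisions), np.array(recalls)
```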
HydYVVQVx
rJxDkvqee
ICLR.cc/2017/conference/-/paper347/official/review
{"title": "well-done domain adaptation", "rating": "6: Marginally above acceptance threshold", "review": "this proposes a multi-view learning approach for learning representations for acoustic sequences. they investigate the use of bidirectional LSTM with contrastive losses. experiments show improvement over the previous work.\n\nalthough I have no expertise in speech processing, I am in favor of accepting this paper because of following contributions:\n- investigating the use of fairly known architecture on a new domain.\n- providing novel objectives specific to the domain\n- setting up new benchmarks designed for evaluating multi-view models\n\nI hope authors open-source their implementation so that people can replicate results, compare their work, and improve on this work.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Multi-view Recurrent Neural Acoustic Word Embeddings
["Wanjia He", "Weiran Wang", "Karen Livescu"]
Recent work has begun exploring neural acoustic word embeddings–fixed dimensional vector representations of arbitrary-length speech segments corresponding to words. Such embeddings are applicable to speech retrieval and recognition tasks, where reasoning about whole words may make it possible to avoid ambiguous sub-word representations. The main idea is to map acoustic sequences to fixed-dimensional vectors such that examples of the same word are mapped to similar vectors, while different-word examples are mapped to very different vectors. In this work we take a multi-view approach to learning acoustic word embeddings, in which we jointly learn to embed acoustic sequences and their corresponding character sequences. We use deep bidirectional LSTM embedding models and multi-view contrastive losses. We study the effect of different loss variants, including fixed-margin and cost-sensitive losses. Our acoustic word embeddings improve over previous approaches for the task of word discrimination. We also present results on other tasks that are enabled by the multi-view approach, including cross-view word discrimination and word similarity.
["acoustic sequences", "examples", "acoustic word embeddings", "word discrimination", "neural acoustic word", "dimensional vector representations", "speech segments", "words"]
https://openreview.net/forum?id=rJxDkvqee
https://openreview.net/pdf?id=rJxDkvqee
https://openreview.net/forum?id=rJxDkvqee&noteId=HydYVVQVx
Published as a conference paper at ICLR 2017MULTI-VIEW RECURRENT NEURALACOUSTIC WORD EMBEDDINGSWanjia HeDepartment of Computer ScienceUniversity of ChicagoChicago, IL 60637, USAwanjia@ttic.eduWeiran Wang & Karen LivescuToyota Technological Institute at ChicagoChicago, IL 60637, USAfweiranwang,klivescu g@ttic.eduABSTRACTRecent work has begun exploring neural acoustic word embeddings—fixed-dimensional vector representations of arbitrary-length speech segments corre-sponding to words. Such embeddings are applicable to speech retrieval and recog-nition tasks, where reasoning about whole words may make it possible to avoidambiguous sub-word representations. The main idea is to map acoustic sequencesto fixed-dimensional vectors such that examples of the same word are mappedto similar vectors, while different-word examples are mapped to very differentvectors. In this work we take a multi-view approach to learning acoustic wordembeddings, in which we jointly learn to embed acoustic sequences and their cor-responding character sequences. We use deep bidirectional LSTM embeddingmodels and multi-view contrastive losses. We study the effect of different lossvariants, including fixed-margin and cost-sensitive losses. Our acoustic word em-beddings improve over previous approaches for the task of word discrimination.We also present results on other tasks that are enabled by the multi-view approach,including cross-view word discrimination and word similarity.1 I NTRODUCTIONWord embeddings—continuous-valued vector representations of words—are an almost ubiquitouscomponent of recent natural language processing (NLP) research. Word embeddings can be learnedusing spectral methods (Deerwester et al., 1990) or, more commonly in recent work, via neuralnetworks (Bengio et al., 2003; Mnih & Hinton, 2007; Mikolov et al., 2013; Pennington et al.,2014). Word embeddings can also be composed to form embeddings of phrases, sentences, ordocuments (Socher et al., 2014; Kiros et al., 2015; Wieting et al., 2016; Iyyer et al., 2015).In typical NLP applications, such embeddings are intended to represent the semantics of the cor-responding words/sequences. In contrast, embeddings that represent the way a word or sequencesounds are rarely considered. In this work we address this problem, starting with embeddings of in-dividual words. Such embeddings could be useful for tasks like spoken term detection (Fiscus et al.,2007), spoken query-by-example search (Anguera et al., 2014), or even speech recognition usinga whole-word approach (Gemmeke et al., 2011; Bengio & Heigold, 2014). In tasks that involvecomparing speech segments to each other, vector embeddings can allow more efficient and more ac-curate distance computation than sequence-based approaches such as dynamic time warping (Levinet al., 2013, 2015; Kamper et al., 2016; Settle & Livescu, 2016; Chung et al., 2016).We consider the problem of learning vector representations of acoustic sequences and orthographic(character) sequences corresponding to single words, such that the learned embeddings representthe way the word sounds. We take a multi-view approach, where we jointly learn the embeddingsfor character and acoustic sequences. We consider several contrastive losses, based on learningfrom pairs of matched acoustic-orthographic examples and randomly drawn mismatched pairs. 
Thelosses correspond to different goals for learning such embeddings; for example, we might want theembeddings of two waveforms to be close when they correspond to the same word and far when theycorrespond to different ones, or we might want the distances between embeddings to correspond tosome ground-truth orthographic edit distance.1Published as a conference paper at ICLR 2017One of the useful properties of this multi-view approach is that, unlike earlier work on acoustic wordembeddings, it produces both acoustic and orthographic embeddings that can be directly compared.This makes it possible to use the same learned embeddings for multiple single-view and cross-viewtasks. Our multi-view embeddings produce improved results over earlier work on acoustic worddiscrimination, as well as encouraging results on cross-view discrimination and word similarity.12 O UR APPROACHIn this section, we first introduce our approach for learning acoustic word embeddings in a multi-view setting, after briefly reviewing related approaches to put ours in context. We then discussthe particular neural network architecture we use, based on bidirectional long short-term memory(LSTM) networks (Hochreiter & Schmidhuber, 1997).2.1 M ULTI -VIEW LEARNING OF ACOUSTIC WORD EMBEDDINGSPrevious approaches have focused on learning acoustic word embeddings in a “single-view” setting.In the simplest approach, one uses supervision of the form “acoustic segment xis an instance ofthe word y”, and trains the embedding to be discriminative of the word identity. Formally, given adataset of paired acoustic segments and word labels f(xi;yi)gNi=1, this approach solves the follow-ing optimization:minf;hobjclassify :=1NNXi`(h(f(xi));yi); (1)where network fmaps an acoustic segment into a fixed-dimensional feature vector/embedding, hisa classifier that predicts the corresponding word label from the label set of the training data, and theloss`measures the discrepancy between the prediction and ground-truth word label (one can useany multi-class classification loss here, and a typical choice is the cross-entropy loss where hhas asoftmax top layer). The two networks fandhare trained jointly. Equivalently, one could considerthe composition h(f(x))as a classifier network, and use any intermediate layer’s activations as thefeatures. We refer to the objective in (1) as the “classifier network” objective, which has been used inseveral prior studies on acoustic word embeddings (Bengio & Heigold, 2014; Kamper et al., 2016;Settle & Livescu, 2016).This objective, however, is not ideal for learning acoustic word embeddings. This is because theset of possible word labels is huge, and we may not have enough instances of each label to traina good classifier. In downstream tasks, we may encounter acoustic segments of words that did notappear in the embedding training set, and it is not clear that the classifier-based embeddings willhave reasonable behavior on previously unseen words.An alternative approach, based on Siamese networks (Bromley et al., 1993), uses supervision of theform “segment x1is similar to segment x2, and is not similar to segment x3”, where two segmentsare considered similar if they have the same word label and dissimilar otherwise. 
Models basedon Siamese networks have been used for a variety of representation learning problems in NLP (Huet al., 2014; Wieting et al., 2016), vision (Hadsell et al., 2006), and speech (Synnaeve et al., 2014;Kamper et al., 2015) including acoustic word embeddings (Kamper et al., 2016; Settle & Livescu,2016). A typical objective in this category enforces that the distance between (x1;x3)is larger thanthe distance between (x1;x2)by some margin:minfobjsiamese :=1NNXimax0; m+disf(x1i); f(x2i)disf(x1i); f(x3i);(2)where the network fextracts the fixed-dimensional embedding, the distance function dis(;)mea-sures the distance between the two embedding vectors, and m> 0is the margin parameter. The term“Siamese” (Bromley et al., 1993; Chopra et al., 2005) refers to the fact that the triplet (x1;x2;x3)share the same embedding network f.Unlike the classification-based loss, the Siamese network loss does not enforce hard decisions onthe label of each segment. Instead it tries to learn embeddings that respect distances between word1Our tensorflow implementation is available athttps://github.com/opheadacheh/Multi-view-neural-acoustic-words-embeddings2Published as a conference paper at ICLR 2017pairs, which can be helpful for dealing with unseen words. The Siamese network approach also usesmore examples in training, as one can easily generate many more triplets than (segment, label) pairs,and it is not limited to those labels that occur a sufficient number of times in the training set.The above approaches treat the word labels as discrete classes, which ignores the similarity betweendifferent words, and does not take advantage of the more complex information contained in thecharacter sequences corresponding to word labels. The orthography naturally reflects some aspectsof similarity between the words’ pronunciations, which should also be reflected in the acousticembeddings. One way to learn features from multiple sources of complementary information isusing a multi-view representation learning setting. We take this approach, and consider the acousticsegment and the character sequence to be two different views of the pronunciation of the word.While many deep multi-view learning objectives are applicable (Ngiam et al., 2011; Srivastava &Salakhutdinov, 2014; Sohn et al., 2014; Wang et al., 2015), we consider the multi-view contrastiveloss objective of (Hermann & Blunsom, 2014), which is simple to optimize and implement andperforms well in practice. In this algorithm, we embed acoustic segments xby a network fandcharacter label sequences cby another network ginto a common space, and use weak supervi-sion of the form “for paired segment x+and its character label sequence c+, the distance betweentheir embedding is much smaller than the distance between embeddings of x+and an unmatchedcharacter label sequence c”. Formally, we optimize the following objective with such supervision:minf;gobj0:=1NNXimax0; m+disf(x+i); g(c+i)disf(x+i); g(ci); (3)where ciis a negative character label sequence of x+ito be contrasted with the positive/correctcharacter sequence c+i, andmis the margin parameter. In this paper we use the cosine distance,dis(a;b) = 1Dakak;bkbkE.2Note that in the multi-view setting, we have multiple ways of generating triplets that contain onepositive pair and one negative pair each. 
Below are the other three objectives we explore in thispaper:minf;gobj1:=1NNXimax0; m+disf(x+i); g(c+i)disg(c+i); g(ci); (4)minf;gobj2:=1NNXimax0; m+disf(x+i); g(c+i)disf(xi); g(c+i); (5)minf;gobj3:=1NNXimax0; m+disf(x+i); g(c+i)disf(x+i); f(xi): (6)xiin (5) and (6) refers to a negative acoustic feature sequence, that is one with a different labelfrom x+i. We note that obj1andobj3contain distances between same-view embeddings, and areless thoroughly explored in the literature. We will also consider combinations of obj0through obj3.Finally, thus far we have considered losses that do not explicitly take into account the degree ofdifference between the positive and negative pairs (although the learned embeddings may implicitlylearn this through the relationship between sequences in the two views). We also consider a cost-sensitive objective designed to explicitly arrange the embedding space such that word similarity isrespected. In (3), instead of a fixed margin m, we use:m(c+;c) :=mmaxmin (tmax; editdis (c+;c))tmax; (7)wheretmax>0is a threshold for edit distances (all edit distances above tmaxare considered equallybad), andmmax is the maximum margin we impose. The margin is set to mmax if the edit distancebetween two character sequences is above tmax; otherwise it scales linearly with the edit distanceeditdis (c+;c)). We use the Levenshtein distance as the edit distance. Here we explore the cost-sensitive margin with obj0, but it could in principle be used with other objectives as well.2In experiments, we use the unit-length vectorakakas the embedding. It tends to perform better than f(x)and more directly reflects the cosine similarity. This is equivalent to adding a nonlinear normalization layer ontop of f.3Published as a conference paper at ICLR 2017LSTMcellrecurrentconnectionsinputacousticfeaturesLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellstackedlayersx"xLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellf$xf%xLSTMcellLSTMcellLSTMcellLSTMcellfx=[f%xf'x]g(c)=[g%cg'c]f(x-.)g(c-.)g(c-/)outputacousticembeddingoutputcharactersembeddingLSTMcellrecurrentconnectionsinputcharactersequencesLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellc"cLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellg$cg%cowrd1000001001000001Figure 1: Illustration of our embedding architecture and contrastive multi-view approach.2.2 R ECURRENT NEURAL NETWORK ARCHITECTURESince the inputs of both views have a sequential structure, we implement both fandgwith recur-rent neural networks and in particular long-short term memory networks (LSTMs). Recurrent neu-ral networks are the state-of-the-art models for a number of speech tasks including speech recogni-tion Graves et al. (2013), and LSTM-based acoustic word embeddings have produced the best resultson one of the tasks in our experiments (Settle & Livescu, 2016).As shown in Figure 1, our fandgare produced by multi-layer (stacked) bidirectional LSTMs.The inputs can be any frame-level acoustic feature representation and vector representation of thecharacters in the orthographic input. At each layer, two LSTM cells process the input sequence fromleft to right and from right to left respectively. At intermediate layers, the outputs of the two LSTMsat each time step are concatenated to form the input sequence to the next layer. 
At the top layer, thelast time step outputs of the two LSTMs are concatenated to form a fixed-dimensional embeddingof the view, and the embeddings are then used to calculate the cosine distances in our objectives.3 R ELATED WORKWe are aware of no prior work on multi-view learning of acoustic and character-based word embed-dings. However, acoustic word embeddings learned in other ways have recently begun to be studied.Levin et al. (2013) proposed an approach for embedding an arbitrary-length segment of speech asa fixed-dimensional vector, based on representing each word as a vector of dynamic time warping(DTW) distances to a set of template words. This approach produced improved performance on aword discrimination task compared to using raw DTW distances, and was later also applied success-fully for a query-by-example task (Levin et al., 2015). One disadvantage of this approach is that,while DTW handles the issue of variable sequence lengths, it is computationally costly and involvesa number of DTW parameters that are not learned.Kamper et al. (2016) and Settle & Livescu (2016) later improved on Levin et al. ’s word discrimi-nation results using convolutional neural networks (CNNs) and recurrent neural networks (RNNs)trained with either a classification or contrastive loss. Bengio & Heigold (2014) trained convolu-tional neural network (CNN)-based acoustic word embeddings for rescoring the outputs of a speechrecognizer, using a loss combining classification and ranking criteria. Maas et al. (2012) traineda CNN to predict a semantic word embedding from an acoustic segment, and used the resultingembeddings as features in a segmental word-level speech recognizer. Harwath and Glass Harwath& Glass (2015); Harwath et al. (2016); Harwath & Glass (2017) jointly trained CNN embeddingsof images and spoken captions, and showed that word-like unit embeddings can be extracted fromthe speech model. CNNs require normalizing the duration of the input sequences, which has typ-ically been done via padding. RNNs, on the other hand, are more flexible in dealing with verydifferent-length sequences. Chen et al. (2015) used long short-term memory (LSTM) networks witha classification loss to embed acoustic words for a simple (single-query) query-by-example searchtask. Chung et al. (2016) learned acoustic word embeddings based on recurrent neural network(RNN) autoencoders, and found that they improve over DTW for a word discrimination task similarto that of Levin et al. (2013). Audhkhasi et al. (2017) learned autoencoders for acoustic and writtenwords, as well as a model for comparing the two, and applied these to a keyword search task.4Published as a conference paper at ICLR 2017Evaluation of acoustic word embeddings in downstream tasks such as speech recognition and searchcan be costly, and can obscure details of embedding models and training approaches. Most eval-uations have been based on word discrimination – the task of determining whether two speechsegments correspond to the same word or not – which can be seen as a proxy for query-by-examplesearch (Levin et al., 2013; Kamper et al., 2016; Settle & Livescu, 2016; Chung et al., 2016). Onedifference between word discrimination and search/recognition tasks is that in word discriminationthe word boundaries are given. However, prior work has been able to apply results from word dis-crimination Levin et al. (2013) to improve a query-by-example system without known word bound-aries Levin et al. 
(2015), by simply applying their embeddings to non-word segments as well.The only prior work focused on vector embeddings of character sequences explicitly aimed at repre-senting their acoustic similarity is that of Ghannay et al. (2016), who proposed evaluations based onnearest-neighbor retrieval, phonetic/orthographic similarity measures, and homophone disambigua-tion. We use related tasks here, as well as acoustic word discrimination for comparison with priorwork on acoustic embeddings.4 E XPERIMENTS AND RESULTSThe ultimate goal is to gain improvements in speech systems where word-level discrimination isneeded, such as speech recognition and query-by-example search. However, in order to focus on thecontent of the embeddings themselves and to more quickly compare a variety of models, it is desir-able to have surrogate tasks that serve as intrinsic measures of performance. Here we consider threeforms of evaluation, all based on measuring whether cosine distances between learned embeddingscorrespond well to desired properties.In the first task, acoustic word discrimination , we are given a pair of acoustic sequences andmust decide whether they correspond to the same word or to different words. This task has beenused in several prior papers on acoustic word embeddings Kamper et al. (2015, 2016); Chung et al.(2016); Settle & Livescu (2016) and is a proxy for query-by-example search. For each given spokenword pair, we calculate the cosine distance between their embeddings. If the cosine distance isbelow a threshold, we output “yes” (same word), otherwise we output “no” (different words). Theperformance measure is the average precision (AP), which is the area under the precision-recallcurve generated by varying the threshold and has a maximum value of 1.In our multi-view setup, we embed not only the acoustic words but also the character sequences.This allows us to use our embeddings also for tasks involving comparisons between written andspoken words. For example, the standard task of spoken term detection (Fiscus et al., 2007) involvessearching for examples of a given text query in spoken documents. This task is identical to query-by-example except that the query is given as text. In order to explore the potential of multi-viewembeddings for such tasks, we design another proxy task, cross-view word discrimination . Herewe are given a pair of inputs, one a written word and one an acoustic word segment, and our taskis to determine if the acoustic signal is an example of the written word. The evalution proceedsanalogously to the acoustic word discrimination task: We output “yes” if the cosine distance be-tween the embeddings of the written and spoken sequences are below some threshold, and measureperformance as the average precision (AP) over all thresholds.Finally, we also would like to obtain a more fine-grained measure of whether the learned embeddingscapture our intuitive sense of similarity between words. Being able to capture word similarity mayalso be useful in building query or recognition systems that fail gracefully and produce human-like errors. For this purpose we measure the rank correlation between embedding distances andcharacter edit distances. This is analogous to the evaluation of semantic word embeddings via therank correlation between embedding distances and human similarity judgments (Finkelstein et al.,2001; Hill et al., 2015). In our case, however, we do not use human judgments since the ground-truthedit distances themselves provide a good measure. 
We refer to this as the word similarity task,and we apply this measure to both pairs of acoustic embeddings and pairs of character sequenceembeddings. Similar measures have been proposed by Ghannay et al. (2016) to evaluate acousticword embeddings, although they considered only near neighbors of each word whereas we considerthe correlation across the full range of word pairs.5Published as a conference paper at ICLR 2017In the experiments described below, we first focus on the acoustic word discrimination task for pur-poses of initial exploration and hyperparameter search, and then largely fix the models for evaluationusing the cross-view word discrimination and word similarity measures.4.1 D ATAWe use the same experimental setup and data as in Kamper et al. (2015, 2016); Settle & Livescu(2016). The task and setup were first developed by (Carlin et al., 2011). The data is drawn fromthe Switchboard English conversational speech corpus (Godfrey et al., 1992). The spoken wordsegments range in duration from 50 to 200 frames (0.5 - 2 seconds). The train/dev/test splitscontain 9971/10966/11024 pairs of acoustic segments and character sequences, corresponding to1687/3918/3390 unique words. In computing the AP for the dev or test set, all pairs in the set areused, yielding approximately 60 million word pairs.The input to the embedding model in the acoustic view is a sequence of 39-dimensional vectors(one per frame) of standard mel frequency cepstral coefficients (MFCCs) and their first and secondderivatives. The input to the character sequence embedding model is a sequence of 26-dimensionalone-hot vectors indicating each character of the word’s orthography.4.2 M ODEL DETAILS AND HYPERPARAMETER TUNINGWe experiment with different neural network architectures for each view, varying the number ofstacked LSTM layers, the number of hidden units for each layer, and the use of single- or bidirec-tional LSTM cells. A coarse grid search shows that 2-layer bidirectional LSTMs with 512 hiddenunits per direction per layer perform well on the acoustic word discrimination task, and we keepthis structure fixed for subsequent experiments (see Appendix A for more details). We use the out-puts of the top-layer LSTMs as the learned embedding for each view, which is 1024-dimensional ifbidirectional LSTMs are used.In training, we use dropout on the inputs of the acoustic view and between stacked layers for bothviews. The architecture is illustrated in Figure 1. For each training example, our contrastive lossesrequire a corresponding negative example. We generate a negative character label sequence by uni-formly sampling a word label from the training set that is different from the positive label. Weperform a new negative label sampling at the beginning of each epoch. Similarly, negative acousticfeature sequences are uniformly sampled from all of the differently labeled acoustic feature se-quences in the training set.The network weights are initialized with values sampled uniformly from the range [0:05;0:05].We use the Adam optimizer (Kingma & Ba, 2015) for updating the weights using mini-batches of20 acoustic segments, with an initial learning rate tuned over f0:0001;0:001g. Dropout is used ateach layer, with the rate tuned over f0;0:2;0:4;0:5g, in which 0:4usually outperformed others.The margin in our basic contrastive objectives 0-3 is tuned over f0:3;0:4;0:5;0:6;0:7g, out ofwhich 0:4and0:5typically yield best results. 
For obj0with the cost-sensitive margin, we tune themaximum margin mmax overf0:5;0:6;0:7gand the threshold tmax overf9;11;13g. We traineach model for up to 1000 epochs. The model that gives the best AP on the development set is usedfor evaluation on the test set.4.3 E FFECTS OF DIFFERENT OBJECTIVESWe presented four contrastive losses (3)–(6) and potential combinations in Section 2.1. We nowexplore the effects of these different objectives on the word discrimination tasks.Table 1 shows the development set AP for acoustic and cross-view word discrimination achievedusing the various objectives. We tuned the objectives for the acoustic discrimination task, and thenused the corresponding converged models for the cross-view task. Of the simple contrastive objec-tives, obj0andobj2(which involve only cross-view distances) slightly outperform the other two onthe acoustic word discrimination task. The best-performing objective is the “symmetrized” objectiveobj0+ obj2, which significantly outperforms all individual objectives (and the combination of thefour). Finally, the cost-sensitive objective is very competitive as well, while falling slightly shortof the best performance. We note that a similar objective to our obj0+ obj2was used by Vendrovet al. (2016) for the task of caption-image retrieval, where the authors essentially use all non-paired6Published as a conference paper at ICLR 20170 200 400 600 800 1000Epochs0.00.10.20.30.40.50.60.70.8Average Precision on Devobj 0obj 2obj 0 + obj 2Figure 2: Development set AP for several objec-tives on acoustic word discrimination.Objective Dev AP Dev AP(acoustic) (cross-view)obj00.659 0.791obj10.654 0.807obj20.675 0.788obj30.640 0.782obj0+obj20.702 0.814P3i=0obji0.672 0.804cost-sensitive 0.671 0.802Table 1: Word discrimination performancewith different objectives.Method Test AP Test AP(acoustic) (cross-view)MFCCs + DTW (Kamper et al., 2016) 0.214Correspondence autoencoder + DTW (Kamper et al., 2015) 0.469Phone posteriors + DTW (Carlin et al., 2011) 0.497Siamese CNN (Kamper et al., 2016) 0.549Siamese LSTM (Settle & Livescu, 2016) 0.671Our multi-view LSTM obj0+ obj20.806 0.892Table 2: Final test set AP for different word discrimination approaches. The first line is a baselineusing no word embeddings, but rather applying dynamic time warping (DTW) to the input MFCCfeatures. The second and third lines are prior results using no word embeddings (but rather usingDTW with learned correspondence autoencoder-based or phone posterior features, trained on largerexternal (in-domain) data). The remaining prior work corresponds to using cosine similarity betweenacoustic word embeddings.examples from the other view in the minibatch as negative examples (instead of random samplingone negative example as we do) to be contrasted with one paired example.Figure 2 shows the progression of the development set AP for acoustic word discrimination over1000 training epochs, using several of the objectives, where AP is evaluated every 5epochs. Weobserve that even after 1000 epochs, the development set AP has not quite saturated, indicating thatit may be possible to further improve performance.Overall, our best-performing objective is the combined obj0+obj2, and we use it for reporting finaltest-set results. Table 2 shows the test set AP for both the acoustic and cross-view tasks using ourfinal model (“multi-view LSTM”). For comparison, we also include acoustic word discriminationresults reported previously by Kamper et al. (2016); Settle & Livescu (2016). 
Previous approacheshave not addressed the problem of learning embeddings jointly with the text view, so they can notbe evaluated on the cross-view task.4.4 W ORD SIMILARITY TASKSTable 3 gives our results on the word similarity tasks, that is the rank correlation (Spearman’s ) be-tween embedding distances and orthographic edit distance (Levenshtein distance between charactersequences). We measure this correlation for both our acoustic word embeddings and for our textembeddings. In the case of the text embeddings, we could of course directly measure the Leven-shtein distance between the inputs; here we are simply measuring how much of this information thetext embeddings are able to retain.7Published as a conference paper at ICLR 2017Objective (acoustic embedding) (text embedding)fixed-margin ( obj0) 0.179 0.207cost-sensitive margin ( obj0) 0.240 0.270Table 3: Word similarity results using fixed-margin and cost-sensitive objectives, given as rankcorrelation (Spearman’s ) between embedding distances and orthographic edit distances.Interestingly, while the cost-sensitive objective did not produce substantial gains on the word dis-crimination tasks above, it does greatly improve the performance on this word similarity measure.This is a satisfying observation, since the cost-sensitive loss is trying to improve precisely this rela-tionship between distances in the embedding space and the orthographic edit distance.Although we have trained our embeddings using orthographic labels, it is also interesting to con-sider how closely aligned the embeddings are with the corresponding phonetic pronunciations. Forcomparison, the rank correlation between our acoustic embeddings and phonetic edit distances is0:226, and for our text embeddings it is 0:241, which are relatively close to the rank correlationswith orthographic edit distance. A future direction is to directly train embeddings with phoneticsequence supervision rather than orthography; this setting involves somewhat stronger supervision,but it is easy to obtain in many cases.Another interesting point is that the performance is not a great deal better for the text embeddingsthan for the acoustic embeddings, even though the text embeddings have at their disposal the textinput itself. We believe this has to do with the distribution of words in our data: While the dataincludes a large variety of words, it does not include many very similar pairs. In fact, of all pos-sible pairs of unique training set words, fewer than 2% have an edit distance below 5 characters.Therefore, there may not be sufficient information to learn to distinguish detailed differences amongcharacter sequences, and the cost-sensitive loss ultimately does not learn much more than to separatedifferent words. In future work it would be interesting to experiment with data sets that have a largervariety of similar words.4.5 V ISUALIZATION OF LEARNED EMBEDDINGSFigure 3 gives a 2-dimensional t-SNE (van der Maaten & Hinton, 2008) visualization of selectedacoustic and character sequences from the development set, including some that were seen in thetraining set and some previously unseen words. The previously seen words in this figure wereselected uniformly at random among those that appear at least 15 times in the development set(the unseen words are the only six that appear at least 15 times in the development set). 
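The word similarity measure reported in Table 3 is the Spearman rank correlation between embedding cosine distances and orthographic (Levenshtein) edit distances. A minimal sketch of that measurement follows, assuming pre-computed embeddings; the all-pairs Python loop is intentionally naive and the names are illustrative.

import numpy as np
from scipy.stats import spearmanr

def levenshtein(a, b):
    """Standard dynamic-programming edit distance between two strings."""
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i - 1, j] + 1,
                          d[i, j - 1] + 1,
                          d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a), len(b)]

def word_similarity_rho(embeddings, words):
    """Spearman's rho between embedding cosine distances and edit distances."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    cos_dist, edit_dist = [], []
    for i in range(len(words)):
        for j in range(i + 1, len(words)):
            cos_dist.append(1.0 - float(X[i] @ X[j]))
            edit_dist.append(levenshtein(words[i], words[j]))
    rho, _ = spearmanr(cos_dist, edit_dist)
    return rho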
Thisvisualization demonstrates that the acoustic embeddings cluster very tightly and are very close tothe text embeddings, and that unseen words cluster nearly as well as previously seen ones.While Figure 3 shows the relationship among the multiple acoustic embeddings and the text em-beddings, the words are all very different so we cannot draw conclusions about the relationshipsbetween words. Figure 4 provides another visualization, this time exploring the relationship amongthe text embeddings of a number of closely related words, namely all development set words end-ing in “-ly”, “-ing”, and “-tion”. This visualization confirms that related words are embedded closetogether, with the words sharing a suffix forming fairly well-defined clusters.5 C ONCLUSIONWe have presented an approach for jointly learning acoustic word embeddings and their orthographiccounterparts. This multi-view approach produces improved acoustic word embedding performanceover previous approaches, and also has the benefit that the same embeddings can be applied for bothspoken and written query tasks. We have explored a variety of contrastive objectives: ones with afixed margin that aim to separate same and different word pairs, as well as a cost-sensitive loss thataims to capture orthographic edit distances. While the losses generally perform similarly for worddiscrimination tasks, the cost-sensitive loss improves the correlation between embedding distancesand orthographic distances. One interesting direction for future work is to directly use knowledgeabout phonetic pronunciations, in both evaluation and training. Another direction is to extend ourapproach to directly train on both word and non-word segments.8Published as a conference paper at ICLR 2017−25 −20 −15 −10 −5 0 5 10 15−25−20−15−10−505101520somethingbusinessprogramdecidedgoodness serviceCAMPING RESTAURANTSCOLORADOATMOSPHERERANGERSMOUNTAINSFigure 3: Visualization via t-SNE of acoustic word embeddings (colored markers) and correspond-ing character sequence embeddings (text), for a set of development set words with at least 15 acoustictokens. 
Words seen in training are in lower-case; unseen words are in upper-case.−4 −2 0 2 4 6 810 12 14−15−10−5051015somethingapparentlyexactlyaccidentallyliterallyinterestingprobablydirectlytraditionpersonallyeverythingkiddingeventuallyquicklycombinationironicallybasicallypinpointingtuitionnovelizationeducationstimulatingwaitinggraduationconstructionsubscriptionincrediblyceilingpoisoningcollectionperfectlyimmediatelyexasperatingunravelingproperlyprobationoccasionallyprotectionhopefullysentencingpopulationspeculationbowlingtramplingintersectionspecificallydepressingdiminishingobviouslythrivingdeductioncommunicationideallyqualificationammunitionsatisfactionexposinglightlypositionrepresentationnaturallyrenumerationexhibitiongenerallypoliticallyrelativelythoroughlynominationdemonstrationinsulationessentiallyapplicationslidingfoundationdifferentlyspendingconfidentlyinterventionrememberingparkingdistantlydraftingrebuildingreputationstencilingincludingconventionrecruitingpurposelyweddingnonfictionadministeringshining consolidationpaddlingdeliberatelyassumptiondisturbingdebunkingexaggerationfinanciallyprotestingfallinguproariouslydiscriminationconcentrationoppositionunfairlyleisurelyevidentlysittingselectionholdingassassinationsanitationultimatelytestingreceptioncompensationaboundingpassingcommercializationfrighteningoutrageouslyrapidlyexplanationhistoricallydefendingemphaticallybarkingappealingconsequentlyreliablylettingbroadcastingadditioncompetingtouchy-feelysettingregulationlegislationattractionfaithfully interchangeablylecturingpreviouslyvacationingmediationoffensivelyinterestinglyFigure 4: Visualization via t-SNE of character sequence embeddings for words with the suffixes“-ly” (blue), “-ing” (red), and “-tion” (green).ACKNOWLEDGMENTSThis research was supported by a Google Faculty Award and by NSF grant IIS-1321015. Theopinions expressed in this work are those of the authors and do not necessarily reflect the views ofthe funding agency. This research used GPUs donated by NVIDIA Corporation. We thank HermanKamper and Shane Settle for their assistance with the data and experimental setup.9Published as a conference paper at ICLR 2017REFERENCESXavier Anguera, Luis Javier Rodriguez-Fuentes, Igor Sz ̈oke, Andi Buzo, and Florian Metze. Queryby example search on speech at mediaeval 2014. In MediaEval , 2014.Kartik Audhkhasi, Andrew Rosenberg, Abhinav Sethy, Bhuvana Ramabhadran, and Brian Kings-bury. End-to-end ASR-free keyword search from speech. arXiv preprint arXiv:1701.04313 , 2017.Samy Bengio and Georg Heigold. Word embeddings for speech recognition. In IEEE Int. Conf.Acoustics, Speech and Sig. Proc. , 2014.Yoshua Bengio, R ́ejean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilisticlanguage model. Journal of Machine Learing Research , 3(Feb):1137–1155, 2003.Jane Bromley, Isabelle Guyon, Yann Lecun, Eduard S ̈ackinger, and Roopak Shah. Signature verifi-cation using a siamese time delay neural network. In Advances in Neural Information ProcessingSystems (NIPS) , pp. 737–744, 1993.Michael A Carlin, Samuel Thomas, Aren Jansen, and Hynek Hermansky. Rapid evaluation of speechrepresentations for spoken term discovery. In Proc. Interspeech , 2011.Guoguo Chen, Carolina Parada, and Tara N Sainath. Query-by-example keyword spotting usinglong short-term memory networks. In Proc. ICASSP , 2015.Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, withapplication to face verification. In IEEE Computer Society Conf. Computer Vision and PatternRecognition , pp. 
539–546, 2005.Yu-An Chung, Chao-Chung Wu, Chia-Hao Shen, and Hung-Yi Lee. Unsupervised learning of audiosegment representations using sequence-to-sequence recurrent neural networks. In Proc. Inter-speech , 2016.Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman.Indexing by latent semantic analysis. Journal of the American society for information science , 41(6):391, 1990.Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, andEytan Ruppin. Placing search in context: The concept revisited. In Proceedings of the 10thinternational conference on World Wide Web , 2001.Jonathan G Fiscus, Jerome Ajot, John S Garofolo, and George Doddingtion. Results of the 2006spoken term detection evaluation. In Proc. SIGIR , volume 7, pp. 51–57. Citeseer, 2007.Jort F Gemmeke, Tuomas Virtanen, and Antti Hurmalainen. Exemplar-based sparse representationsfor noise robust automatic speech recognition. IEEE Transactions on Acoustics, Speech, andLanguage Processing , 19(7):2067–2080, 2011.Sahar Ghannay, Yannick Esteve, Nathalie Camelin, and Paul Deleglise. Evaluation of acoustic wordembeddings. In Proc. ACL Workshop on Evaluating Vector-Space Representations for NLP , 2016.John J Godfrey, Edward C Holliman, and Jane McDaniel. SWITCHBOARD: Telephone speechcorpus for research and development. In IEEE Int. Conf. Acoustics, Speech and Sig. Proc. , 1992.Alex Graves, Abdel rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recur-rent neural networks. In IEEE Int. Conf. Acoustics, Speech and Sig. Proc. , 2013.Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariantmapping. In IEEE Computer Society Conf. Computer Vision and Pattern Recognition , 2006.David Harwath and James Glass. Deep multimodal semantic embeddings for speech and images. InProc. IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU) , 2015.David Harwath and James R Glass. Learning word-like units from joint audio-visual analysis. arXivpreprint arXiv:1701.07481 , 2017.10Published as a conference paper at ICLR 2017David Harwath, Antonio Torralba, and James Glass. Unsupervised learning of spoken language withvisual context. In Advances in Neural Information Processing Systems (NIPS) , 2016.Karl Moritz Hermann and Phil Blunsom. Multilingual distributed representations without wordalignment. In Int. Conf. Learning Representations , 2014. arXiv:1312.6173 [cs.CL].Felix Hill, Roi Reichart, and Anna Korhonen. SimLex-999: Evaluating semantic models with (gen-uine) similarity estimation. Computational Linguistics , 41(4), 2015.Sepp Hochreiter and J ̈urgen Schmidhuber. Long short-term memory. Neural Computation , 9(8):1735–1780, 1997.Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. Convolutional neural network architecturesfor matching natural language sentences. In Advances in Neural Information Processing Systems(NIPS) , 2014.Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum ́e III. Deep unordered com-position rivals syntactic methods for text classification. In Proc. Association for ComputationalLinguistics , 2015.Herman Kamper, Micah Elsner, Aren Jansen, and Sharon J. Goldwater. Unsupervised neural net-work based feature extraction using weak top-down constraints. In IEEE Int. Conf. Acoustics,Speech and Sig. Proc. , 2015.Herman Kamper, Weiran Wang, and Karen Livescu. Deep convolutional acoustic word embeddingsusing word-pair side information. In IEEE Int. Conf. Acoustics, Speech and Sig. Proc. 
, 2016.Diederik Kingma and Jimmy Ba. ADAM: A method for stochastic optimization. In Int. Conf.Learning Representations , 2015.Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Tor-ralba, and Sanja Fidler. Skip-thought vectors. In Advances in Neural Information ProcessingSystems (NIPS) , 2015.Keith Levin, Katharine Henry, Aren Jansen, and Karen Livescu. Fixed-dimensional acoustic embed-dings of variable-length segments in low-resource settings. In Proc. IEEE Workshop on AutomaticSpeech Recognition and Understanding (ASRU) , 2013.Keith Levin, Aren Jansen, and Benjamin Van Durme. Segmental acoustic indexing for zero resourcekeyword search. In IEEE Int. Conf. Acoustics, Speech and Sig. Proc. , 2015.Andrew L Maas, Stephen D Miller, Tyler M O’neil, Andrew Y Ng, and Patrick Nguyen. Word-levelacoustic modeling with convolutional vector regression. In Proc. ICML Workshop on Represen-tation Learning , 2012.Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed represen-tations of words and phrases and their compositionality. In Advances in Neural InformationProcessing Systems (NIPS) , 2013.Andriy Mnih and Geoffrey Hinton. Three new graphical models for statistical language modelling.InICML , 2007.Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Ng. Multimodaldeep learning. In ICML , pp. 689–696, 2011.Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for wordrepresentation. In Proc. Conference on Empirical Methods in Natural Language Processing ,2014.Shane Settle and Karen Livescu. Discriminative acoustic word embeddings: Recurrent neuralnetwork-based approaches. In Proc. IEEE Workshop on Spoken Language Technology (SLT) ,2016.11Published as a conference paper at ICLR 2017Richard Socher, Andrej Karpathy, Quoc V Le, Christopher D Manning, and Andrew Y Ng.Grounded compositional semantics for finding and describing images with sentences. Trans-actions of the Association for Computational Linguistics , 2:207–218, 2014.Kihyuk Sohn, Wenling Shang, and Honglak Lee. Improved multimodal deep learning with variationof information. In Advances in Neural Information Processing Systems (NIPS) , pp. 2141–2149,2014.Nitish Srivastava and Ruslan Salakhutdinov. Multimodal learning with deep boltzmann machines.Journal of Machine Learing Research , pp. 2949–2980, 2014.Gabriel Synnaeve, Thomas Schatz, and Emmanuel Dupoux. Phonetics embedding learning withside information. In Proc. IEEE Workshop on Spoken Language Technology (SLT) , 2014.Laurens J. P. van der Maaten and Geoffrey E. Hinton. Visualizing data using t-SNE. Journal ofMachine Learing Research , 9:2579–2605, November 2008.Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. Order-embeddings of images andlanguage. In Int. Conf. Learning Representations , 2016.Weiran Wang, Raman Arora, Karen Livescu, and Jeff Bilmes. On deep multi-view representationlearning. In ICML , pp. 1083–1092, 2015.John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. Towards universal paraphrasticsentence embeddings. In Int. Conf. Learning Representations , 2016.12Published as a conference paper at ICLR 2017A A DDITIONAL ANALYSISWe first explore the effect of network architectures for our embedding models. We learn embeddingsusing objective obj0and evaluate them on the acoustic and cross-view word discrimination tasks.The resulting average precisions on the development set are given in Table 4. 
All of the models were trained for 1000 epochs, except for the 1-layer unidirectional models which converged after 500 epochs. It is clear that bidirectional LSTMs are more successful than unidirectional LSTMs for these tasks, and two layers of LSTMs are much better than a single layer of LSTMs. We did not observe significant further improvement by using more than two layers of LSTMs. For all other experiments, we fix the architecture to 2-layer bidirectional LSTMs for each view.

Architecture | Dev AP (acoustic word discrimination) | Dev AP (cross-view word discrimination)
1-layer unidirectional | 0.379 | 0.616
1-layer bidirectional | 0.466 | 0.690
2-layer bidirectional | 0.659 | 0.791
Table 4: Average precision (AP) for acoustic and cross-view word discrimination tasks on the development set, using embeddings learned with objective obj0 and different LSTM architectures.

[Figure 5: Precision-recall curve (left: two-layer bidirectional LSTM trained with obj0 + obj2 for the word discrimination task) and scatter plot of embedding distances vs. orthographic distances (right: cost-sensitive margin model for the word similarity task), for our best embedding models. Axes: Recall vs. Precision (left); orthographic edit distances vs. embedding cosine distances (right).]

In Figure 5 we also give the precision-recall curve for our best models, as well as the scatter plot of cosine distances between acoustic embeddings vs. orthographic edit distances.
rJmOdpVEl
rJxDkvqee
ICLR.cc/2017/conference/-/paper347/official/review
{"title": "The paper investigates jointly trained acoustic and character level word embeddings, but only on a very small task.", "rating": "5: Marginally below acceptance threshold", "review": "Pros:\n Interesting training criterion.\nCons:\n Missing proper ASR technique based baselines.\n\nComments:\n The dataset is quite small.\n ROC curves for detection, and more measurements, e.g. EER would probably be helpful besides AP.\n More detailed analysis of the results would be necessary, e.g. precision of words seen during training compared to the detection\n performance of out-of-vocabulary words.\n It would be interesting to show scatter plots for embedding vs. orthographic distances.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Multi-view Recurrent Neural Acoustic Word Embeddings
["Wanjia He", "Weiran Wang", "Karen Livescu"]
Recent work has begun exploring neural acoustic word embeddings–fixed dimensional vector representations of arbitrary-length speech segments corresponding to words. Such embeddings are applicable to speech retrieval and recognition tasks, where reasoning about whole words may make it possible to avoid ambiguous sub-word representations. The main idea is to map acoustic sequences to fixed-dimensional vectors such that examples of the same word are mapped to similar vectors, while different-word examples are mapped to very different vectors. In this work we take a multi-view approach to learning acoustic word embeddings, in which we jointly learn to embed acoustic sequences and their corresponding character sequences. We use deep bidirectional LSTM embedding models and multi-view contrastive losses. We study the effect of different loss variants, including fixed-margin and cost-sensitive losses. Our acoustic word embeddings improve over previous approaches for the task of word discrimination. We also present results on other tasks that are enabled by the multi-view approach, including cross-view word discrimination and word similarity.
["acoustic sequences", "examples", "acoustic word embeddings", "word discrimination", "neural acoustic word", "dimensional vector representations", "speech segments", "words"]
https://openreview.net/forum?id=rJxDkvqee
https://openreview.net/pdf?id=rJxDkvqee
https://openreview.net/forum?id=rJxDkvqee&noteId=rJmOdpVEl
Published as a conference paper at ICLR 2017MULTI-VIEW RECURRENT NEURALACOUSTIC WORD EMBEDDINGSWanjia HeDepartment of Computer ScienceUniversity of ChicagoChicago, IL 60637, USAwanjia@ttic.eduWeiran Wang & Karen LivescuToyota Technological Institute at ChicagoChicago, IL 60637, USAfweiranwang,klivescu g@ttic.eduABSTRACTRecent work has begun exploring neural acoustic word embeddings—fixed-dimensional vector representations of arbitrary-length speech segments corre-sponding to words. Such embeddings are applicable to speech retrieval and recog-nition tasks, where reasoning about whole words may make it possible to avoidambiguous sub-word representations. The main idea is to map acoustic sequencesto fixed-dimensional vectors such that examples of the same word are mappedto similar vectors, while different-word examples are mapped to very differentvectors. In this work we take a multi-view approach to learning acoustic wordembeddings, in which we jointly learn to embed acoustic sequences and their cor-responding character sequences. We use deep bidirectional LSTM embeddingmodels and multi-view contrastive losses. We study the effect of different lossvariants, including fixed-margin and cost-sensitive losses. Our acoustic word em-beddings improve over previous approaches for the task of word discrimination.We also present results on other tasks that are enabled by the multi-view approach,including cross-view word discrimination and word similarity.1 I NTRODUCTIONWord embeddings—continuous-valued vector representations of words—are an almost ubiquitouscomponent of recent natural language processing (NLP) research. Word embeddings can be learnedusing spectral methods (Deerwester et al., 1990) or, more commonly in recent work, via neuralnetworks (Bengio et al., 2003; Mnih & Hinton, 2007; Mikolov et al., 2013; Pennington et al.,2014). Word embeddings can also be composed to form embeddings of phrases, sentences, ordocuments (Socher et al., 2014; Kiros et al., 2015; Wieting et al., 2016; Iyyer et al., 2015).In typical NLP applications, such embeddings are intended to represent the semantics of the cor-responding words/sequences. In contrast, embeddings that represent the way a word or sequencesounds are rarely considered. In this work we address this problem, starting with embeddings of in-dividual words. Such embeddings could be useful for tasks like spoken term detection (Fiscus et al.,2007), spoken query-by-example search (Anguera et al., 2014), or even speech recognition usinga whole-word approach (Gemmeke et al., 2011; Bengio & Heigold, 2014). In tasks that involvecomparing speech segments to each other, vector embeddings can allow more efficient and more ac-curate distance computation than sequence-based approaches such as dynamic time warping (Levinet al., 2013, 2015; Kamper et al., 2016; Settle & Livescu, 2016; Chung et al., 2016).We consider the problem of learning vector representations of acoustic sequences and orthographic(character) sequences corresponding to single words, such that the learned embeddings representthe way the word sounds. We take a multi-view approach, where we jointly learn the embeddingsfor character and acoustic sequences. We consider several contrastive losses, based on learningfrom pairs of matched acoustic-orthographic examples and randomly drawn mismatched pairs. 
Thelosses correspond to different goals for learning such embeddings; for example, we might want theembeddings of two waveforms to be close when they correspond to the same word and far when theycorrespond to different ones, or we might want the distances between embeddings to correspond tosome ground-truth orthographic edit distance.1Published as a conference paper at ICLR 2017One of the useful properties of this multi-view approach is that, unlike earlier work on acoustic wordembeddings, it produces both acoustic and orthographic embeddings that can be directly compared.This makes it possible to use the same learned embeddings for multiple single-view and cross-viewtasks. Our multi-view embeddings produce improved results over earlier work on acoustic worddiscrimination, as well as encouraging results on cross-view discrimination and word similarity.12 O UR APPROACHIn this section, we first introduce our approach for learning acoustic word embeddings in a multi-view setting, after briefly reviewing related approaches to put ours in context. We then discussthe particular neural network architecture we use, based on bidirectional long short-term memory(LSTM) networks (Hochreiter & Schmidhuber, 1997).2.1 M ULTI -VIEW LEARNING OF ACOUSTIC WORD EMBEDDINGSPrevious approaches have focused on learning acoustic word embeddings in a “single-view” setting.In the simplest approach, one uses supervision of the form “acoustic segment xis an instance ofthe word y”, and trains the embedding to be discriminative of the word identity. Formally, given adataset of paired acoustic segments and word labels f(xi;yi)gNi=1, this approach solves the follow-ing optimization:minf;hobjclassify :=1NNXi`(h(f(xi));yi); (1)where network fmaps an acoustic segment into a fixed-dimensional feature vector/embedding, hisa classifier that predicts the corresponding word label from the label set of the training data, and theloss`measures the discrepancy between the prediction and ground-truth word label (one can useany multi-class classification loss here, and a typical choice is the cross-entropy loss where hhas asoftmax top layer). The two networks fandhare trained jointly. Equivalently, one could considerthe composition h(f(x))as a classifier network, and use any intermediate layer’s activations as thefeatures. We refer to the objective in (1) as the “classifier network” objective, which has been used inseveral prior studies on acoustic word embeddings (Bengio & Heigold, 2014; Kamper et al., 2016;Settle & Livescu, 2016).This objective, however, is not ideal for learning acoustic word embeddings. This is because theset of possible word labels is huge, and we may not have enough instances of each label to traina good classifier. In downstream tasks, we may encounter acoustic segments of words that did notappear in the embedding training set, and it is not clear that the classifier-based embeddings willhave reasonable behavior on previously unseen words.An alternative approach, based on Siamese networks (Bromley et al., 1993), uses supervision of theform “segment x1is similar to segment x2, and is not similar to segment x3”, where two segmentsare considered similar if they have the same word label and dissimilar otherwise. 
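For concreteness, here is a per-example sketch of the classifier-network objective in (1), with the classifier h realized as a softmax layer and the loss as cross-entropy. W, b, and the other names are illustrative stand-ins rather than the authors' code; in practice f and h are trained jointly by backpropagation.

import numpy as np

def classifier_loss(embedding, W, b, label_index):
    """Cross-entropy term of the classifier objective in (1) for one example.

    embedding = f(x) is the fixed-dimensional acoustic embedding; the softmax
    classifier is parameterized by W (num_words x d) and bias b, and
    label_index is the index of the ground-truth word label.
    """
    logits = W @ embedding + b
    logits -= logits.max()  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[label_index]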
Models basedon Siamese networks have been used for a variety of representation learning problems in NLP (Huet al., 2014; Wieting et al., 2016), vision (Hadsell et al., 2006), and speech (Synnaeve et al., 2014;Kamper et al., 2015) including acoustic word embeddings (Kamper et al., 2016; Settle & Livescu,2016). A typical objective in this category enforces that the distance between (x1;x3)is larger thanthe distance between (x1;x2)by some margin:minfobjsiamese :=1NNXimax0; m+disf(x1i); f(x2i)disf(x1i); f(x3i);(2)where the network fextracts the fixed-dimensional embedding, the distance function dis(;)mea-sures the distance between the two embedding vectors, and m> 0is the margin parameter. The term“Siamese” (Bromley et al., 1993; Chopra et al., 2005) refers to the fact that the triplet (x1;x2;x3)share the same embedding network f.Unlike the classification-based loss, the Siamese network loss does not enforce hard decisions onthe label of each segment. Instead it tries to learn embeddings that respect distances between word1Our tensorflow implementation is available athttps://github.com/opheadacheh/Multi-view-neural-acoustic-words-embeddings2Published as a conference paper at ICLR 2017pairs, which can be helpful for dealing with unseen words. The Siamese network approach also usesmore examples in training, as one can easily generate many more triplets than (segment, label) pairs,and it is not limited to those labels that occur a sufficient number of times in the training set.The above approaches treat the word labels as discrete classes, which ignores the similarity betweendifferent words, and does not take advantage of the more complex information contained in thecharacter sequences corresponding to word labels. The orthography naturally reflects some aspectsof similarity between the words’ pronunciations, which should also be reflected in the acousticembeddings. One way to learn features from multiple sources of complementary information isusing a multi-view representation learning setting. We take this approach, and consider the acousticsegment and the character sequence to be two different views of the pronunciation of the word.While many deep multi-view learning objectives are applicable (Ngiam et al., 2011; Srivastava &Salakhutdinov, 2014; Sohn et al., 2014; Wang et al., 2015), we consider the multi-view contrastiveloss objective of (Hermann & Blunsom, 2014), which is simple to optimize and implement andperforms well in practice. In this algorithm, we embed acoustic segments xby a network fandcharacter label sequences cby another network ginto a common space, and use weak supervi-sion of the form “for paired segment x+and its character label sequence c+, the distance betweentheir embedding is much smaller than the distance between embeddings of x+and an unmatchedcharacter label sequence c”. Formally, we optimize the following objective with such supervision:minf;gobj0:=1NNXimax0; m+disf(x+i); g(c+i)disf(x+i); g(ci); (3)where ciis a negative character label sequence of x+ito be contrasted with the positive/correctcharacter sequence c+i, andmis the margin parameter. In this paper we use the cosine distance,dis(a;b) = 1Dakak;bkbkE.2Note that in the multi-view setting, we have multiple ways of generating triplets that contain onepositive pair and one negative pair each. 
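A minimal sketch of the per-example hinge term in the cross-view objective (3) follows, using cosine distance between the two views' embeddings; averaging these terms over the training pairs gives obj0. The margin value 0.4 is one of the values from the tuning range reported later in Section 4.2, and all names are illustrative; this is not the authors' TensorFlow implementation.

import numpy as np

def cos_dist(u, v):
    """Cosine distance between two (unnormalized) embedding vectors."""
    return 1.0 - float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def obj0_term(f_x_pos, g_c_pos, g_c_neg, margin=0.4):
    """Hinge term of (3): the paired acoustic/character embeddings
    f(x+), g(c+) should be closer, by at least the margin, than f(x+)
    and the embedding g(c-) of a mismatched character sequence."""
    return max(0.0, margin + cos_dist(f_x_pos, g_c_pos) - cos_dist(f_x_pos, g_c_neg))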
Below are the other three objectives we explore in thispaper:minf;gobj1:=1NNXimax0; m+disf(x+i); g(c+i)disg(c+i); g(ci); (4)minf;gobj2:=1NNXimax0; m+disf(x+i); g(c+i)disf(xi); g(c+i); (5)minf;gobj3:=1NNXimax0; m+disf(x+i); g(c+i)disf(x+i); f(xi): (6)xiin (5) and (6) refers to a negative acoustic feature sequence, that is one with a different labelfrom x+i. We note that obj1andobj3contain distances between same-view embeddings, and areless thoroughly explored in the literature. We will also consider combinations of obj0through obj3.Finally, thus far we have considered losses that do not explicitly take into account the degree ofdifference between the positive and negative pairs (although the learned embeddings may implicitlylearn this through the relationship between sequences in the two views). We also consider a cost-sensitive objective designed to explicitly arrange the embedding space such that word similarity isrespected. In (3), instead of a fixed margin m, we use:m(c+;c) :=mmaxmin (tmax; editdis (c+;c))tmax; (7)wheretmax>0is a threshold for edit distances (all edit distances above tmaxare considered equallybad), andmmax is the maximum margin we impose. The margin is set to mmax if the edit distancebetween two character sequences is above tmax; otherwise it scales linearly with the edit distanceeditdis (c+;c)). We use the Levenshtein distance as the edit distance. Here we explore the cost-sensitive margin with obj0, but it could in principle be used with other objectives as well.2In experiments, we use the unit-length vectorakakas the embedding. It tends to perform better than f(x)and more directly reflects the cosine similarity. This is equivalent to adding a nonlinear normalization layer ontop of f.3Published as a conference paper at ICLR 2017LSTMcellrecurrentconnectionsinputacousticfeaturesLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellstackedlayersx"xLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellf$xf%xLSTMcellLSTMcellLSTMcellLSTMcellfx=[f%xf'x]g(c)=[g%cg'c]f(x-.)g(c-.)g(c-/)outputacousticembeddingoutputcharactersembeddingLSTMcellrecurrentconnectionsinputcharactersequencesLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellc"cLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellLSTMcellg$cg%cowrd1000001001000001Figure 1: Illustration of our embedding architecture and contrastive multi-view approach.2.2 R ECURRENT NEURAL NETWORK ARCHITECTURESince the inputs of both views have a sequential structure, we implement both fandgwith recur-rent neural networks and in particular long-short term memory networks (LSTMs). Recurrent neu-ral networks are the state-of-the-art models for a number of speech tasks including speech recogni-tion Graves et al. (2013), and LSTM-based acoustic word embeddings have produced the best resultson one of the tasks in our experiments (Settle & Livescu, 2016).As shown in Figure 1, our fandgare produced by multi-layer (stacked) bidirectional LSTMs.The inputs can be any frame-level acoustic feature representation and vector representation of thecharacters in the orthographic input. At each layer, two LSTM cells process the input sequence fromleft to right and from right to left respectively. At intermediate layers, the outputs of the two LSTMsat each time step are concatenated to form the input sequence to the next layer. 
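Returning briefly to the cost-sensitive margin in (7): the sketch below scales the margin linearly with the Levenshtein distance between the two character sequences and saturates it at m_max. The default values are illustrative picks from the tuning ranges reported in Section 4.2, and edit_distance is an ordinary dynamic-programming Levenshtein distance, not anything specific to the authors' code.

def edit_distance(a, b):
    """Levenshtein distance via the usual two-row dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def cost_sensitive_margin(c_pos, c_neg, m_max=0.6, t_max=11):
    """Margin m(c+, c-) from (7): grows linearly with the orthographic edit
    distance and saturates at m_max once the distance reaches t_max."""
    return m_max * min(t_max, edit_distance(c_pos, c_neg)) / t_max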
At the top layer, thelast time step outputs of the two LSTMs are concatenated to form a fixed-dimensional embeddingof the view, and the embeddings are then used to calculate the cosine distances in our objectives.3 R ELATED WORKWe are aware of no prior work on multi-view learning of acoustic and character-based word embed-dings. However, acoustic word embeddings learned in other ways have recently begun to be studied.Levin et al. (2013) proposed an approach for embedding an arbitrary-length segment of speech asa fixed-dimensional vector, based on representing each word as a vector of dynamic time warping(DTW) distances to a set of template words. This approach produced improved performance on aword discrimination task compared to using raw DTW distances, and was later also applied success-fully for a query-by-example task (Levin et al., 2015). One disadvantage of this approach is that,while DTW handles the issue of variable sequence lengths, it is computationally costly and involvesa number of DTW parameters that are not learned.Kamper et al. (2016) and Settle & Livescu (2016) later improved on Levin et al. ’s word discrimi-nation results using convolutional neural networks (CNNs) and recurrent neural networks (RNNs)trained with either a classification or contrastive loss. Bengio & Heigold (2014) trained convolu-tional neural network (CNN)-based acoustic word embeddings for rescoring the outputs of a speechrecognizer, using a loss combining classification and ranking criteria. Maas et al. (2012) traineda CNN to predict a semantic word embedding from an acoustic segment, and used the resultingembeddings as features in a segmental word-level speech recognizer. Harwath and Glass Harwath& Glass (2015); Harwath et al. (2016); Harwath & Glass (2017) jointly trained CNN embeddingsof images and spoken captions, and showed that word-like unit embeddings can be extracted fromthe speech model. CNNs require normalizing the duration of the input sequences, which has typ-ically been done via padding. RNNs, on the other hand, are more flexible in dealing with verydifferent-length sequences. Chen et al. (2015) used long short-term memory (LSTM) networks witha classification loss to embed acoustic words for a simple (single-query) query-by-example searchtask. Chung et al. (2016) learned acoustic word embeddings based on recurrent neural network(RNN) autoencoders, and found that they improve over DTW for a word discrimination task similarto that of Levin et al. (2013). Audhkhasi et al. (2017) learned autoencoders for acoustic and writtenwords, as well as a model for comparing the two, and applied these to a keyword search task.4Published as a conference paper at ICLR 2017Evaluation of acoustic word embeddings in downstream tasks such as speech recognition and searchcan be costly, and can obscure details of embedding models and training approaches. Most eval-uations have been based on word discrimination – the task of determining whether two speechsegments correspond to the same word or not – which can be seen as a proxy for query-by-examplesearch (Levin et al., 2013; Kamper et al., 2016; Settle & Livescu, 2016; Chung et al., 2016). Onedifference between word discrimination and search/recognition tasks is that in word discriminationthe word boundaries are given. However, prior work has been able to apply results from word dis-crimination Levin et al. (2013) to improve a query-by-example system without known word bound-aries Levin et al. 
(2015), by simply applying their embeddings to non-word segments as well.The only prior work focused on vector embeddings of character sequences explicitly aimed at repre-senting their acoustic similarity is that of Ghannay et al. (2016), who proposed evaluations based onnearest-neighbor retrieval, phonetic/orthographic similarity measures, and homophone disambigua-tion. We use related tasks here, as well as acoustic word discrimination for comparison with priorwork on acoustic embeddings.4 E XPERIMENTS AND RESULTSThe ultimate goal is to gain improvements in speech systems where word-level discrimination isneeded, such as speech recognition and query-by-example search. However, in order to focus on thecontent of the embeddings themselves and to more quickly compare a variety of models, it is desir-able to have surrogate tasks that serve as intrinsic measures of performance. Here we consider threeforms of evaluation, all based on measuring whether cosine distances between learned embeddingscorrespond well to desired properties.In the first task, acoustic word discrimination , we are given a pair of acoustic sequences andmust decide whether they correspond to the same word or to different words. This task has beenused in several prior papers on acoustic word embeddings Kamper et al. (2015, 2016); Chung et al.(2016); Settle & Livescu (2016) and is a proxy for query-by-example search. For each given spokenword pair, we calculate the cosine distance between their embeddings. If the cosine distance isbelow a threshold, we output “yes” (same word), otherwise we output “no” (different words). Theperformance measure is the average precision (AP), which is the area under the precision-recallcurve generated by varying the threshold and has a maximum value of 1.In our multi-view setup, we embed not only the acoustic words but also the character sequences.This allows us to use our embeddings also for tasks involving comparisons between written andspoken words. For example, the standard task of spoken term detection (Fiscus et al., 2007) involvessearching for examples of a given text query in spoken documents. This task is identical to query-by-example except that the query is given as text. In order to explore the potential of multi-viewembeddings for such tasks, we design another proxy task, cross-view word discrimination . Herewe are given a pair of inputs, one a written word and one an acoustic word segment, and our taskis to determine if the acoustic signal is an example of the written word. The evalution proceedsanalogously to the acoustic word discrimination task: We output “yes” if the cosine distance be-tween the embeddings of the written and spoken sequences are below some threshold, and measureperformance as the average precision (AP) over all thresholds.Finally, we also would like to obtain a more fine-grained measure of whether the learned embeddingscapture our intuitive sense of similarity between words. Being able to capture word similarity mayalso be useful in building query or recognition systems that fail gracefully and produce human-like errors. For this purpose we measure the rank correlation between embedding distances andcharacter edit distances. This is analogous to the evaluation of semantic word embeddings via therank correlation between embedding distances and human similarity judgments (Finkelstein et al.,2001; Hill et al., 2015). In our case, however, we do not use human judgments since the ground-truthedit distances themselves provide a good measure. 
We refer to this as the word similarity task,and we apply this measure to both pairs of acoustic embeddings and pairs of character sequenceembeddings. Similar measures have been proposed by Ghannay et al. (2016) to evaluate acousticword embeddings, although they considered only near neighbors of each word whereas we considerthe correlation across the full range of word pairs.5Published as a conference paper at ICLR 2017In the experiments described below, we first focus on the acoustic word discrimination task for pur-poses of initial exploration and hyperparameter search, and then largely fix the models for evaluationusing the cross-view word discrimination and word similarity measures.4.1 D ATAWe use the same experimental setup and data as in Kamper et al. (2015, 2016); Settle & Livescu(2016). The task and setup were first developed by (Carlin et al., 2011). The data is drawn fromthe Switchboard English conversational speech corpus (Godfrey et al., 1992). The spoken wordsegments range in duration from 50 to 200 frames (0.5 - 2 seconds). The train/dev/test splitscontain 9971/10966/11024 pairs of acoustic segments and character sequences, corresponding to1687/3918/3390 unique words. In computing the AP for the dev or test set, all pairs in the set areused, yielding approximately 60 million word pairs.The input to the embedding model in the acoustic view is a sequence of 39-dimensional vectors(one per frame) of standard mel frequency cepstral coefficients (MFCCs) and their first and secondderivatives. The input to the character sequence embedding model is a sequence of 26-dimensionalone-hot vectors indicating each character of the word’s orthography.4.2 M ODEL DETAILS AND HYPERPARAMETER TUNINGWe experiment with different neural network architectures for each view, varying the number ofstacked LSTM layers, the number of hidden units for each layer, and the use of single- or bidirec-tional LSTM cells. A coarse grid search shows that 2-layer bidirectional LSTMs with 512 hiddenunits per direction per layer perform well on the acoustic word discrimination task, and we keepthis structure fixed for subsequent experiments (see Appendix A for more details). We use the out-puts of the top-layer LSTMs as the learned embedding for each view, which is 1024-dimensional ifbidirectional LSTMs are used.In training, we use dropout on the inputs of the acoustic view and between stacked layers for bothviews. The architecture is illustrated in Figure 1. For each training example, our contrastive lossesrequire a corresponding negative example. We generate a negative character label sequence by uni-formly sampling a word label from the training set that is different from the positive label. Weperform a new negative label sampling at the beginning of each epoch. Similarly, negative acousticfeature sequences are uniformly sampled from all of the differently labeled acoustic feature se-quences in the training set.The network weights are initialized with values sampled uniformly from the range [0:05;0:05].We use the Adam optimizer (Kingma & Ba, 2015) for updating the weights using mini-batches of20 acoustic segments, with an initial learning rate tuned over f0:0001;0:001g. Dropout is used ateach layer, with the rate tuned over f0;0:2;0:4;0:5g, in which 0:4usually outperformed others.The margin in our basic contrastive objectives 0-3 is tuned over f0:3;0:4;0:5;0:6;0:7g, out ofwhich 0:4and0:5typically yield best results. 
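For concreteness, the character-view input described in Section 4.1 above, a sequence of 26-dimensional one-hot vectors with one vector per letter, can be produced along the lines of the sketch below. Skipping non-alphabetic symbols is a simplification of ours for illustration, not necessarily what the authors did.

import numpy as np

def one_hot_chars(word):
    """Encode a word's orthography as a (length, 26) array of one-hot rows.

    Assumes the word contains at least one letter a-z; other symbols are
    simply skipped in this sketch.
    """
    rows = []
    for ch in word.lower():
        idx = ord(ch) - ord('a')
        if 0 <= idx < 26:
            v = np.zeros(26, dtype=np.float32)
            v[idx] = 1.0
            rows.append(v)
    return np.stack(rows)

# Example: one_hot_chars("word") has shape (4, 26).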
For obj0with the cost-sensitive margin, we tune themaximum margin mmax overf0:5;0:6;0:7gand the threshold tmax overf9;11;13g. We traineach model for up to 1000 epochs. The model that gives the best AP on the development set is usedfor evaluation on the test set.4.3 E FFECTS OF DIFFERENT OBJECTIVESWe presented four contrastive losses (3)–(6) and potential combinations in Section 2.1. We nowexplore the effects of these different objectives on the word discrimination tasks.Table 1 shows the development set AP for acoustic and cross-view word discrimination achievedusing the various objectives. We tuned the objectives for the acoustic discrimination task, and thenused the corresponding converged models for the cross-view task. Of the simple contrastive objec-tives, obj0andobj2(which involve only cross-view distances) slightly outperform the other two onthe acoustic word discrimination task. The best-performing objective is the “symmetrized” objectiveobj0+ obj2, which significantly outperforms all individual objectives (and the combination of thefour). Finally, the cost-sensitive objective is very competitive as well, while falling slightly shortof the best performance. We note that a similar objective to our obj0+ obj2was used by Vendrovet al. (2016) for the task of caption-image retrieval, where the authors essentially use all non-paired6Published as a conference paper at ICLR 20170 200 400 600 800 1000Epochs0.00.10.20.30.40.50.60.70.8Average Precision on Devobj 0obj 2obj 0 + obj 2Figure 2: Development set AP for several objec-tives on acoustic word discrimination.Objective Dev AP Dev AP(acoustic) (cross-view)obj00.659 0.791obj10.654 0.807obj20.675 0.788obj30.640 0.782obj0+obj20.702 0.814P3i=0obji0.672 0.804cost-sensitive 0.671 0.802Table 1: Word discrimination performancewith different objectives.Method Test AP Test AP(acoustic) (cross-view)MFCCs + DTW (Kamper et al., 2016) 0.214Correspondence autoencoder + DTW (Kamper et al., 2015) 0.469Phone posteriors + DTW (Carlin et al., 2011) 0.497Siamese CNN (Kamper et al., 2016) 0.549Siamese LSTM (Settle & Livescu, 2016) 0.671Our multi-view LSTM obj0+ obj20.806 0.892Table 2: Final test set AP for different word discrimination approaches. The first line is a baselineusing no word embeddings, but rather applying dynamic time warping (DTW) to the input MFCCfeatures. The second and third lines are prior results using no word embeddings (but rather usingDTW with learned correspondence autoencoder-based or phone posterior features, trained on largerexternal (in-domain) data). The remaining prior work corresponds to using cosine similarity betweenacoustic word embeddings.examples from the other view in the minibatch as negative examples (instead of random samplingone negative example as we do) to be contrasted with one paired example.Figure 2 shows the progression of the development set AP for acoustic word discrimination over1000 training epochs, using several of the objectives, where AP is evaluated every 5epochs. Weobserve that even after 1000 epochs, the development set AP has not quite saturated, indicating thatit may be possible to further improve performance.Overall, our best-performing objective is the combined obj0+obj2, and we use it for reporting finaltest-set results. Table 2 shows the test set AP for both the acoustic and cross-view tasks using ourfinal model (“multi-view LSTM”). For comparison, we also include acoustic word discriminationresults reported previously by Kamper et al. (2016); Settle & Livescu (2016). 
Previous approacheshave not addressed the problem of learning embeddings jointly with the text view, so they can notbe evaluated on the cross-view task.4.4 W ORD SIMILARITY TASKSTable 3 gives our results on the word similarity tasks, that is the rank correlation (Spearman’s ) be-tween embedding distances and orthographic edit distance (Levenshtein distance between charactersequences). We measure this correlation for both our acoustic word embeddings and for our textembeddings. In the case of the text embeddings, we could of course directly measure the Leven-shtein distance between the inputs; here we are simply measuring how much of this information thetext embeddings are able to retain.7Published as a conference paper at ICLR 2017Objective (acoustic embedding) (text embedding)fixed-margin ( obj0) 0.179 0.207cost-sensitive margin ( obj0) 0.240 0.270Table 3: Word similarity results using fixed-margin and cost-sensitive objectives, given as rankcorrelation (Spearman’s ) between embedding distances and orthographic edit distances.Interestingly, while the cost-sensitive objective did not produce substantial gains on the word dis-crimination tasks above, it does greatly improve the performance on this word similarity measure.This is a satisfying observation, since the cost-sensitive loss is trying to improve precisely this rela-tionship between distances in the embedding space and the orthographic edit distance.Although we have trained our embeddings using orthographic labels, it is also interesting to con-sider how closely aligned the embeddings are with the corresponding phonetic pronunciations. Forcomparison, the rank correlation between our acoustic embeddings and phonetic edit distances is0:226, and for our text embeddings it is 0:241, which are relatively close to the rank correlationswith orthographic edit distance. A future direction is to directly train embeddings with phoneticsequence supervision rather than orthography; this setting involves somewhat stronger supervision,but it is easy to obtain in many cases.Another interesting point is that the performance is not a great deal better for the text embeddingsthan for the acoustic embeddings, even though the text embeddings have at their disposal the textinput itself. We believe this has to do with the distribution of words in our data: While the dataincludes a large variety of words, it does not include many very similar pairs. In fact, of all pos-sible pairs of unique training set words, fewer than 2% have an edit distance below 5 characters.Therefore, there may not be sufficient information to learn to distinguish detailed differences amongcharacter sequences, and the cost-sensitive loss ultimately does not learn much more than to separatedifferent words. In future work it would be interesting to experiment with data sets that have a largervariety of similar words.4.5 V ISUALIZATION OF LEARNED EMBEDDINGSFigure 3 gives a 2-dimensional t-SNE (van der Maaten & Hinton, 2008) visualization of selectedacoustic and character sequences from the development set, including some that were seen in thetraining set and some previously unseen words. The previously seen words in this figure wereselected uniformly at random among those that appear at least 15 times in the development set(the unseen words are the only six that appear at least 15 times in the development set). 
Thisvisualization demonstrates that the acoustic embeddings cluster very tightly and are very close tothe text embeddings, and that unseen words cluster nearly as well as previously seen ones.While Figure 3 shows the relationship among the multiple acoustic embeddings and the text em-beddings, the words are all very different so we cannot draw conclusions about the relationshipsbetween words. Figure 4 provides another visualization, this time exploring the relationship amongthe text embeddings of a number of closely related words, namely all development set words end-ing in “-ly”, “-ing”, and “-tion”. This visualization confirms that related words are embedded closetogether, with the words sharing a suffix forming fairly well-defined clusters.5 C ONCLUSIONWe have presented an approach for jointly learning acoustic word embeddings and their orthographiccounterparts. This multi-view approach produces improved acoustic word embedding performanceover previous approaches, and also has the benefit that the same embeddings can be applied for bothspoken and written query tasks. We have explored a variety of contrastive objectives: ones with afixed margin that aim to separate same and different word pairs, as well as a cost-sensitive loss thataims to capture orthographic edit distances. While the losses generally perform similarly for worddiscrimination tasks, the cost-sensitive loss improves the correlation between embedding distancesand orthographic distances. One interesting direction for future work is to directly use knowledgeabout phonetic pronunciations, in both evaluation and training. Another direction is to extend ourapproach to directly train on both word and non-word segments.8Published as a conference paper at ICLR 2017−25 −20 −15 −10 −5 0 5 10 15−25−20−15−10−505101520somethingbusinessprogramdecidedgoodness serviceCAMPING RESTAURANTSCOLORADOATMOSPHERERANGERSMOUNTAINSFigure 3: Visualization via t-SNE of acoustic word embeddings (colored markers) and correspond-ing character sequence embeddings (text), for a set of development set words with at least 15 acoustictokens. 
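Visualizations of this kind can be reproduced along the following lines, embedding the acoustic and character-sequence vectors jointly with t-SNE; scikit-learn's TSNE and matplotlib are assumed, and all names and settings below are illustrative rather than the authors' plotting code.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_embeddings(acoustic_embs, text_embs, words):
    """2-D t-SNE of acoustic embeddings (markers) and character-sequence
    embeddings (text labels). acoustic_embs is an (n, d) array, text_embs an
    (m, d) array, and words the m corresponding word strings."""
    joint = np.vstack([acoustic_embs, text_embs])
    coords = TSNE(n_components=2, random_state=0).fit_transform(joint)
    acoustic_2d = coords[: len(acoustic_embs)]
    text_2d = coords[len(acoustic_embs):]
    plt.scatter(acoustic_2d[:, 0], acoustic_2d[:, 1], s=10)
    for (x, y), w in zip(text_2d, words):
        plt.annotate(w, (x, y))
    plt.show()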
Words seen in training are in lower-case; unseen words are in upper-case.−4 −2 0 2 4 6 810 12 14−15−10−5051015somethingapparentlyexactlyaccidentallyliterallyinterestingprobablydirectlytraditionpersonallyeverythingkiddingeventuallyquicklycombinationironicallybasicallypinpointingtuitionnovelizationeducationstimulatingwaitinggraduationconstructionsubscriptionincrediblyceilingpoisoningcollectionperfectlyimmediatelyexasperatingunravelingproperlyprobationoccasionallyprotectionhopefullysentencingpopulationspeculationbowlingtramplingintersectionspecificallydepressingdiminishingobviouslythrivingdeductioncommunicationideallyqualificationammunitionsatisfactionexposinglightlypositionrepresentationnaturallyrenumerationexhibitiongenerallypoliticallyrelativelythoroughlynominationdemonstrationinsulationessentiallyapplicationslidingfoundationdifferentlyspendingconfidentlyinterventionrememberingparkingdistantlydraftingrebuildingreputationstencilingincludingconventionrecruitingpurposelyweddingnonfictionadministeringshining consolidationpaddlingdeliberatelyassumptiondisturbingdebunkingexaggerationfinanciallyprotestingfallinguproariouslydiscriminationconcentrationoppositionunfairlyleisurelyevidentlysittingselectionholdingassassinationsanitationultimatelytestingreceptioncompensationaboundingpassingcommercializationfrighteningoutrageouslyrapidlyexplanationhistoricallydefendingemphaticallybarkingappealingconsequentlyreliablylettingbroadcastingadditioncompetingtouchy-feelysettingregulationlegislationattractionfaithfully interchangeablylecturingpreviouslyvacationingmediationoffensivelyinterestinglyFigure 4: Visualization via t-SNE of character sequence embeddings for words with the suffixes“-ly” (blue), “-ing” (red), and “-tion” (green).ACKNOWLEDGMENTSThis research was supported by a Google Faculty Award and by NSF grant IIS-1321015. Theopinions expressed in this work are those of the authors and do not necessarily reflect the views ofthe funding agency. This research used GPUs donated by NVIDIA Corporation. We thank HermanKamper and Shane Settle for their assistance with the data and experimental setup.9Published as a conference paper at ICLR 2017REFERENCESXavier Anguera, Luis Javier Rodriguez-Fuentes, Igor Sz ̈oke, Andi Buzo, and Florian Metze. Queryby example search on speech at mediaeval 2014. In MediaEval , 2014.Kartik Audhkhasi, Andrew Rosenberg, Abhinav Sethy, Bhuvana Ramabhadran, and Brian Kings-bury. End-to-end ASR-free keyword search from speech. arXiv preprint arXiv:1701.04313 , 2017.Samy Bengio and Georg Heigold. Word embeddings for speech recognition. In IEEE Int. Conf.Acoustics, Speech and Sig. Proc. , 2014.Yoshua Bengio, R ́ejean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilisticlanguage model. Journal of Machine Learing Research , 3(Feb):1137–1155, 2003.Jane Bromley, Isabelle Guyon, Yann Lecun, Eduard S ̈ackinger, and Roopak Shah. Signature verifi-cation using a siamese time delay neural network. In Advances in Neural Information ProcessingSystems (NIPS) , pp. 737–744, 1993.Michael A Carlin, Samuel Thomas, Aren Jansen, and Hynek Hermansky. Rapid evaluation of speechrepresentations for spoken term discovery. In Proc. Interspeech , 2011.Guoguo Chen, Carolina Parada, and Tara N Sainath. Query-by-example keyword spotting usinglong short-term memory networks. In Proc. ICASSP , 2015.Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, withapplication to face verification. In IEEE Computer Society Conf. Computer Vision and PatternRecognition , pp. 
539–546, 2005.Yu-An Chung, Chao-Chung Wu, Chia-Hao Shen, and Hung-Yi Lee. Unsupervised learning of audiosegment representations using sequence-to-sequence recurrent neural networks. In Proc. Inter-speech , 2016.Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman.Indexing by latent semantic analysis. Journal of the American society for information science , 41(6):391, 1990.Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, andEytan Ruppin. Placing search in context: The concept revisited. In Proceedings of the 10thinternational conference on World Wide Web , 2001.Jonathan G Fiscus, Jerome Ajot, John S Garofolo, and George Doddingtion. Results of the 2006spoken term detection evaluation. In Proc. SIGIR , volume 7, pp. 51–57. Citeseer, 2007.Jort F Gemmeke, Tuomas Virtanen, and Antti Hurmalainen. Exemplar-based sparse representationsfor noise robust automatic speech recognition. IEEE Transactions on Acoustics, Speech, andLanguage Processing , 19(7):2067–2080, 2011.Sahar Ghannay, Yannick Esteve, Nathalie Camelin, and Paul Deleglise. Evaluation of acoustic wordembeddings. In Proc. ACL Workshop on Evaluating Vector-Space Representations for NLP , 2016.John J Godfrey, Edward C Holliman, and Jane McDaniel. SWITCHBOARD: Telephone speechcorpus for research and development. In IEEE Int. Conf. Acoustics, Speech and Sig. Proc. , 1992.Alex Graves, Abdel rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recur-rent neural networks. In IEEE Int. Conf. Acoustics, Speech and Sig. Proc. , 2013.Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariantmapping. In IEEE Computer Society Conf. Computer Vision and Pattern Recognition , 2006.David Harwath and James Glass. Deep multimodal semantic embeddings for speech and images. InProc. IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU) , 2015.David Harwath and James R Glass. Learning word-like units from joint audio-visual analysis. arXivpreprint arXiv:1701.07481 , 2017.10Published as a conference paper at ICLR 2017David Harwath, Antonio Torralba, and James Glass. Unsupervised learning of spoken language withvisual context. In Advances in Neural Information Processing Systems (NIPS) , 2016.Karl Moritz Hermann and Phil Blunsom. Multilingual distributed representations without wordalignment. In Int. Conf. Learning Representations , 2014. arXiv:1312.6173 [cs.CL].Felix Hill, Roi Reichart, and Anna Korhonen. SimLex-999: Evaluating semantic models with (gen-uine) similarity estimation. Computational Linguistics , 41(4), 2015.Sepp Hochreiter and J ̈urgen Schmidhuber. Long short-term memory. Neural Computation , 9(8):1735–1780, 1997.Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. Convolutional neural network architecturesfor matching natural language sentences. In Advances in Neural Information Processing Systems(NIPS) , 2014.Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum ́e III. Deep unordered com-position rivals syntactic methods for text classification. In Proc. Association for ComputationalLinguistics , 2015.Herman Kamper, Micah Elsner, Aren Jansen, and Sharon J. Goldwater. Unsupervised neural net-work based feature extraction using weak top-down constraints. In IEEE Int. Conf. Acoustics,Speech and Sig. Proc. , 2015.Herman Kamper, Weiran Wang, and Karen Livescu. Deep convolutional acoustic word embeddingsusing word-pair side information. In IEEE Int. Conf. Acoustics, Speech and Sig. Proc. 
A ADDITIONAL ANALYSIS
We first explore the effect of network architectures for our embedding models. We learn embeddings using objective obj0 and evaluate them on the acoustic and cross-view word discrimination tasks. The resulting average precisions on the development set are given in Table 4.
All of the models were trained for 1000 epochs, except for the 1-layer unidirectional models which converged after 500 epochs. It is clear that bidirectional LSTMs are more successful than unidirectional LSTMs for these tasks, and two layers of LSTMs are much better than a single layer of LSTMs. We did not observe significant further improvement by using more than two layers of LSTMs. For all other experiments, we fix the architecture to 2-layer bidirectional LSTMs for each view.

Architecture             Dev AP (acoustic word discrimination)   Dev AP (cross-view word discrimination)
1-layer unidirectional   0.379                                   0.616
1-layer bidirectional    0.466                                   0.690
2-layer bidirectional    0.659                                   0.791

Table 4: Average precision (AP) for acoustic and cross-view word discrimination tasks on the development set, using embeddings learned with objective obj0 and different LSTM architectures.

Figure 5: Precision-recall curve (left: two-layer bidirectional LSTM trained with obj0 + obj2 for the word discrimination task) and scatter plot of embedding distances vs. orthographic distances (right: cost-sensitive margin model for the word similarity task), for our best embedding models.

In Figure 5 we also give the precision-recall curve for our best models, as well as the scatter plot of cosine distances between acoustic embeddings vs. orthographic edit distances.
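As a rough illustration of the architecture fixed above, the following is a minimal sketch of a two-layer bidirectional LSTM encoder that maps a variable-length sequence of acoustic feature frames to a fixed-dimensional embedding. The feature dimensionality, hidden size, embedding size, and the choice of using the final LSTM states as the embedding are illustrative assumptions, not the paper's exact configuration.

import tensorflow as tf

def acoustic_view_encoder(num_features=39, hidden_units=256, embedding_dim=512):
    # Variable-length sequence of acoustic feature frames; all sizes here are assumed.
    frames = tf.keras.Input(shape=(None, num_features))
    # Two stacked bidirectional LSTM layers, matching the 2-layer bidirectional choice above.
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(hidden_units, return_sequences=True))(frames)
    x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(hidden_units))(x)
    # Project the final states to a fixed-dimensional embedding used for word discrimination.
    embedding = tf.keras.layers.Dense(embedding_dim)(x)
    return tf.keras.Model(frames, embedding)

encoder = acoustic_view_encoder()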
S1CHraWNg
rJg_1L5gg
ICLR.cc/2017/conference/-/paper278/official/review
{"title": "", "rating": "5: Marginally below acceptance threshold", "review": "This paper presents a thorough analysis of different methods to do curriculum learning. The major issue I have with it is that the dataset used seems very specific and does not necessarily justified, as mentioned by AnonReviewer3. It would have been great to see experiments on more standard tasks. Also, I really can't understand how the performance of FFNN models can be so good, please elaborate on this (see last comment).\nHowever, the paper is well written, the comparisons of the described methods are interesting and would probably apply to some other datasets as well.\n\nThe paper is way too long (18 pages!). Please reduce it or move some of the results to an appendix section.\n\nThe method described is extremely similar to the one described in Reinforcement learning neural turing machines (Zaremba et al., 2016, https://arxiv.org/pdf/1505.00521v3.pdf) where the authors progressively increase the length of training examples until the performance exceeds a given threshold. Maybe you should mention it.\n\nCould you explain very briefly in the paper what \"4-connected\" and \"8-connected\" mean, for people not familiar with these terms?\n\nI agree that having gold pen stroke sequences would be nice and probably very good features to have for image classification. But how accurate are the constructed ones? Typically, the example given in figure 1 does not represent the way people write a \"3\". I'm just concerned about the validity of the proposed dataset and what these sequences really represent (although I agree that it can still be relevant as a sequence learning dataset, even if it does not reflect the way people write).\n\nIn figure 5, for the blue curve, I was expecting to see an increase of the error when new data are added to the set, but there doesn't seem to be much correlation between these two phenomenons. Can you explain why? Also, could you explain the important error rate increase at about 7e+07 steps for the regular sequence learning?\n\nThe method used to test the H1 hypothesis is interesting, but did you try something even simpler like not using batch (ie batch size of 1 sequence)? This would alleviate this \"different number of points by batch\" effect and the results would probably very different than in figure 5.\n\nThe performance of the FFNN models seem too good compared to the RNN ones. How is this possible? RNN models should perform at least as well. Even the \"Incremental sequence learning\" RNN barely beats its FFNN equivalent. Do the \"dx\" and \"dy\" values always take values in [-1, 0, 1]? If so, the number of possible mappings is very small (from [-1, 0, 1] to [-1, 0, 1]), how could a mapping between two successive points be so accurate without looking at the history? Please clarify on this.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Incremental Sequence Learning
["Edwin D. de Jong"]
Deep learning research over the past years has shown that by increasing the scope or difficulty of the learning problem over time, increasingly complex learning problems can be addressed. We study incremental learning in the context of sequence learning, using generative RNNs in the form of multi-layer recurrent Mixture Density Networks. While the potential of incremental or curriculum learning to enhance learning is known, indiscriminate application of the principle does not necessarily lead to improvement, and it is essential therefore to know which forms of incremental or curriculum learning have a positive effect. This research contributes to that aim by comparing three instantiations of incremental or curriculum learning. We introduce Incremental Sequence Learning, a simple incremental approach to sequence learning. Incremental Sequence Learning starts out by using only the first few steps of each sequence as training data. Each time a performance criterion has been reached, the length of the parts of the sequences used for training is increased. We introduce and make available a novel sequence learning task and data set: predicting and classifying MNIST pen stroke sequences. We find that Incremental Sequence Learning greatly speeds up sequence learning and reaches the best test performance level of regular sequence learning 20 times faster, reduces the test error by 74%, and in general performs more robustly; it displays lower variance and achieves sustained progress after all three comparison methods have stopped improving. The other instantiations of curriculum learning do not result in any noticeable improvement. A trained sequence prediction model is also used in transfer learning to the task of sequence classification, where it is found that transfer learning realizes improved classification performance compared to methods that learn to classify from scratch.
["Deep learning", "Supervised Learning"]
https://openreview.net/forum?id=rJg_1L5gg
https://openreview.net/pdf?id=rJg_1L5gg
https://openreview.net/forum?id=rJg_1L5gg&noteId=S1CHraWNg
Under review as a conference paper at ICLR 2017INCREMENTAL SEQUENCE LEARNINGEdwin D. de JongDepartment of Information and Computing SciencesUtrecht Universityhttps://edwin-de-jong.github.io/ABSTRACTDeep learning research over the past years has shown that by increasing the scopeor difficulty of the learning problem over time, increasingly complex learningproblems can be addressed. We study incremental learning in the context ofsequence learning, using generative RNNs in the form of multi-layer recurrentMixture Density Networks. While the potential of incremental or curriculumlearning to enhance learning is known, indiscriminate application of the principledoes not necessarily lead to improvement, and it is essential therefore to knowwhich forms of incremental or curriculum learning have a positive effect. Thisresearch contributes to that aim by comparing three instantiations of incremental orcurriculum learning.We introduce Incremental Sequence Learning , a simple incremental approach tosequence learning.Incremental Sequence Learning starts out by using only the first few steps of eachsequence as training data. Each time a performance criterion has been reached, thelength of the parts of the sequences used for training is increased.To evaluate Incremental Sequence Learning and comparison methods, we introduceand make available a novel sequence learning task and data set: predicting andclassifying MNIST pen stroke sequences, where the familiar handwritten digitimages have been transformed to pen stroke sequences representing the skeletonsof the digits.We find that Incremental Sequence Learning greatly speeds up sequence learningand reaches the best test performance level of regular sequence learning 20 timesfaster, reduces the test error by 74%, and in general performs more robustly; itdisplays lower variance and achieves sustained progress after all three comparisonmethods have stopped improving. The two other instantiations of curriculumlearning do not result in any noticeable improvement. A trained sequence predictionmodel is also used in transfer learning to the task of sequence classification, whereit is found that transfer learning realizes improved classification performancecompared to methods that learn to classify from scratch.1 I NTRODUCTION1.1 I NCREMENTAL LEARNING , TRANSFER LEARNING ,AND REPRESENTATION LEARNINGDeep learning research over the past years has shown that by increasing the scope or difficulty of thelearning problem over time, increasingly complex learning problems can be addressed. This principlehas been described as Incremental learning by Elman (1991), and has a long history. Schlimmer andGranger (1986) described a pseudo-connectionist distributed concept learning approach involvingincremental learning. Elman (1991) defined Incremental Learning as an approach where the trainingdata is not presented all at once, but incrementally; see also Elman (1993). Giraud-Carrier (2000)defines Incremental Learning as follows: “A learning task is incremental if the training examples usedto solve it become available over time, usually one at a time.“ Bengio et al. (2009) introduced theframework of Curriculum Learning. The central idea behind this approach is that a learning system isguided by presenting gradually more and/or more complex concepts. 
A formal definition is providedspecifying that the distribution over examples converges monotonically towards the target training1Under review as a conference paper at ICLR 2017distribution, and that the entropy of the distributions visited over time, and hence the diversity oftraining examples, increases.An extension of the notion of incremental learning is to also let the learning task vary over time.This approach, known as Transfer Learning or Inductive Transfer, was first described by Pratt (1993).Thrun (1996) reported improved generalization performance for lifelong learning and describedrepresentation learning , whereas Caruana (1997) considered a Multitask learning setup wheretasks are learned in parallel while using a shared representation. In coevolutionary algorithms, thecoevolution of representations with solutions that employ them, see e.g. Moriarty (1997); de Jongand Oates (2002), provides another approach to representation learning. Representation learning canbe seen as a special form of transfer learning, where one goal is to learn adequate representations,and the other goal, addressed in parallel or sequentially, is to use these representations to address thelearning problem.Several of the recent successes of deep learning can be attributed to representation learning andincremental learning. Bengio et al. (2013) provide a review and insightful discussion of representationlearning. Parisotto et al. (2015) report experiments with transfer learning across Atari 2600 arcadegames where up to 5 million frames of training time in each game are saved. More recently, successfultransfer of robot learning from the virtual to the real world was achieved using transfer learning, seeRusu et al. (2016). And at the annual ImageNet Large-Scale Visual Recognition Challenge (ILSVRC),the depth of networks has steadily increased over the years, so far leading up to a network of 152layers for the winning entry in the ILSVRC 2015 classification task; see He et al. (2015).1.2 S EQUENCE LEARNINGWe study incremental learning in the context of sequence learning . The aim in sequence learningis to predict, given a step of the sequence, what the next step will be. By iteratively feeding thepredicted output back into the network as the next input, the network can be used to produce acomplete sequences of variable length. For a discussion of variants of sequence learning problems,see Sun and Giles (2001); a more recent treatment covering recurrent neural networks as used here isprovided by Lipton (2015).An interesting challenge in sequence learning is that for most sequence learning problems of interest,the next step in a sequence does not follow unambiguously from the previous step. If this werethe case, i.e. if the underlying process generating the sequences satisfies the Markov property, thelearning problem would be reduced to learning a mapping from each step to the next. Instead, stepsin the sequence may depend on some or all of the preceding steps in the sequence. Therefore, a mainchallenge faced by a sequence learning model is to capture relevant information from the part ofthe sequence seen so far. 
This ability to capture relevant information about future sequences it mayreceive must be developed during training; the network must learn the ability to build up internalrepresentations which encode relevant aspects of the sequence that is received.1.3 I NCREMENTAL SEQUENCE LEARNINGThe dependency on the partial sequence received so far provides a special opportunity for incrementallearning that is specific to sequence learning. Whereas the examples in a supervised learning problembear no known relation to each other, the steps in a sequence have a very specific relation; later stepsin the sequence can only be learned well once the network has learned to develop the appropriateinternal state summarizing the part of the sequence seen so far. This observation leads to the idea thatsequence learning may be expedited by learning to predict the first few steps in each sequence firstand, once reasonable performance has been achieved and (hence) a suitable internal representation ofthe initial part of the sequences has been developed, gradually increasing the length of the partialsequences used for training.Aprefix of a sequence is a consecutive subsequence (a substring) of the sequence starting fromthe first element; e.g. the prefix S3of a sequence Sconsists of the first 3 steps of S. We defineIncremental Sequence Learning as an approach to sequence learning whereby learning starts out byusing only a short prefix of each sequence for training, and where the length of the prefixes used fortraining is gradually increased, up to the point where the complete sequences are used. The structureof sequence learning problems suggests that adequate modeling of the preceding part of the sequence2Under review as a conference paper at ICLR 2017is a requirement for learning later parts of the sequence; Incremental Sequence Learning draws theconsequence of this by learning to predict the earlier parts of the sequences first.1.4 R ELATED WORKIn presenting the framework of Curriculum Learning, Bengio et al. (2009) provide an examplewithin the domain of sequence learning, more specifically concerning language modeling. There, thevocabulary used for training on word sequences is gradually increased, i.e. the subset of sequencesused for training is gradually increased; this is analogous to one of the comparison methods usedhere. Another specialization of Curriculum Learning to the context of sequence learning describedby Bengio et al. (2015) addresses the discrepancy between training , where the true previous stepis presented as input, and inference , where the previous output from the network is used as input;with scheduled sampling , the probability of using the network output as input is adapted to graduallyincrease over time. Zaremba and Sutskever (2014) apply curriculum learning in a sequence-to-sequence learning context where a neural network learns to predict the outcome of Python programs.The generation of programs forming the training data is parameterized by two factors that control thecomplexity of the programs: the number of digits of the numbers used in the programs and the degreeof nesting. While a number of different instantiations of incremental or curriculum learning havebeen described in the context of sequence learning, no clear guidance is available on which forms areeffective. 
The particular form explored here of learning to predict the earlier parts of sequences firstis straightforward, it makes use of the particular structure of sequence learning problems, and it iseasy to implement; yet it has received very limited attention so far.2 MNIST H ANDWRITTEN DIGITS AS PENSTROKE SEQUENCES2.1 M OTIVATION FOR REPRESENTING DIGITS AS PEN STROKE SEQUENCESThe classification of MNIST digit images, see LeCun and Cortes (2010), is one example of a task onwhich the success of deep learning has been demonstrated convincingly; a test error rate of 0.23% wasobtained by Ciresan et al. (2012) using Multi-column Deep Neural Networks. To obtain a sequencelearning data set for evaluating Incremental Sequence Learning, we created a variant of the familiarMNIST handwritten digit data set provided by LeCun and Cortes (2010) where each digit image istransformed into a sequence of pen strokes that could have generated the digit.One motivation for representing digits as strokes is the notion that when humans try to discern digitsor letters that are difficult to read, it appears natural to trace the line so as to reconstruct what paththe author’s pen may have taken. Indeed, Hinton and Nair (2005) note that the idea that patterns canbe recognized by figuring out how they were generated was already introduced in the 1950’s, anddescribe a generative model for handwritten digits that uses two pairs of opposing springs whosestiffnesses are controlled by a motor program.Pen stroke sequences also form a natural and efficient representation for digits; handwriting constitutesa canonical manifestation of the manifold hypothesis, according to which “real-world data presentedin high dimensional spaces are expected to concentrate in the vicinity of a manifold Mof muchlower dimensionality dM, embedded in high dimensional input space Rdx”; see Bengio et al. (2013).Specifically: (i) the vast majority of the pixels are white, (ii) almost all digit images consist of a singleconnected set of pixels, and (iii) the shapes mostly consist of smooth curved lines. This suggests thatcollections of pen strokes form a natural representation for the purpose of recognizing digits.The relevance of the manifold hypothesis can also be appreciated by considering the space of all 2-D28x28 binary pixel images; when sampling uniformly from this space, one is likely to only encounterimages resembling TV noise, and the chances of observing any of the 70000 MNIST digit imagesis astronomically small. By contrast, a randomly generated pen stroke sequence is not unlikely toresemble a part of a digit, such as a short straight or curved line segment. This increased alignment ofthe digit data with its representation in the form of pen stroke sequences implies that the amount ofcomputation required to address the learning problem can potentially be vastly reduced.3Under review as a conference paper at ICLR 20172.2 C ONSTRUCTION OF THE PEN STROKE SEQUENCE DATA SETThe MNIST handwritten digit data set consists of 60000 training images and 10000 test images, eachforming 28 x 28 bit map images of written numerical digits from 0 to 9. The digits are transformedinto one or more pen strokes, each consisting of a sequence of pen offset pairs (dx;dy ). To extractthe pen stroke sequences, the following steps are performed:1.Incremental thesholding. 
Starting from the original MNIST grayscale image, the following characteristics are measured:
- The number of nonzero pixels
- The number of connected components, for both the 4-connected and 8-connected variants.
Starting from a thresholding level of zero, the thresholding level is increased stepwise, until either (A) the number of 4-connected or 8-connected components changes, (B) the number of remaining pixels drops below 50% of the original number, or (C) the thresholding level reaches a preselected maximum level (250). When any of these conditions occurs, the previous level (i.e. the highest thresholding level for which none of these conditions occurred) is selected.
2. A common method for image thinning, described by Zhang and Suen (1984), is applied.
3. After the thresholding and thinning steps, the result is a skeleton of the original digit image that mostly consists of single-pixel-width lines.
4. Finding a pen stroke sequence that could have produced the digit skeleton can be viewed as a Traveling Salesman Problem where, starting from the origin, all points of the digit skeleton are visited. Each point is represented by the pen offset (dx, dy) from the previous to the current point. For any transition to a non-neighboring pixel (based on 8-connected distance), an extra step is inserted with (dx, dy) = (0, 0) and with eos = 1 (end-of-stroke), to indicate that the current stroke has ended and the pen is to be lifted off the paper. At the end of each sequence, a final step with values (0, 0, 1, 1) is appended. The fourth value represents eod, end-of-digit. This final tuple of the sequence marks that both the current stroke and the current sequence have ended, and forms a signal that the next input presented to the network will belong to another digit.
Figure 1: The original image (top left), thresholded image, thinned image, and actual extracted pen stroke image.
Figure 2: Example of a pen stroke image.

dx  dy  eos  eod
 6   4   0    0
 1  -1   0    0
 1   0   0    0
 1   0   0    0
 1   1   0    0
 0   1   0    0
-1   1   0    0
 1   1   0    0
 1   1   0    0
 0   1   0    0
 0   1   0    0
-1   1   0    0
-1   1   0    0
-1   0   0    0
-1   0   0    0
-1  -1   0    0
 0   0   1    1

Table 1: Corresponding sequence. The origin is at the top left, and the positive vertical direction is downward. From the origin to the first point, the first offset is 6 steps to the right and 4 down: (6, 4). Then to the second point: 1 to the right and 1 up, (1, -1); etc.
It is important to note that the thinning operation discards pixels and therefore information; this implies that the sequence learning problem constructed here should be viewed as a new learning problem, i.e. performance on this new task cannot be directly compared to results on the original MNIST classification task. While for many images the thinned skeleton is an adequate representation that retains the original shape, in other cases relevant information is lost as part of the thinning process.
Figure 3: Distribution of sequence lengths. The average sequence length is approximately 40 steps.
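As a rough sketch of the traversal in step 4 above, the following illustrates how a thinned skeleton could be turned into a sequence of (dx, dy, eos, eod) tuples. The greedy nearest-point heuristic and the function name are illustrative simplifications, not the exact procedure used to construct the released data set.

import numpy as np

def skeleton_to_sequence(skeleton):
    # skeleton: 2-D binary array holding the thinned digit image.
    # Returns a list of (dx, dy, eos, eod) tuples; the pen starts at the origin (top left).
    ys, xs = np.nonzero(skeleton)
    remaining = set(zip(xs.tolist(), ys.tolist()))
    sequence = []
    prev = (0, 0)
    while remaining:
        # Greedily visit the closest unvisited skeleton point next.
        nxt = min(remaining, key=lambda p: (p[0] - prev[0]) ** 2 + (p[1] - prev[1]) ** 2)
        remaining.remove(nxt)
        dx, dy = nxt[0] - prev[0], nxt[1] - prev[1]
        if sequence and max(abs(dx), abs(dy)) > 1:
            # Jump to a non-neighbouring pixel: mark the end of the current stroke first.
            sequence.append((0, 0, 1, 0))
        sequence.append((dx, dy, 0, 0))
        prev = nxt
    sequence.append((0, 0, 1, 1))  # final end-of-stroke / end-of-digit step
    return sequence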
3 NETWORK ARCHITECTURE
We adopt the approach to generative neural networks described by Graves (2013), which makes use of mixture density networks, introduced by Bishop (1994). One sequence corresponds to one complete image of a digit skeleton, represented as a sequence of ⟨dx, dy, eos, eod⟩ tuples, and may contain one or more strokes; see the previous section.
The network has four input units, corresponding to these four input variables. To produce the input for the network, the (dx, dy) pairs are scaled to yield two real-valued input variables dx and dy. The variables indicating the end-of-stroke (EOS) and end-of-digit (EOD) are binary inputs. Two hidden LSTM layers, see Hochreiter and Schmidhuber (1997), of 200 units each are used.
Figure 4: Network architecture; see text.
The input units receive one step of a sequence at a time, starting with the first step. The goal for the output units is to predict the immediate next step in the sequence, but rather than trying to directly predict dx and dy, the output units represent a mixture of bivariate Gaussians. The output layer consists of the end-of-stroke signal (EOS), and a set of means μ_i, standard deviations σ_i, correlations ρ_i, and mixture weights π_i for each of the M mixture components, where the number of mixture components M = 17 was found empirically to yield good results and is used in the experiments presented here. Additionally, a binary indicator signaling the end of digit (EOD) is used, to mark the end of each sequence. In addition to these output elements for predicting the pen stroke sequences, 10 binary class variable outputs are added, representing the 10 digit classes. This facilitates switching the task from sequence prediction to sequence classification, as will be discussed later; the output of these units is ignored in the sequence prediction experiments. The number of output units depends on the number of mixture components used, and equals 6M + 2 + 10 = 114.
For regularization, we found in early experiments that using the maximum weight as a regularization term produced better results than using the more common L-2 regularization. This approach can be viewed as L-∞-norm regularization, and has been used previously in the context of regularization, see e.g. Schmidt et al. (2008).
The definition of the sequence prediction loss L_P follows Graves (2013), with the difference that terms for the eod and for the L-∞ loss are included:

L(x) = \sum_{t=1}^{T} \left[ -\log\left( \sum_{j} \pi_t^{j}\, \mathcal{N}\left(x_{t+1} \mid \mu_t^{j}, \sigma_t^{j}, \rho_t^{j}\right) \right) - \begin{cases} \log \mathit{eos}_t & \text{if } (x_{t+1})_3 = 1 \\ \log\left(1 - \mathit{eos}_t\right) & \text{otherwise} \end{cases} - \begin{cases} \log \mathit{eod}_t & \text{if } (x_{t+1})_4 = 1 \\ \log\left(1 - \mathit{eod}_t\right) & \text{otherwise} \end{cases} \right] + \lambda \lVert w \rVert_{\infty}

4 INCREMENTAL SEQUENCE LEARNING AND COMPARISON METHODS
Below we describe Incremental Sequence Learning and three comparison methods, where two of the comparison methods are other instantiations of curriculum learning, and the third comparison is regular sequence learning without a curriculum learning aspect.
Forpredicting the 17thstep for example, the available input consist of the previous 16 steps,and the network must learn to construct a compact representation of the preceding steps thathave been seen. More specifically, it must be able to distinguish between subspaces of thesequence space that correspond to different distributions for the next step in the sequence.The number of possible contexts grows exponentially with the position in the sequence, andthe task of summarizing the preceding sequence therefore potentially becomes more difficultas a function of the position within the sequence. The problem of learning to predict stepslater on in the sequence is therefore potentially much harder than learning to predict theearlier steps. In Incremental Sequence Learning therefore, the length of sequences presentedto the network is increased as learning progresses.Increasing training set sizeBengio et al. (2009) describe an application of curriculum learning to sequence learning,where the task is to predict the best word which can follow a given context of words in acorrect English sentence. The curriculum strategy used there is to grow the vocabulary size.Transferring this to the context of pen stroke sequence generation, the most straightforwardtranslation is to use subsets of the training data that grow in size, where the order of examplesthat are added to the training set is random.Increasing number of classesThe network is first presented with sequences from only one digit class; e.g. all zeros . Thenumber of classes is increased until all 10 digit classes are represented in the training data.All three curriculum learning methods employ a threshold criterion based on the training RMSE;once a specified level of the RMSE has been reached, the set of training examples (determined by thenumber of sequence steps used, the number of sequences used, or the number of digits) is increased.We note that many possible variants of this simple adaptive scheme are possible, some of which mayprovide improvements of the results.5 E XPERIMENTAL SETTINGSIn this section, we describe the experimental setup in detail.The configuration of the baseline method, regular sequence learning, is as follows. The number ofmixture components M= 17, two hidden layers of size 200 are used. A batch size of 50 sequencesper batch is used in these first experiments. The learning rate is = 0:0025 , with a decay rate of0.99995 per epoch. The order of training sequences (not steps within the sequences) is randomized.The weight of the regularization component = 0:25. In these first experiments, a subset of 10 000training sequences and 5 000 test sequences is used. The error measure in these figures is the RMSEof the pen offsets (unscaled) predicted by the network given the previous pen movement.The RMSE is calculated based on the difference between the predicted and actual (dx;dy )pairs,scaled back to their original range of pixel units, so as to obtain an interpretable error; the eosandeodcomponents of the error, which do form part of the loss, are not used in this error measure. For themethod where the sequence length is varied, the number of individual points (input-target pairs) thatmust be processed per sequence varies over the course of a run. 
The number of sequences processed (or collections thereof such as batches or epochs) is therefore no longer an adequate measure of computational expense; performance is therefore reported as a function of the number of points processed.
Details per method:
Incremental Sequence Learning: The initial sequence length is 2, meaning that the first two points of each sequence are used, i.e. after feeding the first point as input, the second point is to be predicted. Once the training RMSE drops below the threshold value of 4, the length is doubled, up to the point where it reaches the maximum sequence length.
Increasing training set size: The initial training set size is 10. Each time the RMSE threshold of 4 is reached, this amount is doubled, up to the point where the complete set of training sequences is used.
Increasing number of digit classes: The initial number of classes is 1, meaning that only sequences representing the first digit (zero) are used. Each time the RMSE threshold of 4 is reached, this amount is doubled, up to the point where all 10 digit classes are used.
6 EXPERIMENTAL RESULTS
6.1 SEQUENCE PREDICTION: COMPARISON OF THE METHODS
Figure 5 shows a comparison of the results of the four methods. The baseline method (in red) does not use curriculum learning, and is presented with the entire training set from the start. Incremental Sequence Learning (in green) performs markedly better than all comparison methods. It reaches the best test performance of the baseline methods twenty times faster; see the horizontal dotted black line. Moreover, Incremental Sequence Learning greatly improves generalization; on this subset of the data, the average test performance over 10 runs reaches 1.5 for Incremental Sequence Learning vs 3.9 for regular sequence learning, representing a reduction of the error of 74%.
Figure 5: Comparison of the test error of the four methods, averaged over ten runs. The dotted lines indicate, at each point in time, which fraction of the training data has been made available at that point for the method of the corresponding color.
We furthermore note that the variance of the test error is substantially lower than for each of the other methods, as seen in the performance graphs; and where the three comparison methods reach their best test error just before 4×10^6 processed sequence steps and then begin to deteriorate, the test error for incremental sequence learning continues to steadily decrease over the course of the run.

Method                            Test set error
Regular sequence learning         7.82
Incremental sequence learning     2.06
Incremental number of classes     7.64
Incremental number of sequences   6.27

Table 2: Best value for the average over 10 runs of the test set error obtained by each of the methods in Experiment 1. Incremental Sequence Learning achieves a reduction of 74% compared to regular sequence learning.
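For concreteness, the adaptive schedule shared by the curriculum variants described above can be sketched as follows. This is a minimal sketch: the two callables and the iteration budget are hypothetical placeholders, not the released TensorFlow implementation.

def incremental_prefix_schedule(train_step, train_rmse, max_len,
                                rmse_threshold=4.0, initial_prefix=2,
                                num_iterations=100000):
    # train_step(prefix_len) is assumed to train on one batch restricted to the
    # first prefix_len steps of each sequence; train_rmse() is assumed to return
    # the current training RMSE. Both are caller-supplied placeholders.
    prefix_len = initial_prefix
    for _ in range(num_iterations):
        train_step(prefix_len)
        # Double the prefix length each time the RMSE threshold (4) is reached.
        if train_rmse() < rmse_threshold and prefix_len < max_len:
            prefix_len = min(2 * prefix_len, max_len)
    return prefix_len

The same loop covers the two other curriculum variants if prefix_len is reinterpreted as the number of training sequences or the number of digit classes.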
The two other curriculum methods do not provide any speedup or advantage compared to the baseline method, and in fact result in a higher test error; indiscriminate application of the curriculum learning principle apparently does not guarantee improved results and it is important therefore to discover which forms of curriculum learning can confer an advantage.
To explain the dramatic improvement achieved by Incremental Sequence Learning, we consider two possible hypotheses:
H1: The number of sequences per batch is fixed (50), but the number of sequence steps or points varies, and is initially much smaller (2) for Incremental Sequence Learning. Thus, when measured in terms of the number of points that are being processed, the batch size for Incremental Sequence Learning is initially much smaller than for the remaining methods, and it increases adaptively over time. Hypothesis H1 therefore is that (A) the smaller batch size improves performance, see Keskar et al. (2016) for earlier findings in this direction, and/or (B) the adaptive batch size aspect has a positive effect on performance.
H2: Effectively learning later parts of the sequence requires an adequate internal representation of the preceding part of the sequence, which must be learned first; this formed the motivation for the Incremental Sequence Learning method.
To test the first hypothesis, H1, we design a second experiment where the batch size is no longer defined in terms of the number of sequences, but in terms of the number of points or sequence steps, where the number of points is chosen such that the expected total number of points for the baseline method remains the same. Thus, whereas a batch for regular sequence learning contains 50 sequences of length 40 on average yielding 2000 points, Incremental Sequence Learning will start out with batches containing 1000 sequences of 2 points each, yielding the same total number of points.
Figure 6 shows the results. This change reduces the speedup during the earlier part of the runs, and thus partially explains the improvements observed with Incremental Sequence Learning. However, part of the speedup is still present, and moreover the three other observed improvements remain:
- Incremental Sequence Learning still features strongly improved generalization performance
- Incremental Sequence Learning still has a much lower variance of the test error
- Incremental Sequence Learning still continues improving at the point where the test performance of all other methods starts deteriorating
In summary, the adaptive and initially smaller batch size of Incremental Sequence Learning explains part of the observed improvements, but not all. We therefore test to what extent hypothesis H2 plays a role. To see whether the ability to first learn a suitable representation based on the earlier parts of the sequences plays a role, we compare the situation where this effect is ruled out. A straightforward way to achieve this is to use Feed-Forward Neural Networks (FFNNs); whereas Recurrent Neural Networks (RNNs) are able to learn such a representation by learning to build up relevant internal state, FFNNs lack this ability. Therefore if any advantage of Incremental Sequence Learning is seen when using FFNNs, it cannot be due to hypothesis H2. Conversely, if using FFNNs removes the advantage, the advantage must have been due to the difference between FFNNs and RNNs, which exactly corresponds to the ability to build up an informative internal representation, i.e. H2. Since we want to explain the remaining part of the effect, we also use a batch size based on the number of points, as in Experiment 2.
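The comparison between the recurrent model and its feed-forward counterpart can be sketched roughly as follows; the layer sizes follow Section 3, while the activation function and other details are assumptions made for illustration.

import tensorflow as tf

def make_model(recurrent=True, hidden_units=200, num_outputs=114):
    # One (dx, dy, eos, eod) tuple per time step.
    steps = tf.keras.Input(shape=(None, 4))
    if recurrent:
        # RNN variant: two LSTM layers that can accumulate internal state about
        # the preceding part of the sequence (hypothesis H2).
        x = tf.keras.layers.LSTM(hidden_units, return_sequences=True)(steps)
        x = tf.keras.layers.LSTM(hidden_units, return_sequences=True)(x)
    else:
        # FFNN ablation: same sizes, but every step is mapped independently,
        # so no information about earlier steps can be carried forward.
        x = tf.keras.layers.TimeDistributed(
            tf.keras.layers.Dense(hidden_units, activation="tanh"))(steps)
        x = tf.keras.layers.TimeDistributed(
            tf.keras.layers.Dense(hidden_units, activation="tanh"))(x)
    # Mixture parameters, eos/eod indicators, and the 10 class outputs.
    outputs = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(num_outputs))(x)
    return tf.keras.Model(steps, outputs)

rnn_model = make_model(recurrent=True)
ffnn_model = make_model(recurrent=False)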
Figure 6: Comparison of the test error of the four methods, averaged over ten runs.
Figure 7 shows the results. As the figure shows, when using FFNNs, the advantage of Incremental Sequence Learning is entirely lost. This provides a clear demonstration that both of the hypotheses H1 and H2 play a role. Together the two hypotheses explain the total effect of the difference, suggesting that the proposed hypotheses are also the only explanatory factors that play a role.
It is interesting to compare the performance of the RNN models and their FFNN variants, by comparing the results of Experiments 2 and 3. From this comparison, it is seen that for Incremental Sequence Learning, the RNN variant achieves improved performance compared to the FFNN variant, as would be expected, since an FFNN cannot make use of any knowledge of the preceding part of the sequence and is thus limited to learning a general mapping between two subsequent pen offset pairs (dx_k, dy_k) and (dx_{k+1}, dy_{k+1}). However, it is the only method of the four to do so; for all three other methods, around the point where test performance for the RNN variants starts to deteriorate (after around 4×10^6 processed sequence steps), FFNN performance continues to improve and surpasses that of the RNN variants. This suggests that Incremental Sequence Learning is the only method that is able to utilize information about the preceding part of the sequence, and thereby surpass FFNN performance.
In terms of absolute performance, a strong further improvement can be obtained by using the entire training set, as will be seen in the next section. These results suggest that learning the earlier parts of the sequence first can be instrumental in sequence learning.
6.2 LOSS AS A FUNCTION OF SEQUENCE POSITION
To further analyze why variation of the sequence length has a particularly strong effect on sequence learning, we evaluate how the relative difficulty of learning a sequence step relates to the position within the sequence. To do so, we measure the average loss contribution of the points or steps within a sequence as a function of their position within the sequence, as obtained with a learning method that learns entire sequences (no incremental learning), averaged over the first hundred epochs of training. Figure 8 shows the results.
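A rough sketch of this per-position bookkeeping is given below; the data layout and the function name are illustrative assumptions rather than the released implementation.

import numpy as np

def mean_loss_by_position(per_step_losses, max_position=60):
    # per_step_losses: a list with one 1-D array per training sequence, holding the
    # loss of each step at its position within that sequence.
    totals = np.zeros(max_position)
    counts = np.zeros(max_position)
    for losses in per_step_losses:
        n = min(len(losses), max_position)
        totals[:n] += losses[:n]
        counts[:n] += 1
    # Average loss per sequence position, guarding against positions never observed.
    return totals / np.maximum(counts, 1)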
Figure 7: Comparison of the test error of the four methods, averaged over ten runs.
Figure 8: The figure shows the average loss contribution of the points or steps within a sequence as a function of their position within the sequence (see text). The first steps are fundamentally unpredictable. Once some context has been received, the loss for the next steps steeply drops. Later on in the sequence however, the loss increases strongly. This effect may be explained by the fact that the number of possible preceding contexts increases exponentially, thus posing stronger requirements on the learning system for steps later on in the sequence, and/or by the point that later parts of the sequences can only be learned adequately once earlier parts have been learned first, as later steps can depend on any of the earlier steps.
The first steps are fundamentally unpredictable as the network cannot know which example it will receive next; accordingly, at the start of the sequence, the error is high, as the method cannot know in advance what the shape or digit class of the new sequence will be. Once the first steps of the sequence have been received and the context increasingly narrows down the possibilities, the loss for the prediction of the next steps steeply drops. Subsequently however, as the position in the sequence advances, the loss increases strongly, and exceeds the initial uncertainty of the first steps. This effect may be explained by the fact that the number of possible preceding contexts increases exponentially, thus posing stronger requirements on the learning system for steps later on in the sequence.
6.3 RESULTS ON THE FULL MNIST PEN STROKE SEQUENCE DATA SET
The results reported so far were based on a subset of 10000 training sequences and 5000 test sequences, in order to complete a sufficient number of runs for each of the experiments within a reasonable amount of time. Given the positive results obtained with Incremental Sequence Learning, we now train this method on the full MNIST Pen Stroke Sequence Data Set, consisting of 60000 training sequences and 10000 test sequences (Experiment 4). In these experiments, a batch size of 500 sequences instead of 50 is used.
Figure 9 shows the results. Compared to the performance of the above experiments, a strong improvement is obtained by training on this larger set of examples; whereas the best test error in the results above was slightly above 1.5, the test performance for this experiment drops below one; a test error of 0.972 on the full test data set is obtained.
A striking finding is that while initially the test error is much larger than the train error, the test error continues to improve for a long time, and approaches the training error very closely; in other words, no overtraining is observed even for relatively long runs where the training performance appears to be nearly converged.
6.4 TRANSFER LEARNING
The first task considered here was to perform sequence learning: predicting step t+1 of a sequence given step t. To adequately perform this task, the network must learn to detect which digit it is being fed; the initial part of a sequence representing a 2 or 3 for example is very similar, but as evidence is growing that the current sequence represents a 3, that information is vital in predicting how the stroke will continue.
Given that the network is expected to have built up some representation of what digit it is reading, an interesting test is to see whether it is able to switch to the task of sequence classification. The input presentation remains the same: at every time step, the recurrent neural network is fed one step of the sequence of pen movements representing the strokes of a digit. However, we now also read the output of the 10 binary class variable outputs. The target for these is a one-hot representation of the digit, i.e. the target value for the output corresponding to the digit is one, and all nine other target values are zero. To obtain the output, softmax is used, and the sequence classification loss L_C for the classification outputs is the cross entropy, weighted by a factor α = 10:

L_C = \alpha \left( -\frac{1}{N} \sum_{n=1}^{N} \left[ y_n \log \hat{y}_n + (1 - y_n) \log\left(1 - \hat{y}_n\right) \right] \right)

In the following experiments, the loss consists of the sequence classification loss L_C, to which optionally the earlier sequence prediction loss L_P is added, regulated by a binary parameter γ:

L = L_C + \gamma L_P

The network is asked for a prediction of the digit class after each step it receives. Clearly, accurate classification is impossible during the first part of a sequence; before the first point is received, the sequence could represent any of the 10 digits with equal probability. As the sequence is received step by step however, the network receives more information. The prediction produced after receiving the one-but-last step of the sequence, i.e. at the point where the network was previously asked to predict the last step, is used as its final answer for predicting the digit class.
Figure 9: Performance on the full MNIST Pen Stroke Sequence Data Set (sequence-based batch size), zoomed to the first part of the run, and the same experiment with results for the full run.
We compare the following variants:
Transfer learning: sequence classification and sequence prediction — Starting from a trained sequence prediction model as obtained in Experiment 4, the earlier loss function is augmented with the sequence classification loss: L = L_C + L_P
Transfer learning: sequence classification only — Starting from a trained sequence prediction model, the loss function is switched such that it only reflects the classification performance, and no longer tracks the sequence prediction performance: L = L_C
Learning from scratch, sequence classification and sequence prediction — In this variant, learning starts from scratch, and both classification loss and prediction loss are used, as in the first experiment: L = L_C + L_P
Learning from scratch, sequence classification only — L = L_C
Figure 10: Using the sequence prediction model as a starting point for sequence classification: starting from a trained sequence prediction network, the task is switched to predicting the class of the digit (red and black lines). A comparison with learning a digit classification model from scratch (blue and green lines) shows that the internal state built up to predict sequence steps is helpful in predicting the class of the digit represented by the sequence.
Figure 10 shows the results; indeed the network is able to build further on its ability to predict pen stroke sequences, and learns the sequence classification task faster and more accurately than an identical network that learns the sequence classification task from scratch; in this first and straightforward transfer learning experiment based on the MNIST stroke sequence data set, a classification accuracy of 96.0% is reached¹. We note that performance on the MNIST sequence data cannot be compared to results obtained with the original MNIST data set, as the information in the input data is vastly reduced. This result sets a first baseline for the MNIST stroke sequence data set; we expect there is ample room for improvement. Simultaneously learning sequence prediction and sequence classification does not appear to provide an advantage, neither for transfer learning nor for learning from scratch.
¹This performance was reached after training for 7×10^7 sequence steps, i.e. roughly twice as long as the run shown in the chart.
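As a rough sketch of how the combined objective of Section 6.4 could be assembled, the following computes L = L_C + γ·L_P for a batch. The symbol names follow the reconstruction above, and the tensor shapes and numerical details are assumptions made for illustration, not the released implementation.

import tensorflow as tf

def combined_loss(class_logits, class_targets, prediction_loss, alpha=10.0, gamma=1.0):
    # class_logits: (batch, 10) raw outputs of the class units at the final prediction step.
    # class_targets: (batch, 10) one-hot digit labels.
    # prediction_loss: the sequence prediction loss L_P, computed elsewhere.
    # gamma switches sequence prediction on (1.0) or off (0.0); alpha weights L_C.
    y_hat = tf.nn.softmax(class_logits)
    eps = 1e-7  # numerical safety; an implementation detail, not from the paper
    per_example = -tf.reduce_sum(
        class_targets * tf.math.log(y_hat + eps)
        + (1.0 - class_targets) * tf.math.log(1.0 - y_hat + eps), axis=-1)
    l_c = alpha * tf.reduce_mean(per_example)
    return l_c + gamma * prediction_loss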
7 GENERATIVE RESULTS
To gain insight into what the network has learned, in this section we report examples of output of the network.
7.1 DEVELOPMENT DURING TRAINING
During training, the network receives each sequence step by step, and after each step, it outputs its expectation of the offset of the next point. In these figures and movies, we visualize the predictions of the network for a given sequence at different stages of the training process. All results have been obtained from a single run of Incremental Sequence Learning.
Figure 11 (panels: after 80, 140, 530, 570, and 650 batches): Movie showing what the network has learned over time. The movie shows the output for three sequences of the test data at different stages during training. To view, click the image or visit this link: https://edwin-de-jong.github.io/blog/isl/rnn-movies/generative-rnn-training-movie.gif
7.2 UNGUIDED OUTPUT GENERATION, A.K.A. NEURAL NETWORK HALLUCINATION
After training, the trained network can be used to generate output independently. The guidance that is present during training in the form of receiving each next step of the sequence following a prediction is not available here. Instead, the output produced by the network is fed back into the network as its next input, see Figures 12 and 13. Figure 14 shows example results.
Figure 12: Training: the target of a training step is used as the next input.
Figure 13: Generation: the output of the network is used as the next input.
Figure 14 (panels: output resembling a 2, a 3, and a 4): Unguided output of the network: after each step, the network's output is fed back as the next input. Clearly, the network has learned the ability to independently produce long sequences representing different digits that occurred in the training data.
7.3 SEQUENCE CLASSIFICATION
The third analysis of the behavior of the trained network is to view what happens during sequence classification. At each step of the sequence, we monitor the ten class outputs and visualize their output. As more steps of the sequence are being received, the network receives more information, and adjusts its expectation of what digit class the sequence represents.
(MNIST stroke sequence test image 25) Classification output for a sequence representing a 0. Initially, as the downward part of the curved stroke is being received, the network believes the sequence represents a 4. After passing the lowest point of the figure, it assigns higher likelihood to a 6. Only at the very end, just in time before the sequence ends, the prediction of the network switches for the last time, and a high probability is assigned to the correct class.
(MNIST stroke sequence test image 18) Classification output for a sequence representing a 3. Initially, the network estimates the sequence to represent a 7. Next, it expects a 2 is more likely.
After 20 points havebeen received, it concludes(correctly) that the sequencesrepresents a 3.●●MNIST stroke sequence test image 62●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●51015202530350 2 4 6 8Classification outputSequence stepDigit classClassification output for asequence representing a 9.While receiving the sequence,the dominant prediction of thenetwork is that the sequencerepresents a five; the openloop of the 9 and the straighttop line may contribute to this.When the last points are re-ceived, the network considera 9 to be more likely, but someambiguity remains.8 C ONCLUSIONSThere are many possible ways to apply the principles of incremental or curriculum learning tosequence learning, but so far a general understanding of which forms of curriculum sequence learninghave a positive effect is missing. We have investigated a particular approach to sequence learningwhere the training data is initially limited to the first few steps of each sequence. Gradually, as the16Under review as a conference paper at ICLR 2017network learns to predict the early parts of the sequences, the length of the part of the sequences usedfor training is increased. We name this approach Incremental Sequence Learning, and find that itstrongly improves sequence learning performance. Two other forms of curriculum sequence learningused for comparison did not display improvements compared to regular sequence learning. Theorigins of this performance improvement are analyzed in comparison experiments, as detailed below.A first observation was that with Incremental Sequence Learning, the time required to attain the besttest performance level of regular sequence learning was much lower; on average, the method reachedthis level twenty times faster, thus achieving a significant speedup and reduction of the computationalcost of sequence learning. More importantly, Incremental Sequence Learning was found to reducethe test error of regular sequence learning by 74%.To analyze the cause of the observed speedup and performance improvements, we first increasethe number of sequences per batch for Incremental Sequence Learning, so that all methods use thesame number of sequence steps per batch. This reduced the speedup, but the improvement of thegeneralization performance was maintained. We then replaced the RNN layers with feed forwardnetwork layers, so that the networks can no longer maintain information about the earlier part ofthe sequences. This completely removed the remaining advantage. This provides clear evidencethat the improvement in generalization performance is due to the specific ability of an RNN tobuild up internal representations of the sequences it receives, and that the ability to develop theserepresentations is aided by training on the early parts of sequences first.Next, we trained Incremental Sequence Learning on the full MNIST stroke sequence data set, andfound that the use of this larger training set further improves sequence prediction performance. Thetrained model was then used as a starting point for transfer learning, where the task was switchedfrom sequence prediction to sequence classification .We conclude that Incremental Sequence Learning provides a simple and easily applicable approachto sequence learning that was found to produce large improvements in both computation time andgeneralization performance. The dependency of later steps in a sequence on the preceding steps ischaracteristic of virtually all sequence learning problems. 
We therefore expect that this approach canyield improvements for sequence learning applications in general, and recommend its usage, giventhat exclusively positive results were obtained with the approach so far.9 R ESOURCESThe Tensorflow implementation that was used to perform these experiments is available here: https://github.com/edwin-de-jong/incremental-sequence-learningThe MNIST stroke sequence data set is available for download here: https://github.com/edwin-de-jong/mnist-digits-stroke-sequence-data/wiki/MNIST-digits-stroke-sequence-dataThe code for transforming the MNIST digit data set to a pen strokesequence data set has also been made available: https://github.com/edwin-de-jong/mnist-digits-as-stroke-sequences/wiki/MNIST-digits-as-stroke-sequences-(code)ACKNOWLEDGMENTSThe author would like to thank Max Welling, Dick de Ridder and Michiel de Jong for valuablecomments and suggestions on earlier versions.REFERENCESBengio, S., Vinyals, O., Jaitly, N., and Shazeer, N. (2015). Scheduled sampling for sequenceprediction with recurrent neural networks. In Proceedings of the 28th International Conference onNeural Information Processing Systems , NIPS’15, pages 1171–1179, Cambridge, MA, USA. MITPress.17Under review as a conference paper at ICLR 2017Bengio, Y ., Courville, A., and Vincent, P. (2013). Representation learning: A review and newperspectives. IEEE Trans. Pattern Anal. Mach. Intell. , 35(8):1798–1828.Bengio, Y ., Louradour, J., Collobert, R., and Weston, J. (2009). Curriculum learning. In Proceedingsof the 26th Annual International Conference on Machine Learning , ICML ’09, pages 41–48, NewYork, NY , USA. ACM.Bishop, C. (1994). Mixture density networks. Technical Report NCRG/94/0041, Aston University.Caruana, R. (1997). Multitask learning. Mach. Learn. , 28(1):41–75.Ciresan, D. C., Meier, U., and Schmidhuber, J. (2012). Multi-column deep neural networks for imageclassification. CoRR , abs/1202.2745.de Jong, E. D. and Oates, T. (2002). A coevolutionary approach to representation development.Proceedings of the ICML-2002 Workshop on Development of Representations , pages 1–6.Elman, J. L. (1991). Incremental learning, or the importance of starting small. crl technical report9101. Technical report, University of California, San Diego.Elman, J. L. (1993). Learning and development in neural networks: The importance of starting small.Cognition , 48:781–99.Giraud-Carrier, C. (2000). A note on the utility of incremental learning. AI Commun. , 13(4):215–223.Graves, A. (2013). Generating sequences with recurrent neural networks. CoRR , abs/1308.0850.He, K., Zhang, X., Ren, S., and Sun, J. (2015). Deep residual learning for image recognition. CoRR ,abs/1512.03385.Hinton, G. E. and Nair, V . (2005). Inferring motor programs from images of handwritten digits. InAdvances in Neural Information Processing Systems 18 [Neural Information Processing Systems,NIPS 2005, December 5-8, 2005, Vancouver, British Columbia, Canada] , pages 515–522.Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation , 9(8):1735–1780.Keskar, N. S., Mudigere, D., Nocedal, J., Smelyanskiy, M., and Tang, P. T. P. (2016). On large-batchtraining for deep learning: Generalization gap and sharp minima. CoRR , abs/1609.04836.LeCun, Y . and Cortes, C. (2010). MNIST handwritten digit database.Lipton, Z. C. (2015). A critical review of recurrent neural networks for sequence learning. CoRR ,abs/1506.00019.Moriarty, D. E. (1997). Symbiotic Evolution Of Neural Networks In Sequential Decision Tasks . 
PhDthesis, Department of Computer Sciences, The University of Texas at Austin. Technical ReportUT-AI97-257.Parisotto, E., Ba, L. J., and Salakhutdinov, R. (2015). Actor-mimic: Deep multitask and transferreinforcement learning. CoRR , abs/1511.06342.Pratt, L. Y . (1993). Discriminability-based transfer between neural networks. In Advances in NeuralInformation Processing Systems 5, [NIPS Conference] , pages 204–211, San Francisco, CA, USA.Morgan Kaufmann Publishers Inc.Rusu, A. A., Vecerik, M., Rothörl, T., Heess, N., Pascanu, R., and Hadsell, R. (2016). Sim-to-realrobot learning from pixels with progressive nets. arxiv:1610.04286 [cs.ro]. Technical report, DeepMind.Schlimmer, J. C. and Granger, R. H. (1986). Incremental learning from noisy data. Machine Learning ,1(3):317–354.Schmidt, M., Murphy, K., Fung, G., and Rosales, R. (2008). Structure learning in random fields forheart motion abnormality detection. In In CVPR .18Under review as a conference paper at ICLR 2017Sun, R. and Giles, C. L. (2001). Sequence learning: from recognition and prediction to sequentialdecision making. IEEE Intelligent Systems , 16(4):67–70.Thrun, S. (1996). Is learning the n-th thing any easier than learning the first. In Advances in NeuralInformation Processing Systems , volume 8, pages 640–646.Zaremba, W. and Sutskever, I. (2014). Learning to execute. CoRR , abs/1410.4615.Zhang, T. Y . and Suen, C. Y . (1984). A fast parallel algorithm for thinning digital patterns. Commun.ACM , 27(3):236–239.19
rJDEnJM4g
rJg_1L5gg
ICLR.cc/2017/conference/-/paper278/official/review
{"title": "Interesting idea, long experiments, but only a single non-standard dataset", "rating": "5: Marginally below acceptance threshold", "review": "The submitted paper proposes a new way of learning sequence predictors. In the lines of incremental learning and curriculum learning, easier samples are presented first and the complexity is increased during training. The particularity here is that the complexity is defined as the length of the sequences given for training, the premise being is that longer sequences are harder to learn, since they need a more complex internal representation.\n\nThe targeted application is sequence prediction from primed prefixes, tested on a single dataset, which the authors extract themselves from MNIST.\n\nThe idea in the paper is interesting and worth reading. There are also many interesting aspects of evaluation part, as the authors perform several ablation studies to rule out side-effects of the tests. The proposed learning strategy is compared to other strategies.\n\nHowever, my biggest concern is still with evaluation. The authors tested the method on a single dataset, which is non standard and derived from MNIST. Given the general nature of the claim, in order to confirm the interest of the proposed algorithm, it need to be tested on other datasets, public datasets, and on a different application.\n\nThe paper is too long and should be trimmed significantly.\n\nThe transfer learning part (from prediction to classification) is a different story and I do not see a clear connection to the main contribution of the paper.\n\nThe presentation and organization of the paper could be improved. It is quite sequentially written and sometimes reads like a student's report.\n\nThe loss given in the long unnumbered equation on page 6 should be better explained: provide explanations for each term, and make clearer what the different symbols mean. Learning is supervised, so which variables are predictions, and which are observations from the data (ground truth).\n\nNames in table 2 do not correspond to the descriptions in section 4.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Incremental Sequence Learning
["Edwin D. de Jong"]
Deep learning research over the past years has shown that by increasing the scope or difficulty of the learning problem over time, increasingly complex learning problems can be addressed. We study incremental learning in the context of sequence learning, using generative RNNs in the form of multi-layer recurrent Mixture Density Networks. While the potential of incremental or curriculum learning to enhance learning is known, indiscriminate application of the principle does not necessarily lead to improvement, and it is essential therefore to know which forms of incremental or curriculum learning have a positive effect. This research contributes to that aim by comparing three instantiations of incremental or curriculum learning. We introduce Incremental Sequence Learning, a simple incremental approach to sequence learning. Incremental Sequence Learning starts out by using only the first few steps of each sequence as training data. Each time a performance criterion has been reached, the length of the parts of the sequences used for training is increased. We introduce and make available a novel sequence learning task and data set: predicting and classifying MNIST pen stroke sequences. We find that Incremental Sequence Learning greatly speeds up sequence learning and reaches the best test performance level of regular sequence learning 20 times faster, reduces the test error by 74%, and in general performs more robustly; it displays lower variance and achieves sustained progress after all three comparison methods have stopped improving. The other instantiations of curriculum learning do not result in any noticeable improvement. A trained sequence prediction model is also used in transfer learning to the task of sequence classification, where it is found that transfer learning realizes improved classification performance compared to methods that learn to classify from scratch.
["Deep learning", "Supervised Learning"]
https://openreview.net/forum?id=rJg_1L5gg
https://openreview.net/pdf?id=rJg_1L5gg
https://openreview.net/forum?id=rJg_1L5gg&noteId=rJDEnJM4g
Under review as a conference paper at ICLR 2017INCREMENTAL SEQUENCE LEARNINGEdwin D. de JongDepartment of Information and Computing SciencesUtrecht Universityhttps://edwin-de-jong.github.io/ABSTRACTDeep learning research over the past years has shown that by increasing the scopeor difficulty of the learning problem over time, increasingly complex learningproblems can be addressed. We study incremental learning in the context ofsequence learning, using generative RNNs in the form of multi-layer recurrentMixture Density Networks. While the potential of incremental or curriculumlearning to enhance learning is known, indiscriminate application of the principledoes not necessarily lead to improvement, and it is essential therefore to knowwhich forms of incremental or curriculum learning have a positive effect. Thisresearch contributes to that aim by comparing three instantiations of incremental orcurriculum learning.We introduce Incremental Sequence Learning , a simple incremental approach tosequence learning.Incremental Sequence Learning starts out by using only the first few steps of eachsequence as training data. Each time a performance criterion has been reached, thelength of the parts of the sequences used for training is increased.To evaluate Incremental Sequence Learning and comparison methods, we introduceand make available a novel sequence learning task and data set: predicting andclassifying MNIST pen stroke sequences, where the familiar handwritten digitimages have been transformed to pen stroke sequences representing the skeletonsof the digits.We find that Incremental Sequence Learning greatly speeds up sequence learningand reaches the best test performance level of regular sequence learning 20 timesfaster, reduces the test error by 74%, and in general performs more robustly; itdisplays lower variance and achieves sustained progress after all three comparisonmethods have stopped improving. The two other instantiations of curriculumlearning do not result in any noticeable improvement. A trained sequence predictionmodel is also used in transfer learning to the task of sequence classification, whereit is found that transfer learning realizes improved classification performancecompared to methods that learn to classify from scratch.1 I NTRODUCTION1.1 I NCREMENTAL LEARNING , TRANSFER LEARNING ,AND REPRESENTATION LEARNINGDeep learning research over the past years has shown that by increasing the scope or difficulty of thelearning problem over time, increasingly complex learning problems can be addressed. This principlehas been described as Incremental learning by Elman (1991), and has a long history. Schlimmer andGranger (1986) described a pseudo-connectionist distributed concept learning approach involvingincremental learning. Elman (1991) defined Incremental Learning as an approach where the trainingdata is not presented all at once, but incrementally; see also Elman (1993). Giraud-Carrier (2000)defines Incremental Learning as follows: “A learning task is incremental if the training examples usedto solve it become available over time, usually one at a time.“ Bengio et al. (2009) introduced theframework of Curriculum Learning. The central idea behind this approach is that a learning system isguided by presenting gradually more and/or more complex concepts. 
A formal definition is providedspecifying that the distribution over examples converges monotonically towards the target training1Under review as a conference paper at ICLR 2017distribution, and that the entropy of the distributions visited over time, and hence the diversity oftraining examples, increases.An extension of the notion of incremental learning is to also let the learning task vary over time.This approach, known as Transfer Learning or Inductive Transfer, was first described by Pratt (1993).Thrun (1996) reported improved generalization performance for lifelong learning and describedrepresentation learning , whereas Caruana (1997) considered a Multitask learning setup wheretasks are learned in parallel while using a shared representation. In coevolutionary algorithms, thecoevolution of representations with solutions that employ them, see e.g. Moriarty (1997); de Jongand Oates (2002), provides another approach to representation learning. Representation learning canbe seen as a special form of transfer learning, where one goal is to learn adequate representations,and the other goal, addressed in parallel or sequentially, is to use these representations to address thelearning problem.Several of the recent successes of deep learning can be attributed to representation learning andincremental learning. Bengio et al. (2013) provide a review and insightful discussion of representationlearning. Parisotto et al. (2015) report experiments with transfer learning across Atari 2600 arcadegames where up to 5 million frames of training time in each game are saved. More recently, successfultransfer of robot learning from the virtual to the real world was achieved using transfer learning, seeRusu et al. (2016). And at the annual ImageNet Large-Scale Visual Recognition Challenge (ILSVRC),the depth of networks has steadily increased over the years, so far leading up to a network of 152layers for the winning entry in the ILSVRC 2015 classification task; see He et al. (2015).1.2 S EQUENCE LEARNINGWe study incremental learning in the context of sequence learning . The aim in sequence learningis to predict, given a step of the sequence, what the next step will be. By iteratively feeding thepredicted output back into the network as the next input, the network can be used to produce acomplete sequences of variable length. For a discussion of variants of sequence learning problems,see Sun and Giles (2001); a more recent treatment covering recurrent neural networks as used here isprovided by Lipton (2015).An interesting challenge in sequence learning is that for most sequence learning problems of interest,the next step in a sequence does not follow unambiguously from the previous step. If this werethe case, i.e. if the underlying process generating the sequences satisfies the Markov property, thelearning problem would be reduced to learning a mapping from each step to the next. Instead, stepsin the sequence may depend on some or all of the preceding steps in the sequence. Therefore, a mainchallenge faced by a sequence learning model is to capture relevant information from the part ofthe sequence seen so far. 
This ability to capture relevant information about future sequences it mayreceive must be developed during training; the network must learn the ability to build up internalrepresentations which encode relevant aspects of the sequence that is received.1.3 I NCREMENTAL SEQUENCE LEARNINGThe dependency on the partial sequence received so far provides a special opportunity for incrementallearning that is specific to sequence learning. Whereas the examples in a supervised learning problembear no known relation to each other, the steps in a sequence have a very specific relation; later stepsin the sequence can only be learned well once the network has learned to develop the appropriateinternal state summarizing the part of the sequence seen so far. This observation leads to the idea thatsequence learning may be expedited by learning to predict the first few steps in each sequence firstand, once reasonable performance has been achieved and (hence) a suitable internal representation ofthe initial part of the sequences has been developed, gradually increasing the length of the partialsequences used for training.Aprefix of a sequence is a consecutive subsequence (a substring) of the sequence starting fromthe first element; e.g. the prefix S3of a sequence Sconsists of the first 3 steps of S. We defineIncremental Sequence Learning as an approach to sequence learning whereby learning starts out byusing only a short prefix of each sequence for training, and where the length of the prefixes used fortraining is gradually increased, up to the point where the complete sequences are used. The structureof sequence learning problems suggests that adequate modeling of the preceding part of the sequence2Under review as a conference paper at ICLR 2017is a requirement for learning later parts of the sequence; Incremental Sequence Learning draws theconsequence of this by learning to predict the earlier parts of the sequences first.1.4 R ELATED WORKIn presenting the framework of Curriculum Learning, Bengio et al. (2009) provide an examplewithin the domain of sequence learning, more specifically concerning language modeling. There, thevocabulary used for training on word sequences is gradually increased, i.e. the subset of sequencesused for training is gradually increased; this is analogous to one of the comparison methods usedhere. Another specialization of Curriculum Learning to the context of sequence learning describedby Bengio et al. (2015) addresses the discrepancy between training , where the true previous stepis presented as input, and inference , where the previous output from the network is used as input;with scheduled sampling , the probability of using the network output as input is adapted to graduallyincrease over time. Zaremba and Sutskever (2014) apply curriculum learning in a sequence-to-sequence learning context where a neural network learns to predict the outcome of Python programs.The generation of programs forming the training data is parameterized by two factors that control thecomplexity of the programs: the number of digits of the numbers used in the programs and the degreeof nesting. While a number of different instantiations of incremental or curriculum learning havebeen described in the context of sequence learning, no clear guidance is available on which forms areeffective. 
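To fix notation for the prefix-based scheme defined in Section 1.3 above: writing a sequence as S = (s_1, s_2, ..., s_T), its length-k prefix is S_k = (s_1, ..., s_k) with k <= T. Incremental Sequence Learning trains on the prefixes S_k of every training sequence and increases k over time, up to the full length T; the concrete schedule for k used in the experiments is given in Sections 4 and 5.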
The particular form explored here of learning to predict the earlier parts of sequences firstis straightforward, it makes use of the particular structure of sequence learning problems, and it iseasy to implement; yet it has received very limited attention so far.2 MNIST H ANDWRITTEN DIGITS AS PENSTROKE SEQUENCES2.1 M OTIVATION FOR REPRESENTING DIGITS AS PEN STROKE SEQUENCESThe classification of MNIST digit images, see LeCun and Cortes (2010), is one example of a task onwhich the success of deep learning has been demonstrated convincingly; a test error rate of 0.23% wasobtained by Ciresan et al. (2012) using Multi-column Deep Neural Networks. To obtain a sequencelearning data set for evaluating Incremental Sequence Learning, we created a variant of the familiarMNIST handwritten digit data set provided by LeCun and Cortes (2010) where each digit image istransformed into a sequence of pen strokes that could have generated the digit.One motivation for representing digits as strokes is the notion that when humans try to discern digitsor letters that are difficult to read, it appears natural to trace the line so as to reconstruct what paththe author’s pen may have taken. Indeed, Hinton and Nair (2005) note that the idea that patterns canbe recognized by figuring out how they were generated was already introduced in the 1950’s, anddescribe a generative model for handwritten digits that uses two pairs of opposing springs whosestiffnesses are controlled by a motor program.Pen stroke sequences also form a natural and efficient representation for digits; handwriting constitutesa canonical manifestation of the manifold hypothesis, according to which “real-world data presentedin high dimensional spaces are expected to concentrate in the vicinity of a manifold Mof muchlower dimensionality dM, embedded in high dimensional input space Rdx”; see Bengio et al. (2013).Specifically: (i) the vast majority of the pixels are white, (ii) almost all digit images consist of a singleconnected set of pixels, and (iii) the shapes mostly consist of smooth curved lines. This suggests thatcollections of pen strokes form a natural representation for the purpose of recognizing digits.The relevance of the manifold hypothesis can also be appreciated by considering the space of all 2-D28x28 binary pixel images; when sampling uniformly from this space, one is likely to only encounterimages resembling TV noise, and the chances of observing any of the 70000 MNIST digit imagesis astronomically small. By contrast, a randomly generated pen stroke sequence is not unlikely toresemble a part of a digit, such as a short straight or curved line segment. This increased alignment ofthe digit data with its representation in the form of pen stroke sequences implies that the amount ofcomputation required to address the learning problem can potentially be vastly reduced.3Under review as a conference paper at ICLR 20172.2 C ONSTRUCTION OF THE PEN STROKE SEQUENCE DATA SETThe MNIST handwritten digit data set consists of 60000 training images and 10000 test images, eachforming 28 x 28 bit map images of written numerical digits from 0 to 9. The digits are transformedinto one or more pen strokes, each consisting of a sequence of pen offset pairs (dx;dy ). To extractthe pen stroke sequences, the following steps are performed:1.Incremental thesholding. 
Starting from the original MNIST grayscale image, the followingcharacteristics are measured:The number of nonzero pixelsThe number of connected components, for both the 4-connected and 8-connectedvariants.Starting from a thresholding level of zero, the thresholding level is increased stepwise,until either (A) the number of 4-connected or 8-connected components changes, (B) thenumber of remaining pixels drops below 50% of the original number, or (C) the thresholdinglevel reaches a preselected maximum level (250). When any of these conditions occur,the previous level (i.e. the highest thresholding level for which none of these conditionsoccurred) is selected.2. A common method for image thinning, described by Zhang and Suen (1984), is applied.3.After the thresholding and thinning steps, the result is a skeleton of the original digit imagethat mostly consists of single-pixel-width lines.4.Finding a pen stroke sequence that could have produced the digit skeleton can be viewedas a Traveling Salesman Problem where, starting from the origin, all points of the digitskeleton are visited. Each point is represented by the pen offset (dx;dy )from the previousto the current point. For any transition to a non-neighboring pixel (based on 8-connecteddistance), an extra step is inserted with ( dx,dy) = (0, 0) and with eos = 1 (end-of-stroke), toindicate that the current stroke has ended and the pen is to be lifted off the paper. At theend of each sequence, a final step with values (0, 0, 1, 1) is appended. The fourth valuerepresents eod, end-of-digit. This final tuple of the sequence marks that both the currentstroke and the current sequence have ended, and forms a signal that the next input presentedto the network will belong to another digit.Figure 1: The original image (top left), thresholded image, thinned image, and actual extracted penstroke image.4Under review as a conference paper at ICLR 2017(6, 4)(1, -1)(1, 0) (1, 0)(0, 1)(1, 1)(-1, 1)(1, 1)(1, 1)(0, 1)(0, 1)(-1, 1)(-1, 1)(-1, 0) (-1, 0)(-1, -1)Figure 2: Example of a pen stroke image.dxdy eos eod6 4 0 01 -1 0 01 0 0 01 0 0 01 1 0 00 1 0 0-1 1 0 01 1 0 01 1 0 00 1 0 00 1 0 0-1 1 0 0-1 1 0 0-1 0 0 0-1 0 0 0-1 -1 0 00 0 1 1Table 1: Corresponding sequence. The origin is atthe top left, and the positive vertical direction isdownward. From the origin to the first point, thefirst offset is 6 steps to the right and 4 down: (6,4). Then to the second point: 1 to the right and 1up, (1, -1); etc.It is important to note that the thinning operation discards pixels and therefore information; thisimplies that the sequence learning problem constructed here should be viewed as a new learningproblem, i.e. performance on this new task cannot be directly compared to results on the originalMNIST classification task. While for many images the thinned skeleton is an adequate representationthat retains original shape, in other cases relevant information is lost as part of the thinning process.Distribution of Sequence LengthsSequence lengthFrequency0 20 40 60 80 10005001500 2500Figure 3: Distribution of sequence lengths. The average sequence length is approximately 40 steps.3 N ETWORK ARCHITECTUREWe adopt the approach to generative neural networks described by Graves (2013) which makes use ofmixture density networks , introduced by Bishop (1994). 
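Referring back to the extraction pipeline of Section 2.2 above: the following is a minimal Python sketch of the image-to-stroke-sequence conversion, intended only as an illustration and not as the author's released code (which is linked in Section 9). It assumes a fixed threshold in place of the incremental thresholding scheme, uses scikit-image's skeletonization in place of the paper's exact Zhang-Suen implementation, and orders the skeleton pixels with a simple greedy nearest-neighbour rule rather than a full traveling-salesman treatment; the function name digit_to_strokes and all parameter values are illustrative assumptions.

    # Rough sketch of the image-to-stroke-sequence conversion of Section 2.2.
    # Simplifications: fixed threshold instead of incremental thresholding,
    # greedy nearest-neighbour ordering instead of a traveling-salesman tour.
    import numpy as np
    from skimage.morphology import skeletonize

    def digit_to_strokes(image, threshold=128):
        # image: 2-D numpy array of grayscale values in [0, 255]
        binary = image >= threshold                          # 1. threshold
        skeleton = skeletonize(binary)                       # 2. thin to a one-pixel-wide skeleton
        points = [tuple(p) for p in np.argwhere(skeleton)]   # skeleton pixels as (row, col)
        sequence = []
        prev = (0, 0)                                        # pen starts at the origin
        while points:
            # 3. visit the unvisited skeleton pixel closest to the current pen position
            nxt = min(points, key=lambda p: (p[0] - prev[0]) ** 2 + (p[1] - prev[1]) ** 2)
            points.remove(nxt)
            dy, dx = nxt[0] - prev[0], nxt[1] - prev[1]
            # a jump to a non-8-connected pixel ends the current stroke (eos = 1)
            if sequence and max(abs(dx), abs(dy)) > 1:
                sequence.append((0, 0, 1, 0))
            sequence.append((dx, dy, 0, 0))                  # (dx, dy, eos, eod)
            prev = nxt
        sequence.append((0, 0, 1, 1))                        # end of stroke and end of digit
        return sequence

Applied to the example of Figure 2, a conversion of this kind would produce tuples such as (6, 4, 0, 0) for the first offset from the origin and (0, 0, 1, 1) as the closing step, matching the layout of Table 1, although the exact visiting order may differ from the released code.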
One sequence corresponds to one completeimage of a digit skeleton, represented as a sequence of hdx;dy;eos;eodituples, and may containone or more strokes; see previous section.The network has four input units, corresponding to these four input variables. To produce the inputfor the network, the (dx;dy )pairs are scaled to yield two real-valued input variables dxanddy. The5Under review as a conference paper at ICLR 2017variables indicating the end-of-stroke (EOS) and end-of-digit (EOD) are binary inputs. Two hiddenLSTM layers, see Hochreiter and Schmidhuber (1997), of 200 units each are used.Figure 4: Network architecture; see text.The input units receive one step of a sequence at a time, starting with the first step. The goal for theoutput units is to predict the immediate next step in the sequence, but rather than trying to directlypredictdxanddy, the output units represent a mixture of bivariate Gaussians. The output layerconsists of the end of stroke signal (EOS), and a set of means i, standard deviations i, correlationsi, and mixture weights ifor each of theMmixture components, where the number of mixturecomponentsM= 17 was found empirically to yield good results and is used in the experimentspresented here. Additionally, a binary indicator signaling the end of digit (EOD) is used, to mark theend of each sequence. In addition to these output elements for predicting the pen stroke sequences,10 binary class variable outputs are added, representing the 10 digit classes. This facilitates switchingthe task from sequence prediction to sequence classification, as will be discussed later; the output ofthese units is ignored in the sequence prediction experiments. The number of output units depends onthe number of mixture components used, and equals 6M+ 2 + 10 = 114 .For regularization, we found in early experiments that using the maximum weight as a regularizationterm produced better results than using the more common L-2 regularization. This approach can beviewed as L-1-norm regularization, and has been used previously in the context of regularization,see e.g. Schmidt et al. (2008).The definition of the sequence prediction loss LPfollows Graves (2013), with the difference thatterms for the eod and for the L- 1loss are included:L(x) =TXt=1log0@XjjtN(xt+1jjt;jt;jt)1Alogeostif(xt+1)3= 1log (1eost)otherwiselogeodtif(xt+1)4= 1log (1eodt)otherwise+jjwjj14 I NCREMENTAL SEQUENCE LEARNING AND COMPARISON METHODSBelow we describe Incremental Sequence Learning and three comparison methods, where two ofthe comparison methods are other instantiations of curriculum learning, and the third comparison isregular sequence learning without a curriculum learning aspect.6Under review as a conference paper at ICLR 2017Regular sequence learningThe baseline method is regular sequence learning; here, all training data is used from theoutset.Incremental Sequence Learning: increasing sequence lengthPredicting the second step of a sequence given the first step is a straightforward mappingproblem that can be handled using regular supervised learning methods. The predictionof later steps in the sequence can potentially depend on all preceding steps, and for somecases may only be learned once an effective internal representation has been developedthat summarizes relevant information present in the preceding part of the sequence. 
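Stepping back to the prediction loss defined in Section 3 above: the unnumbered equation for L_P is garbled in the extracted text. Reconstructed from the surrounding description and from the formulation in Graves (2013), it plausibly reads

    L_P(x) = \sum_{t=1}^{T} \Bigg[ -\log\Big( \sum_{j} \pi_t^{j}\, \mathcal{N}\big(x_{t+1} \mid \mu_t^{j}, \sigma_t^{j}, \rho_t^{j}\big) \Big)
                                   - \begin{cases} \log e_t & \text{if } (x_{t+1})_3 = 1 \\ \log(1 - e_t) & \text{otherwise} \end{cases}
                                   - \begin{cases} \log d_t & \text{if } (x_{t+1})_4 = 1 \\ \log(1 - d_t) & \text{otherwise} \end{cases} \Bigg]
             + \lambda\, \lVert w \rVert_{\infty}

where \pi_t^j, \mu_t^j, \sigma_t^j and \rho_t^j are the mixture weights and bivariate Gaussian parameters produced at step t, e_t and d_t are the end-of-stroke and end-of-digit outputs, and the last term is the weight regularizer with coefficient \lambda (0.25 in Section 5). This is a reconstruction rather than a verbatim copy: the norm subscript is ambiguous in the extraction, and the text's "maximum weight" wording suggests the infinity norm, which is the reading used here.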
Forpredicting the 17thstep for example, the available input consist of the previous 16 steps,and the network must learn to construct a compact representation of the preceding steps thathave been seen. More specifically, it must be able to distinguish between subspaces of thesequence space that correspond to different distributions for the next step in the sequence.The number of possible contexts grows exponentially with the position in the sequence, andthe task of summarizing the preceding sequence therefore potentially becomes more difficultas a function of the position within the sequence. The problem of learning to predict stepslater on in the sequence is therefore potentially much harder than learning to predict theearlier steps. In Incremental Sequence Learning therefore, the length of sequences presentedto the network is increased as learning progresses.Increasing training set sizeBengio et al. (2009) describe an application of curriculum learning to sequence learning,where the task is to predict the best word which can follow a given context of words in acorrect English sentence. The curriculum strategy used there is to grow the vocabulary size.Transferring this to the context of pen stroke sequence generation, the most straightforwardtranslation is to use subsets of the training data that grow in size, where the order of examplesthat are added to the training set is random.Increasing number of classesThe network is first presented with sequences from only one digit class; e.g. all zeros . Thenumber of classes is increased until all 10 digit classes are represented in the training data.All three curriculum learning methods employ a threshold criterion based on the training RMSE;once a specified level of the RMSE has been reached, the set of training examples (determined by thenumber of sequence steps used, the number of sequences used, or the number of digits) is increased.We note that many possible variants of this simple adaptive scheme are possible, some of which mayprovide improvements of the results.5 E XPERIMENTAL SETTINGSIn this section, we describe the experimental setup in detail.The configuration of the baseline method, regular sequence learning, is as follows. The number ofmixture components M= 17, two hidden layers of size 200 are used. A batch size of 50 sequencesper batch is used in these first experiments. The learning rate is = 0:0025 , with a decay rate of0.99995 per epoch. The order of training sequences (not steps within the sequences) is randomized.The weight of the regularization component = 0:25. In these first experiments, a subset of 10 000training sequences and 5 000 test sequences is used. The error measure in these figures is the RMSEof the pen offsets (unscaled) predicted by the network given the previous pen movement.The RMSE is calculated based on the difference between the predicted and actual (dx;dy )pairs,scaled back to their original range of pixel units, so as to obtain an interpretable error; the eosandeodcomponents of the error, which do form part of the loss, are not used in this error measure. For themethod where the sequence length is varied, the number of individual points (input-target pairs) thatmust be processed per sequence varies over the course of a run. 
The number of sequences processed(or collections thereof such as batches or epochs) is therefore no longer an adequate measure ofcomputational expense; performance is therefore reported as a function of the number of pointsprocessed.Details per method:7Under review as a conference paper at ICLR 2017Incremental Sequence LearningThe initial sequence length is 2, meaning that the first two points of each sequence are used,i.e. after feeding the first point as input, the second point is to be predicted. Once the trainingRMSE drops below the threshold value of 4, the length is doubled, up to the point where itreaches the maximum sequence length.Increasing training set sizeThe initial training set size is 10. Each time the RMSE threshold of 4 is reached, this amountis doubled, up to the point where the complete set of training sequences is used.Increasing number of digit classesThe initial number of classes is 1, meaning that only sequences representing the first digit(zero) are used. Each time the RMSE threshold of 4 is reached, this amount is doubled, upto the point where all 10 digit classes are used.6 E XPERIMENTAL RESULTS6.1 S EQUENCE PREDICTION : COMPARISON OF THE METHODSFigures 5 shows a comparison of the results of the four methods. The baseline method (in red) doesnot use curriculum learning, and is presented with the entire training set from the start. IncrementalSequence Learning (in green) performs markedly better than all comparison methods. It reaches thebest test performance of the baseline methods twenty times faster ; see the horizontal dotted black line.Moreover, Incremental Sequence Learning greatly improves generalization; on this subset of the data,the average test performance over 10 runs reaches 1.5 for Incremental Sequence Learning vs 3.9 forregular sequence learning, representing a reduction of the error of 74%.0e+00 2e+06 4e+06 6e+06 8e+06 1e+070 10 20 30 40 50 60Experiment 1: RNN, sequence−based batch sizeTest error, average of 10 runsNumber of sequence steps processedRMSERegular sequence learningIncremental sequence learningIncremental number of classesIncremental number of sequencesBest test performance forregular sequence learning0.0 0.2 0.4 0.6 0.8 1.0Figure 5: Comparison of the test error of the four methods, averaged over ten runs. The dotted linesindicate, at each point in time, which fraction of the training data has been made available at thatpoint for the method of the corresponding color.8Under review as a conference paper at ICLR 2017We furthermore note that the variance of the test error is substantially lower than for each of the othermethods, as seen in the performance graphs; and where the three comparison methods reach theirbest test error just before 4106processed sequence steps and then begin to deteriorate, the test errorfor incremental sequence learning continues to steadily decrease over the course of the run.Method Test set errorRegular sequence learning 7.82Incremental sequence learning 2.06Incremental number of classes 7.64Incremental number of sequences 6.27Table 2: Best value for the average over 10 runs of the test set error obtained by each of the methodsin Experiment 1. 
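To make the schedule of Sections 4 and 5 concrete, here is a minimal Python sketch of the prefix-doubling curriculum, under the assumption of a train_epoch(model, batches) helper that runs one training pass and returns the training RMSE; the helper, the function names, and the data layout (a list of per-sequence step arrays) are illustrative assumptions rather than the released implementation.

    # Incremental Sequence Learning, sketched: train on the first 2 steps of each
    # sequence and double the prefix length whenever training RMSE drops below 4.

    def make_prefix_batches(sequences, prefix_len, points_per_batch=2000):
        # Point-based batching as in Experiment 2: each batch holds a roughly
        # constant number of sequence steps rather than a constant number of sequences.
        prefixes = [seq[:prefix_len] for seq in sequences]
        batches, batch, n_points = [], [], 0
        for p in prefixes:
            batch.append(p)
            n_points += len(p)
            if n_points >= points_per_batch:
                batches.append(batch)
                batch, n_points = [], 0
        if batch:
            batches.append(batch)
        return batches

    def incremental_sequence_learning(model, sequences, train_epoch,
                                      rmse_threshold=4.0, n_epochs=1000):
        max_len = max(len(seq) for seq in sequences)
        prefix_len = 2                                       # initial prefix length
        for _ in range(n_epochs):
            batches = make_prefix_batches(sequences, prefix_len)
            rmse = train_epoch(model, batches)
            if rmse < rmse_threshold and prefix_len < max_len:
                prefix_len = min(2 * prefix_len, max_len)    # double on reaching the threshold
        return model

The two comparison curricula follow the same threshold-and-double pattern, but grow the number of training sequences or the number of digit classes instead of the prefix length.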
Incremental Sequence Learning achieves a reduction of 74% compared to regularsequence learning.The two other curriculum methods do not provide any speedup or advantage compared to the baselinemethod, and in fact result in a higher test error; indiscriminate application of the curriculum learningprinciple apparently does not guarantee improved results and it is important therefore to discoverwhich forms of curriculum learning can confer an advantage.To explain the dramatic improvement achieved by Incremental Sequence Learning, we consider twopossible hypotheses:H1: The number of sequences per batch is fixed (50), but the number of sequence steps or pointsvaries, and is initially much smaller (2) for Incremental Sequence Learning. Thus, when measuredin terms of the number of points that are being processed, the batch size for Incremental SequenceLearning is initially much smaller than for the remaining methods, and it increases adaptively overtime. HypothesisH1therefore is that (A) the smaller batch size improves performance, see Keskaret al. (2016) for earlier findings in this direction, and/or (B) the adaptive batch size aspect has apositive effect on performance.H2: Effectively learning later parts of the sequence requires an adequate internal representation ofthe preceding part of the sequence, which must be learned first; this formed the motivation for theIncremental Sequence Learning method.To test the first hypothesis, H1, we design a second experiment where the batch size is no longerdefined in terms of the number of sequences, but in terms of the number of points or sequence steps,where the number of points is chosen such that the expected total number of points for the baselinemethod remains the same. Thus, whereas a batch for regular sequence learning contains 50 sequencesof length 40 on average yielding 2000 points, Incremental Sequence Learning will start out withbatches containing 1000 sequences of 2 points each, yielding the same total number of points.Figure 6 shows the results. This change reduces the speedup during the earlier part of the runs, andthus partially explains the improvements observed with Incremental Sequence Learning. However,part of the speedup is still present, and moreover the three other observed improvements remain:Incremental Sequence Learning still features strongly improved generalization performanceIncremental Sequence Learning still has a much lower variance of the test errorIncremental Sequence Learning still continues improving at the point where the test perfor-mance of all other methods starts deterioratingIn summary, the adaptive and initially smaller batch size of Incremental Sequence Learning explainspart of the observed improvements, but not all. We therefore test to what extent hypothesis H2playsa role. To see whether the ability to first learn a suitable representation based on the earlier parts ofthe sequences plays a role, we compare the situation where this effect is ruled out. A straightforwardway to achieve this is to use Feed-Forward Neural Networks (FFNNs); whereas Recurrent NeuralNetworks (RNNs) are able to learn such a representation by learning to build up relevant internalstate, FFNNs lack this ability. Therefore if any advantage of Incremental Sequence Learning is seenwhen using FFNNs, it cannot be due to hypothesis H2. 
Conversely, if using FFNNs removes theadvantage, the advantage must have be due to the difference between FFNNs and RNNs, whichexactly corresponds to the ability to build up an informative internal representation, i.e. H2. Since9Under review as a conference paper at ICLR 20170e+00 2e+06 4e+06 6e+06 8e+06 1e+070 10 20 30 40 50 60Experiment 2: RNN, point−based batch sizeTest error, average of 10 runsNumber of sequence steps processedRMSERegular sequence learningIncremental sequence learningIncremental number of classesIncremental number of sequencesBest test performance forregular sequence learning0.0 0.2 0.4 0.6 0.8 1.0Figure 6: Comparison of the test error of the four methods, averaged over ten runs.we want to explain the remaining part of the effect, we also use a batch size based on the number ofpoints, as in Experiment 2.Figure 7 shows the results. As the figure shows, when using FFNNs, the advantage of IncrementalSequence Learning is entirely lost. This provides a clear demonstration that both of the hypotheses H1andH2play a role. Together the two hypotheses explain the total effect of the difference, suggestingthat the proposed hypotheses are also the only explanatory factors that play a role.It is interesting to compare the performance of the RNN and their FFNN variants, by comparingthe results of Experiments 2 and 3. From this comparison, it is seen that for Incremental SequenceLearning, the RNN variant achieves improved performance compared to the FFNN variant, as wouldbe expected, since a FFNN cannot make use of any knowledge of the preceding part of the sequenceand is thus limited to learning a general mapping between two subsequent pen offsets pairs (dxk;dyk)and(dxk+1;dyk+1). However, it is the only method of the four to do so; for all three other methods,around the point where test performance for the RNN variants starts to deteriorate (after around4106processed sequence steps), FFNN performance continues to improve and surpasses that of theRNN variants. This suggests that Incremental Sequence Learning is the only method that is able toutilize information about the preceding part of the sequence, and thereby surpass FFNN performance.In terms of absolute performance, a strong further improvement can be obtained by using the entiretraining set, as will be seen in the next section. These results suggest that learning the earlier parts ofthe sequence first can be instrumental in sequence learning.6.2 L OSS AS A FUNCTION OF SEQUENCE POSITIONTo further analyze why variation of the sequence length has a particularly strong effect on sequencelearning, we evaluate how the relative difficulty of learning a sequence step relates to the positionwithin the sequence. 
To do so, we measure the average loss contribution of the points or steps withina sequence as a function of their position within the sequence, as obtained with a learning method that10Under review as a conference paper at ICLR 20170e+00 2e+06 4e+06 6e+06 8e+06 1e+070 10 20 30 40 50 60Experiment 3: FFNN, point−based batch sizeTest error, average of 10 runsNumber of sequence steps processedRMSERegular sequence learningIncremental sequence learningIncremental number of classesIncremental number of sequencesBest test performance forregular sequence learning0.0 0.2 0.4 0.6 0.8 1.0Figure 7: Comparison of the test error of the four methods, averaged over ten runs.learns entire sequences (no incremental learning), averaged over the first hundred epochs of training.Figure 8 shows the results.0 10 20 30 40 50 60−50 −40 −30 −20 −10 0Loss vs. sequence positionPosition in sequenceLossFigure 8: The figure shows the average loss contribution of the points or steps within a sequenceas a function of their position within the sequence (see text). The first steps are fundamentallyunpredictable. Once some context has been received, the loss for the next steps steeply drops. Lateron in the sequence however, the loss increases strongly. This effect may be explained by the fact thatthe number of possible preceding contexts increases exponentially, thus posing stronger requirementson the learning system for steps later on in the sequence, and/or by the point that later parts of thesequences can only be learned adequately once earlier parts have been learned first, as later steps candepend on any of the earlier steps.11Under review as a conference paper at ICLR 2017The first steps are fundamentally unpredictable as the network cannot know which example it willreceive next; accordingly, at the start of the sequence, the error is high, as the method cannot knowin advance what the shape or digit class of the new sequence will be. Once the first steps of thesequence have been received and the context increasingly narrows down the possibilities, the loss forthe prediction of the next steps steeply drops. Subsequently however, as the position in the sequenceadvances, the loss increases strongly, and exceeds the initial uncertainty of the first steps. This effectmay be explained by the fact that the number of possible preceding contexts increases exponentially,thus posing stronger requirements on the learning system for steps later on in the sequence.6.3 R ESULTS ON THE FULL MNIST P ENSTROKE SEQUENCE DATA SETThe results reported so far were based on a subset of 10000 training sequences and 5000 testsequences, in order to complete a sufficient number of runs for each of the experiments within areasonable amount of time. Given the positive results obtained with Incremental Sequence Learning,we now train this method on the full MNIST Pen Stroke Sequence Data Set, consisting of 60000training sequences and 10000 test sequences (Experiment 4). In these experiments, a batch size of500 sequences instead of 50 is used.Figure 9 shows the results. Compared to the performance of the above experiments, a strongimprovement is obtained by training on this larger set of examples; whereas the best test error inthe results above was slightly above 1.5, the test performance for this experiment drops below one;a test error of 0.972 on the full test data set is obtained. 
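As a brief aside on the per-position analysis of Section 6.2 (Figure 8) above: once per-step losses are available, averaging them by position in the sequence is a small aggregation. A sketch, assuming step_losses is a list holding one array of per-step losses per sequence; the variable names are illustrative.

    import numpy as np

    def loss_by_position(step_losses, max_len=60):
        # Average the loss of the t-th step across all sequences long enough to have one.
        totals, counts = np.zeros(max_len), np.zeros(max_len)
        for losses in step_losses:
            n = min(len(losses), max_len)
            totals[:n] += losses[:n]
            counts[:n] += 1
        return totals / np.maximum(counts, 1)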
An strking finding is that while initiallythe test error is much larger than the train error, the test error continues to improve for a long time,and approaches the training error very closely; in other words, no overtraining is observed even forrelatively long runs where the training performance appears to be nearly converged.6.4 T RANSFER LEARNINGThe first task considered here was to perform sequence learning: predicting step t+1 of a sequencegiven step t. To adequately perform this task, the network must learn to detect which digit it is beingfed; the initial part of a sequence representing a 2 or 3 for example is very similar, but as evidence isgrowing that the current sequence represents a 3, that information is vital in predicting how the strokewill continue.Given that the network is expected to have built up some representation of what digit it is reading, aninteresting test is to see whether it is able to switch to the task of sequence classification . The inputpresentation remains the same: at every time step, the recurrent neural network is fed one step ofthe sequence of pen movements representing the strokes of a digit. However, we now also read theoutput of the 10 binary class variable outputs. The target for these is a one-hot representation of thedigit, i.e. the target value for the output corresponding to the digit is one, and all nine other targetvalues are zero. To obtain the output, softmax is used, and the sequence classification loss LCfor theclassification outputs is the cross entropy, weighted by a factor = 10 :LC= 1NNXn=1[ynlog^yn+ (1yn)log(1^yn)]!In the following experiments, the loss consists of the sequence classification loss LC, to whichoptionally the earlier sequence prediction loss LPis added, regulated by a binary parameter :L=LC+LPThe network is asked for a prediction of the digit class after each step it receives. Clearly, accurateclassification is impossible during the first part of a sequence; before the first point is received, thesequence could represent any of the 10 digits with equal probability. As the sequence is received stepby step however, the network receives more information. The prediction produced after receiving theone-but-last step of the sequence, i.e. 
at the point where the network was previously asked to predictthe last step, is used as its final answer for predicting the digit class.We compare the following variants:12Under review as a conference paper at ICLR 20170e+00 2e+06 4e+06 6e+06 8e+06 1e+070 50 100 150Experiment 4: RNN on full MNIST Pen Stroke Sequence Data SetSequence−based batch sizeNumber of sequence steps processedRMSETest errorTraining error0.0e+00 5.0e+07 1.0e+08 1.5e+08 2.0e+08 2.5e+08 3.0e+081 2 3 4 5 6 7Experiment 4: RNN on full MNIST Pen Stroke Sequence Data SetSequence−based batch sizeNumber of sequence steps processedRMSETest errorTraining error1 2 3 4 5 6 7Figure 9: Performance on full MNIST Pen Stroke Sequence Data Set, zoomed to first part of the runand same experiment, results for the full run.13Under review as a conference paper at ICLR 2017Transfer learning: sequence classification and sequence predictionStarting from a trained sequence prediction model as obtained in Experiment 4, the earlierloss function is augmented with the sequence classification loss: L=LC+LPTransfer Learning: sequence classification onlyStarting from a trained sequence prediction model, the loss function is switched such that itonly reflects the classification performance, and no longer tracks the sequence predictionperformance:L=LCLearning from scratch, sequence classification and sequence predictionIn this variant, learning starts from scratch, and both classification loss and prediction lossare used, as in the first experiment: L=LC+LPLearning from scratch, sequence classification onlyL=LC0e+00 2e+07 4e+07 6e+07 8e+070.00.20.40.60.81.0Experiment 5: Transfer learningfrom sequence prediction to sequence classificationNumber of sequence steps processedFraction of correct predictionsTransfer Learning, classification onlyTransfer learning, classification and predictionLearn from scratch, classification onlyLearn from scratch, classification and prediction0.150.350.550.750.95Figure 10: Using the sequence prediction model as a starting point for sequence classification: startingfrom a trained sequence prediction network, the task is switched to predicting the class of the digit(red and black lines). A comparison with learning a digit classification model from scratch (blue andgreen lines) shows that the internal state built up to predict sequence steps is helpful in predicting theclass of the digit represented by the sequence.Figure 10 shows the results; indeed the network is able to build further on its ability to predict penstroke sequences, and learns the sequence classification task faster and more accurately than anidentical network that learns the sequence classification task from scratch; in this first and straight-forward transfer learning experiment based on the MNIST stroke sequence data set, a classificationaccuracy of 96.0% is reached1. We note that performance on the MNIST sequence data cannot becompared to results obtained with the original MNIST data set, as the information in the input data isvastly reduced. This result sets a first baseline for the MNIST stroke sequence data set; we expectthere is ample room for improvement. Simultaneously learning sequence prediction and sequenceclassification does not appear to provide an advantage, neither for transfer learning nor for learningfrom scratch.1This performance was reached after training for 7107sequence steps, i.e. 
roughly twice as long as the runshown in the chart14Under review as a conference paper at ICLR 20177 G ENERATIVE RESULTSTo gain insight into what the network has learned, in this section we report examples of output of thenetwork.7.1 D EVELOPMENT DURING TRAININGDuring training, the network receives each sequence step by step, and after each step, it outputs itsexpectation of the offset of the next point. In these figures and movies, we visualize the predictionsof the network for a given sequence at different stages of the training process. All results have beenobtained from a single run of Incremental Sequence Learning.After 80 batches After 140 batches After 530 batches After 570 batches After 650 batchesFigure 11: Movie showing what the network has learned over time. The movie shows the output forthree sequences of the test data at different stages during training. To view, click the image or visitthis link: https://edwin-de-jong.github.io/blog/isl/rnn-movies/generative-rnn-training-movie.gif .7.2 U NGUIDED OUTPUT GENERATION ,A.K.A.NEURAL NETWORK HALLUCINATIONAfter training, the trained network can be used to generate output independently. The guidance that ispresent during training in the form of receiving each next step of the sequence following a predictionis not available here. Instead, the output produced by the network is fed back into the network as itsnext input, see Figures 12 and 13. Figure 14 shows example results.Figure 12: Training: the target of a trainingstep is used as the next input.Figure 13: Generation: the output of the net-work is used as the next input.15Under review as a conference paper at ICLR 2017Output resembling a 2 Output resembling a 3 Output resembling a 4Figure 14: Unguided output of the network: after each step, the network’s output is fed back as thenext input. Clearly, the network has learned the ability to independently produce long sequencesrepresenting different digits that occurred in training data.7.3 S EQUENCE CLASSIFICATIONThe third analysis of the behavior the trained network is to view what happens during sequenceclassification. At each step of the sequence, we monitor the ten class outputs and visualize theiroutput. As more steps of the sequence are being received, the network receives more information,and adjusts its expectation of what digit class the sequence represents.●●MNIST stroke sequence test image 25●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●51015202530350 2 4 6 8Classification outputSequence stepDigit classClassification output for a se-quence representing a 0. Ini-tially, as the downward part ofthe curved stroke is being re-ceived, the network believesthe sequences represents a 4.After passing the lowest pointof the figure, it assigns higherlikelihood to a 6. Only at thevery end, just in time beforethe sequence ends, the predic-tion of the network switchesfor the last time, and a highprobability is assigned to thecorrect class.●●MNIST stroke sequence test image 18●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●10203040500 2 4 6 8Classification outputSequence stepDigit classClassification output for a se-quence representing a 3. Ini-tially, the networks estimatesthe sequence to represent a 7.Next, it expects a 2 is morelikely. 
After 20 points havebeen received, it concludes(correctly) that the sequencesrepresents a 3.●●MNIST stroke sequence test image 62●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●51015202530350 2 4 6 8Classification outputSequence stepDigit classClassification output for asequence representing a 9.While receiving the sequence,the dominant prediction of thenetwork is that the sequencerepresents a five; the openloop of the 9 and the straighttop line may contribute to this.When the last points are re-ceived, the network considera 9 to be more likely, but someambiguity remains.8 C ONCLUSIONSThere are many possible ways to apply the principles of incremental or curriculum learning tosequence learning, but so far a general understanding of which forms of curriculum sequence learninghave a positive effect is missing. We have investigated a particular approach to sequence learningwhere the training data is initially limited to the first few steps of each sequence. Gradually, as the16Under review as a conference paper at ICLR 2017network learns to predict the early parts of the sequences, the length of the part of the sequences usedfor training is increased. We name this approach Incremental Sequence Learning, and find that itstrongly improves sequence learning performance. Two other forms of curriculum sequence learningused for comparison did not display improvements compared to regular sequence learning. Theorigins of this performance improvement are analyzed in comparison experiments, as detailed below.A first observation was that with Incremental Sequence Learning, the time required to attain the besttest performance level of regular sequence learning was much lower; on average, the method reachedthis level twenty times faster, thus achieving a significant speedup and reduction of the computationalcost of sequence learning. More importantly, Incremental Sequence Learning was found to reducethe test error of regular sequence learning by 74%.To analyze the cause of the observed speedup and performance improvements, we first increasethe number of sequences per batch for Incremental Sequence Learning, so that all methods use thesame number of sequence steps per batch. This reduced the speedup, but the improvement of thegeneralization performance was maintained. We then replaced the RNN layers with feed forwardnetwork layers, so that the networks can no longer maintain information about the earlier part ofthe sequences. This completely removed the remaining advantage. This provides clear evidencethat the improvement in generalization performance is due to the specific ability of an RNN tobuild up internal representations of the sequences it receives, and that the ability to develop theserepresentations is aided by training on the early parts of sequences first.Next, we trained Incremental Sequence Learning on the full MNIST stroke sequence data set, andfound that the use of this larger training set further improves sequence prediction performance. Thetrained model was then used as a starting point for transfer learning, where the task was switchedfrom sequence prediction to sequence classification .We conclude that Incremental Sequence Learning provides a simple and easily applicable approachto sequence learning that was found to produce large improvements in both computation time andgeneralization performance. The dependency of later steps in a sequence on the preceding steps ischaracteristic of virtually all sequence learning problems. 
We therefore expect that this approach canyield improvements for sequence learning applications in general, and recommend its usage, giventhat exclusively positive results were obtained with the approach so far.9 R ESOURCESThe Tensorflow implementation that was used to perform these experiments is available here: https://github.com/edwin-de-jong/incremental-sequence-learningThe MNIST stroke sequence data set is available for download here: https://github.com/edwin-de-jong/mnist-digits-stroke-sequence-data/wiki/MNIST-digits-stroke-sequence-dataThe code for transforming the MNIST digit data set to a pen strokesequence data set has also been made available: https://github.com/edwin-de-jong/mnist-digits-as-stroke-sequences/wiki/MNIST-digits-as-stroke-sequences-(code)ACKNOWLEDGMENTSThe author would like to thank Max Welling, Dick de Ridder and Michiel de Jong for valuablecomments and suggestions on earlier versions.REFERENCESBengio, S., Vinyals, O., Jaitly, N., and Shazeer, N. (2015). Scheduled sampling for sequenceprediction with recurrent neural networks. In Proceedings of the 28th International Conference onNeural Information Processing Systems , NIPS’15, pages 1171–1179, Cambridge, MA, USA. MITPress.17Under review as a conference paper at ICLR 2017Bengio, Y ., Courville, A., and Vincent, P. (2013). Representation learning: A review and newperspectives. IEEE Trans. Pattern Anal. Mach. Intell. , 35(8):1798–1828.Bengio, Y ., Louradour, J., Collobert, R., and Weston, J. (2009). Curriculum learning. In Proceedingsof the 26th Annual International Conference on Machine Learning , ICML ’09, pages 41–48, NewYork, NY , USA. ACM.Bishop, C. (1994). Mixture density networks. Technical Report NCRG/94/0041, Aston University.Caruana, R. (1997). Multitask learning. Mach. Learn. , 28(1):41–75.Ciresan, D. C., Meier, U., and Schmidhuber, J. (2012). Multi-column deep neural networks for imageclassification. CoRR , abs/1202.2745.de Jong, E. D. and Oates, T. (2002). A coevolutionary approach to representation development.Proceedings of the ICML-2002 Workshop on Development of Representations , pages 1–6.Elman, J. L. (1991). Incremental learning, or the importance of starting small. crl technical report9101. Technical report, University of California, San Diego.Elman, J. L. (1993). Learning and development in neural networks: The importance of starting small.Cognition , 48:781–99.Giraud-Carrier, C. (2000). A note on the utility of incremental learning. AI Commun. , 13(4):215–223.Graves, A. (2013). Generating sequences with recurrent neural networks. CoRR , abs/1308.0850.He, K., Zhang, X., Ren, S., and Sun, J. (2015). Deep residual learning for image recognition. CoRR ,abs/1512.03385.Hinton, G. E. and Nair, V . (2005). Inferring motor programs from images of handwritten digits. InAdvances in Neural Information Processing Systems 18 [Neural Information Processing Systems,NIPS 2005, December 5-8, 2005, Vancouver, British Columbia, Canada] , pages 515–522.Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation , 9(8):1735–1780.Keskar, N. S., Mudigere, D., Nocedal, J., Smelyanskiy, M., and Tang, P. T. P. (2016). On large-batchtraining for deep learning: Generalization gap and sharp minima. CoRR , abs/1609.04836.LeCun, Y . and Cortes, C. (2010). MNIST handwritten digit database.Lipton, Z. C. (2015). A critical review of recurrent neural networks for sequence learning. CoRR ,abs/1506.00019.Moriarty, D. E. (1997). Symbiotic Evolution Of Neural Networks In Sequential Decision Tasks . 
PhDthesis, Department of Computer Sciences, The University of Texas at Austin. Technical ReportUT-AI97-257.Parisotto, E., Ba, L. J., and Salakhutdinov, R. (2015). Actor-mimic: Deep multitask and transferreinforcement learning. CoRR , abs/1511.06342.Pratt, L. Y . (1993). Discriminability-based transfer between neural networks. In Advances in NeuralInformation Processing Systems 5, [NIPS Conference] , pages 204–211, San Francisco, CA, USA.Morgan Kaufmann Publishers Inc.Rusu, A. A., Vecerik, M., Rothörl, T., Heess, N., Pascanu, R., and Hadsell, R. (2016). Sim-to-realrobot learning from pixels with progressive nets. arxiv:1610.04286 [cs.ro]. Technical report, DeepMind.Schlimmer, J. C. and Granger, R. H. (1986). Incremental learning from noisy data. Machine Learning ,1(3):317–354.Schmidt, M., Murphy, K., Fung, G., and Rosales, R. (2008). Structure learning in random fields forheart motion abnormality detection. In In CVPR .18Under review as a conference paper at ICLR 2017Sun, R. and Giles, C. L. (2001). Sequence learning: from recognition and prediction to sequentialdecision making. IEEE Intelligent Systems , 16(4):67–70.Thrun, S. (1996). Is learning the n-th thing any easier than learning the first. In Advances in NeuralInformation Processing Systems , volume 8, pages 640–646.Zaremba, W. and Sutskever, I. (2014). Learning to execute. CoRR , abs/1410.4615.Zhang, T. Y . and Suen, C. Y . (1984). A fast parallel algorithm for thinning digital patterns. Commun.ACM , 27(3):236–239.19
SJgsIjLEl
rJg_1L5gg
ICLR.cc/2017/conference/-/paper278/official/review
{"title": "Really long paper with not a lot of impact", "rating": "3: Clear rejection", "review": "First up, I want to point out that this paper is really long. Like 17 pages long -- without any supplementary material. While ICLR does not have an official page limit, it would be nice if authors put themselves in the reviewer's shoes and did not take undue advantage of this rule. Having 1 or 2 pages in addition to the conventional 8 page limit is ok, but more than doubling the pages is quite unfair. \n\nNow for the review: The paper proposes a new artificial dataset for sequence learning. I call it artificial because it was artificially generated from the original MNIST dataset which is a smallish dataset of real images of handwritten digits. In addition to the dataset, the authors propose to train recurrent networks using a schedule over the length of the sequence, which they call \"incremental learning\". The experiments show that their proposed schedule is better than not having any schedule on this data set. Furthermore, they also show that their proposed schedule is better than a few other intuitive schedules. The authors verify this by doing some ablation studies over the model on the proposed dataset. \n\nI have following issues with this paper: \n\n-- I did not find anything novel in this paper. The proposed incremental learning schedule is nothing new and is a natural thing to try when learning sequences. Similar idea have already been tried by a number of authors, including Bengio 2015, and Ranzato 2015. The only new piece of work is the ablation studies which the authors conduct to tease out and verify that indeed the improvement in performance is due to the curriculum used. \n\n-- Furthermore, the authors only test their hypothesis on a single dataset which they propose and is artificially generated. Why not use it on a real sequential dataset, such as, language modeling. Does the technique not work in that scenario? In fact I am quite positive that for language modeling where the vocabulary size is huge, the performance gains will be no where close to the 74% reported in the paper.\n\n-- I'm not convinced about the value of having this artificial dataset. Already there are so many real world sequential dataset available, including in text, speech, finance and other areas. What exactly does this dataset bring to the table is not super clear to me. While having another dataset may not be a bad thing in itself, I almost felt that this dataset was created for the sole purpose of making the proposed ideas work. It would have been so much better had the authors shown experiments on other datasets. \n\n-- As I said, the paper is way too long. A significant part of the length of the paper is due to a collection of experiments which are completely un-related to the main message of the paper. For instance, the experiment in Section 6.2 is completely unrelated to the story of the paper. Same is true with the transfer learning experiments of Section 6.4.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Incremental Sequence Learning
["Edwin D. de Jong"]
Deep learning research over the past years has shown that by increasing the scope or difficulty of the learning problem over time, increasingly complex learning problems can be addressed. We study incremental learning in the context of sequence learning, using generative RNNs in the form of multi-layer recurrent Mixture Density Networks. While the potential of incremental or curriculum learning to enhance learning is known, indiscriminate application of the principle does not necessarily lead to improvement, and it is essential therefore to know which forms of incremental or curriculum learning have a positive effect. This research contributes to that aim by comparing three instantiations of incremental or curriculum learning. We introduce Incremental Sequence Learning, a simple incremental approach to sequence learning. Incremental Sequence Learning starts out by using only the first few steps of each sequence as training data. Each time a performance criterion has been reached, the length of the parts of the sequences used for training is increased. We introduce and make available a novel sequence learning task and data set: predicting and classifying MNIST pen stroke sequences. We find that Incremental Sequence Learning greatly speeds up sequence learning and reaches the best test performance level of regular sequence learning 20 times faster, reduces the test error by 74%, and in general performs more robustly; it displays lower variance and achieves sustained progress after all three comparison methods have stopped improving. The other instantiations of curriculum learning do not result in any noticeable improvement. A trained sequence prediction model is also used in transfer learning to the task of sequence classification, where it is found that transfer learning realizes improved classification performance compared to methods that learn to classify from scratch.
["Deep learning", "Supervised Learning"]
https://openreview.net/forum?id=rJg_1L5gg
https://openreview.net/pdf?id=rJg_1L5gg
https://openreview.net/forum?id=rJg_1L5gg&noteId=SJgsIjLEl
Under review as a conference paper at ICLR 2017INCREMENTAL SEQUENCE LEARNINGEdwin D. de JongDepartment of Information and Computing SciencesUtrecht Universityhttps://edwin-de-jong.github.io/ABSTRACTDeep learning research over the past years has shown that by increasing the scopeor difficulty of the learning problem over time, increasingly complex learningproblems can be addressed. We study incremental learning in the context ofsequence learning, using generative RNNs in the form of multi-layer recurrentMixture Density Networks. While the potential of incremental or curriculumlearning to enhance learning is known, indiscriminate application of the principledoes not necessarily lead to improvement, and it is essential therefore to knowwhich forms of incremental or curriculum learning have a positive effect. Thisresearch contributes to that aim by comparing three instantiations of incremental orcurriculum learning.We introduce Incremental Sequence Learning , a simple incremental approach tosequence learning.Incremental Sequence Learning starts out by using only the first few steps of eachsequence as training data. Each time a performance criterion has been reached, thelength of the parts of the sequences used for training is increased.To evaluate Incremental Sequence Learning and comparison methods, we introduceand make available a novel sequence learning task and data set: predicting andclassifying MNIST pen stroke sequences, where the familiar handwritten digitimages have been transformed to pen stroke sequences representing the skeletonsof the digits.We find that Incremental Sequence Learning greatly speeds up sequence learningand reaches the best test performance level of regular sequence learning 20 timesfaster, reduces the test error by 74%, and in general performs more robustly; itdisplays lower variance and achieves sustained progress after all three comparisonmethods have stopped improving. The two other instantiations of curriculumlearning do not result in any noticeable improvement. A trained sequence predictionmodel is also used in transfer learning to the task of sequence classification, whereit is found that transfer learning realizes improved classification performancecompared to methods that learn to classify from scratch.1 I NTRODUCTION1.1 I NCREMENTAL LEARNING , TRANSFER LEARNING ,AND REPRESENTATION LEARNINGDeep learning research over the past years has shown that by increasing the scope or difficulty of thelearning problem over time, increasingly complex learning problems can be addressed. This principlehas been described as Incremental learning by Elman (1991), and has a long history. Schlimmer andGranger (1986) described a pseudo-connectionist distributed concept learning approach involvingincremental learning. Elman (1991) defined Incremental Learning as an approach where the trainingdata is not presented all at once, but incrementally; see also Elman (1993). Giraud-Carrier (2000)defines Incremental Learning as follows: “A learning task is incremental if the training examples usedto solve it become available over time, usually one at a time.“ Bengio et al. (2009) introduced theframework of Curriculum Learning. The central idea behind this approach is that a learning system isguided by presenting gradually more and/or more complex concepts. 
A formal definition is providedspecifying that the distribution over examples converges monotonically towards the target training1Under review as a conference paper at ICLR 2017distribution, and that the entropy of the distributions visited over time, and hence the diversity oftraining examples, increases.An extension of the notion of incremental learning is to also let the learning task vary over time.This approach, known as Transfer Learning or Inductive Transfer, was first described by Pratt (1993).Thrun (1996) reported improved generalization performance for lifelong learning and describedrepresentation learning , whereas Caruana (1997) considered a Multitask learning setup wheretasks are learned in parallel while using a shared representation. In coevolutionary algorithms, thecoevolution of representations with solutions that employ them, see e.g. Moriarty (1997); de Jongand Oates (2002), provides another approach to representation learning. Representation learning canbe seen as a special form of transfer learning, where one goal is to learn adequate representations,and the other goal, addressed in parallel or sequentially, is to use these representations to address thelearning problem.Several of the recent successes of deep learning can be attributed to representation learning andincremental learning. Bengio et al. (2013) provide a review and insightful discussion of representationlearning. Parisotto et al. (2015) report experiments with transfer learning across Atari 2600 arcadegames where up to 5 million frames of training time in each game are saved. More recently, successfultransfer of robot learning from the virtual to the real world was achieved using transfer learning, seeRusu et al. (2016). And at the annual ImageNet Large-Scale Visual Recognition Challenge (ILSVRC),the depth of networks has steadily increased over the years, so far leading up to a network of 152layers for the winning entry in the ILSVRC 2015 classification task; see He et al. (2015).1.2 S EQUENCE LEARNINGWe study incremental learning in the context of sequence learning . The aim in sequence learningis to predict, given a step of the sequence, what the next step will be. By iteratively feeding thepredicted output back into the network as the next input, the network can be used to produce acomplete sequences of variable length. For a discussion of variants of sequence learning problems,see Sun and Giles (2001); a more recent treatment covering recurrent neural networks as used here isprovided by Lipton (2015).An interesting challenge in sequence learning is that for most sequence learning problems of interest,the next step in a sequence does not follow unambiguously from the previous step. If this werethe case, i.e. if the underlying process generating the sequences satisfies the Markov property, thelearning problem would be reduced to learning a mapping from each step to the next. Instead, stepsin the sequence may depend on some or all of the preceding steps in the sequence. Therefore, a mainchallenge faced by a sequence learning model is to capture relevant information from the part ofthe sequence seen so far. 
This ability to capture relevant information about future sequences it mayreceive must be developed during training; the network must learn the ability to build up internalrepresentations which encode relevant aspects of the sequence that is received.1.3 I NCREMENTAL SEQUENCE LEARNINGThe dependency on the partial sequence received so far provides a special opportunity for incrementallearning that is specific to sequence learning. Whereas the examples in a supervised learning problembear no known relation to each other, the steps in a sequence have a very specific relation; later stepsin the sequence can only be learned well once the network has learned to develop the appropriateinternal state summarizing the part of the sequence seen so far. This observation leads to the idea thatsequence learning may be expedited by learning to predict the first few steps in each sequence firstand, once reasonable performance has been achieved and (hence) a suitable internal representation ofthe initial part of the sequences has been developed, gradually increasing the length of the partialsequences used for training.Aprefix of a sequence is a consecutive subsequence (a substring) of the sequence starting fromthe first element; e.g. the prefix S3of a sequence Sconsists of the first 3 steps of S. We defineIncremental Sequence Learning as an approach to sequence learning whereby learning starts out byusing only a short prefix of each sequence for training, and where the length of the prefixes used fortraining is gradually increased, up to the point where the complete sequences are used. The structureof sequence learning problems suggests that adequate modeling of the preceding part of the sequence2Under review as a conference paper at ICLR 2017is a requirement for learning later parts of the sequence; Incremental Sequence Learning draws theconsequence of this by learning to predict the earlier parts of the sequences first.1.4 R ELATED WORKIn presenting the framework of Curriculum Learning, Bengio et al. (2009) provide an examplewithin the domain of sequence learning, more specifically concerning language modeling. There, thevocabulary used for training on word sequences is gradually increased, i.e. the subset of sequencesused for training is gradually increased; this is analogous to one of the comparison methods usedhere. Another specialization of Curriculum Learning to the context of sequence learning describedby Bengio et al. (2015) addresses the discrepancy between training , where the true previous stepis presented as input, and inference , where the previous output from the network is used as input;with scheduled sampling , the probability of using the network output as input is adapted to graduallyincrease over time. Zaremba and Sutskever (2014) apply curriculum learning in a sequence-to-sequence learning context where a neural network learns to predict the outcome of Python programs.The generation of programs forming the training data is parameterized by two factors that control thecomplexity of the programs: the number of digits of the numbers used in the programs and the degreeof nesting. While a number of different instantiations of incremental or curriculum learning havebeen described in the context of sequence learning, no clear guidance is available on which forms areeffective. 
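As a concrete illustration of the prefix-based training data defined in Section 1.3, the following minimal Python sketch restricts each training sequence to its first k steps; the function and variable names are illustrative and are not taken from the author's implementation.

```python
# Minimal sketch of prefix-based training data, assuming each sequence is a
# list of per-step tuples. Names are illustrative, not from the paper's code.

def prefix(sequence, k):
    """Return the prefix S_k: the first k steps of the sequence."""
    return sequence[:k]

def training_view(sequences, prefix_length):
    """Restrict every training sequence to its first prefix_length steps.

    Sequences shorter than the prefix length are kept whole, so the view
    converges to the full training set once the length exceeds the longest
    sequence.
    """
    return [prefix(s, prefix_length) for s in sequences]

# Example: with prefix_length = 2, after feeding step 1 the network is only
# asked to predict step 2 of each sequence.
toy_sequences = [[(6, 4), (1, -1), (1, 0)], [(3, 2), (0, 1)]]
print(training_view(toy_sequences, 2))  # [[(6, 4), (1, -1)], [(3, 2), (0, 1)]]
```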
The particular form explored here of learning to predict the earlier parts of sequences firstis straightforward, it makes use of the particular structure of sequence learning problems, and it iseasy to implement; yet it has received very limited attention so far.2 MNIST H ANDWRITTEN DIGITS AS PENSTROKE SEQUENCES2.1 M OTIVATION FOR REPRESENTING DIGITS AS PEN STROKE SEQUENCESThe classification of MNIST digit images, see LeCun and Cortes (2010), is one example of a task onwhich the success of deep learning has been demonstrated convincingly; a test error rate of 0.23% wasobtained by Ciresan et al. (2012) using Multi-column Deep Neural Networks. To obtain a sequencelearning data set for evaluating Incremental Sequence Learning, we created a variant of the familiarMNIST handwritten digit data set provided by LeCun and Cortes (2010) where each digit image istransformed into a sequence of pen strokes that could have generated the digit.One motivation for representing digits as strokes is the notion that when humans try to discern digitsor letters that are difficult to read, it appears natural to trace the line so as to reconstruct what paththe author’s pen may have taken. Indeed, Hinton and Nair (2005) note that the idea that patterns canbe recognized by figuring out how they were generated was already introduced in the 1950’s, anddescribe a generative model for handwritten digits that uses two pairs of opposing springs whosestiffnesses are controlled by a motor program.Pen stroke sequences also form a natural and efficient representation for digits; handwriting constitutesa canonical manifestation of the manifold hypothesis, according to which “real-world data presentedin high dimensional spaces are expected to concentrate in the vicinity of a manifold Mof muchlower dimensionality dM, embedded in high dimensional input space Rdx”; see Bengio et al. (2013).Specifically: (i) the vast majority of the pixels are white, (ii) almost all digit images consist of a singleconnected set of pixels, and (iii) the shapes mostly consist of smooth curved lines. This suggests thatcollections of pen strokes form a natural representation for the purpose of recognizing digits.The relevance of the manifold hypothesis can also be appreciated by considering the space of all 2-D28x28 binary pixel images; when sampling uniformly from this space, one is likely to only encounterimages resembling TV noise, and the chances of observing any of the 70000 MNIST digit imagesis astronomically small. By contrast, a randomly generated pen stroke sequence is not unlikely toresemble a part of a digit, such as a short straight or curved line segment. This increased alignment ofthe digit data with its representation in the form of pen stroke sequences implies that the amount ofcomputation required to address the learning problem can potentially be vastly reduced.3Under review as a conference paper at ICLR 20172.2 C ONSTRUCTION OF THE PEN STROKE SEQUENCE DATA SETThe MNIST handwritten digit data set consists of 60000 training images and 10000 test images, eachforming 28 x 28 bit map images of written numerical digits from 0 to 9. The digits are transformedinto one or more pen strokes, each consisting of a sequence of pen offset pairs (dx;dy ). To extractthe pen stroke sequences, the following steps are performed:1.Incremental thesholding. 
Starting from the original MNIST grayscale image, the followingcharacteristics are measured:The number of nonzero pixelsThe number of connected components, for both the 4-connected and 8-connectedvariants.Starting from a thresholding level of zero, the thresholding level is increased stepwise,until either (A) the number of 4-connected or 8-connected components changes, (B) thenumber of remaining pixels drops below 50% of the original number, or (C) the thresholdinglevel reaches a preselected maximum level (250). When any of these conditions occur,the previous level (i.e. the highest thresholding level for which none of these conditionsoccurred) is selected.2. A common method for image thinning, described by Zhang and Suen (1984), is applied.3.After the thresholding and thinning steps, the result is a skeleton of the original digit imagethat mostly consists of single-pixel-width lines.4.Finding a pen stroke sequence that could have produced the digit skeleton can be viewedas a Traveling Salesman Problem where, starting from the origin, all points of the digitskeleton are visited. Each point is represented by the pen offset (dx;dy )from the previousto the current point. For any transition to a non-neighboring pixel (based on 8-connecteddistance), an extra step is inserted with ( dx,dy) = (0, 0) and with eos = 1 (end-of-stroke), toindicate that the current stroke has ended and the pen is to be lifted off the paper. At theend of each sequence, a final step with values (0, 0, 1, 1) is appended. The fourth valuerepresents eod, end-of-digit. This final tuple of the sequence marks that both the currentstroke and the current sequence have ended, and forms a signal that the next input presentedto the network will belong to another digit.Figure 1: The original image (top left), thresholded image, thinned image, and actual extracted penstroke image.4Under review as a conference paper at ICLR 2017(6, 4)(1, -1)(1, 0) (1, 0)(0, 1)(1, 1)(-1, 1)(1, 1)(1, 1)(0, 1)(0, 1)(-1, 1)(-1, 1)(-1, 0) (-1, 0)(-1, -1)Figure 2: Example of a pen stroke image.dxdy eos eod6 4 0 01 -1 0 01 0 0 01 0 0 01 1 0 00 1 0 0-1 1 0 01 1 0 01 1 0 00 1 0 00 1 0 0-1 1 0 0-1 1 0 0-1 0 0 0-1 0 0 0-1 -1 0 00 0 1 1Table 1: Corresponding sequence. The origin is atthe top left, and the positive vertical direction isdownward. From the origin to the first point, thefirst offset is 6 steps to the right and 4 down: (6,4). Then to the second point: 1 to the right and 1up, (1, -1); etc.It is important to note that the thinning operation discards pixels and therefore information; thisimplies that the sequence learning problem constructed here should be viewed as a new learningproblem, i.e. performance on this new task cannot be directly compared to results on the originalMNIST classification task. While for many images the thinned skeleton is an adequate representationthat retains original shape, in other cases relevant information is lost as part of the thinning process.Distribution of Sequence LengthsSequence lengthFrequency0 20 40 60 80 10005001500 2500Figure 3: Distribution of sequence lengths. The average sequence length is approximately 40 steps.3 N ETWORK ARCHITECTUREWe adopt the approach to generative neural networks described by Graves (2013) which makes use ofmixture density networks , introduced by Bishop (1994). 
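Before turning to the network, the encoding step of Section 2.2 can be sketched as follows. The sketch assumes the thresholded and thinned skeleton is already available as a set of (x, y) pixel coordinates, and it uses a simple greedy nearest-neighbour traversal purely to illustrate the ordering step; the heuristic actually used to build the data set may differ.

```python
# Sketch of the pen-stroke encoding: visit all skeleton points starting from
# the origin, lift the pen (eos = 1) before any jump to a non-neighbouring
# pixel, and end with (0, 0, 1, 1). The greedy ordering below is illustrative.
import math

def encode_strokes(skeleton_points):
    """Turn skeleton pixels into a list of (dx, dy, eos, eod) tuples."""
    remaining = list(skeleton_points)
    sequence = []
    current = (0, 0)  # the pen starts at the origin (top-left corner)
    while remaining:
        # Greedily pick the nearest unvisited skeleton point.
        nxt = min(remaining,
                  key=lambda p: math.hypot(p[0] - current[0], p[1] - current[1]))
        remaining.remove(nxt)
        dx, dy = nxt[0] - current[0], nxt[1] - current[1]
        if sequence and max(abs(dx), abs(dy)) > 1:
            sequence.append((0, 0, 1, 0))  # end-of-stroke before the jump
        sequence.append((dx, dy, 0, 0))
        current = nxt
    sequence.append((0, 0, 1, 1))  # end-of-stroke and end-of-digit
    return sequence

# Example: a short stroke starting at pixel (6, 4), as in Table 1.
print(encode_strokes([(6, 4), (7, 3), (8, 3)]))
# [(6, 4, 0, 0), (1, -1, 0, 0), (1, 0, 0, 0), (0, 0, 1, 1)]
```

Applied to a full skeleton such as the one in Figure 2, an encoding of this kind yields a table of the form shown in Table 1, with one (dx, dy, eos, eod) row per step.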
One sequence corresponds to one complete image of a digit skeleton, represented as a sequence of ⟨dx, dy, eos, eod⟩ tuples, and may contain one or more strokes; see the previous section.

The network has four input units, corresponding to these four input variables. To produce the input for the network, the (dx, dy) pairs are scaled to yield two real-valued input variables dx and dy. The variables indicating the end-of-stroke (EOS) and end-of-digit (EOD) are binary inputs. Two hidden LSTM layers, see Hochreiter and Schmidhuber (1997), of 200 units each are used.

Figure 4: Network architecture; see text.

The input units receive one step of a sequence at a time, starting with the first step. The goal for the output units is to predict the immediate next step in the sequence, but rather than trying to directly predict dx and dy, the output units represent a mixture of bivariate Gaussians. The output layer consists of the end-of-stroke signal (EOS), and a set of means μ_i, standard deviations σ_i, correlations ρ_i, and mixture weights π_i for each of the M mixture components, where the number of mixture components M = 17 was found empirically to yield good results and is used in the experiments presented here. Additionally, a binary indicator signaling the end of digit (EOD) is used, to mark the end of each sequence. In addition to these output elements for predicting the pen stroke sequences, 10 binary class variable outputs are added, representing the 10 digit classes. This facilitates switching the task from sequence prediction to sequence classification, as will be discussed later; the output of these units is ignored in the sequence prediction experiments. The number of output units depends on the number of mixture components used, and equals 6M + 2 + 10 = 114.

For regularization, we found in early experiments that using the maximum weight as a regularization term produced better results than using the more common L-2 regularization. This approach can be viewed as L-∞-norm regularization, and has been used previously in the context of regularization, see e.g. Schmidt et al. (2008).

The definition of the sequence prediction loss L_P follows Graves (2013), with the difference that terms for the eod and for the L-∞ loss are included:

L(x) = \sum_{t=1}^{T} \Bigg[ -\log\Big( \sum_{j} \pi_j^t \, \mathcal{N}\big(x_{t+1} \mid \mu_j^t, \sigma_j^t, \rho_j^t\big) \Big)
  + \begin{cases} -\log eos_t & \text{if } (x_{t+1})_3 = 1 \\ -\log(1 - eos_t) & \text{otherwise} \end{cases}
  + \begin{cases} -\log eod_t & \text{if } (x_{t+1})_4 = 1 \\ -\log(1 - eod_t) & \text{otherwise} \end{cases} \Bigg]
  + \lambda \, \|w\|_{\infty}

4 INCREMENTAL SEQUENCE LEARNING AND COMPARISON METHODS

Below we describe Incremental Sequence Learning and three comparison methods, where two of the comparison methods are other instantiations of curriculum learning, and the third comparison is regular sequence learning without a curriculum learning aspect.
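Before describing the individual methods, the per-step term of the prediction loss defined above can be written out as a small numpy sketch; the variable names are illustrative, and the actual TensorFlow implementation referenced in Section 9 may be organized differently.

```python
# Sketch of the per-step sequence prediction loss, assuming the network
# outputs have already been split into mixture weights pi (summing to 1),
# means mu, standard deviations sigma, correlations rho, and Bernoulli
# probabilities p_eos and p_eod. Purely illustrative.
import numpy as np

def bivariate_normal(x, y, mu, sigma, rho):
    """Density of a bivariate Gaussian with correlation rho at (x, y)."""
    zx = (x - mu[0]) / sigma[0]
    zy = (y - mu[1]) / sigma[1]
    z = zx**2 + zy**2 - 2.0 * rho * zx * zy
    denom = 2.0 * np.pi * sigma[0] * sigma[1] * np.sqrt(1.0 - rho**2)
    return np.exp(-z / (2.0 * (1.0 - rho**2))) / denom

def step_loss(target, pi, mu, sigma, rho, p_eos, p_eod):
    """Negative log-likelihood of one target step (dx, dy, eos, eod)."""
    dx, dy, eos, eod = target
    mixture = sum(pi[j] * bivariate_normal(dx, dy, mu[j], sigma[j], rho[j])
                  for j in range(len(pi)))
    loss = -np.log(mixture)
    loss += -np.log(p_eos) if eos == 1 else -np.log(1.0 - p_eos)
    loss += -np.log(p_eod) if eod == 1 else -np.log(1.0 - p_eod)
    return loss
```

The full loss sums these per-step terms over all steps of a sequence and adds the λ‖w‖∞ regularization term.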
Forpredicting the 17thstep for example, the available input consist of the previous 16 steps,and the network must learn to construct a compact representation of the preceding steps thathave been seen. More specifically, it must be able to distinguish between subspaces of thesequence space that correspond to different distributions for the next step in the sequence.The number of possible contexts grows exponentially with the position in the sequence, andthe task of summarizing the preceding sequence therefore potentially becomes more difficultas a function of the position within the sequence. The problem of learning to predict stepslater on in the sequence is therefore potentially much harder than learning to predict theearlier steps. In Incremental Sequence Learning therefore, the length of sequences presentedto the network is increased as learning progresses.Increasing training set sizeBengio et al. (2009) describe an application of curriculum learning to sequence learning,where the task is to predict the best word which can follow a given context of words in acorrect English sentence. The curriculum strategy used there is to grow the vocabulary size.Transferring this to the context of pen stroke sequence generation, the most straightforwardtranslation is to use subsets of the training data that grow in size, where the order of examplesthat are added to the training set is random.Increasing number of classesThe network is first presented with sequences from only one digit class; e.g. all zeros . Thenumber of classes is increased until all 10 digit classes are represented in the training data.All three curriculum learning methods employ a threshold criterion based on the training RMSE;once a specified level of the RMSE has been reached, the set of training examples (determined by thenumber of sequence steps used, the number of sequences used, or the number of digits) is increased.We note that many possible variants of this simple adaptive scheme are possible, some of which mayprovide improvements of the results.5 E XPERIMENTAL SETTINGSIn this section, we describe the experimental setup in detail.The configuration of the baseline method, regular sequence learning, is as follows. The number ofmixture components M= 17, two hidden layers of size 200 are used. A batch size of 50 sequencesper batch is used in these first experiments. The learning rate is = 0:0025 , with a decay rate of0.99995 per epoch. The order of training sequences (not steps within the sequences) is randomized.The weight of the regularization component = 0:25. In these first experiments, a subset of 10 000training sequences and 5 000 test sequences is used. The error measure in these figures is the RMSEof the pen offsets (unscaled) predicted by the network given the previous pen movement.The RMSE is calculated based on the difference between the predicted and actual (dx;dy )pairs,scaled back to their original range of pixel units, so as to obtain an interpretable error; the eosandeodcomponents of the error, which do form part of the loss, are not used in this error measure. For themethod where the sequence length is varied, the number of individual points (input-target pairs) thatmust be processed per sequence varies over the course of a run. 
The number of sequences processed(or collections thereof such as batches or epochs) is therefore no longer an adequate measure ofcomputational expense; performance is therefore reported as a function of the number of pointsprocessed.Details per method:7Under review as a conference paper at ICLR 2017Incremental Sequence LearningThe initial sequence length is 2, meaning that the first two points of each sequence are used,i.e. after feeding the first point as input, the second point is to be predicted. Once the trainingRMSE drops below the threshold value of 4, the length is doubled, up to the point where itreaches the maximum sequence length.Increasing training set sizeThe initial training set size is 10. Each time the RMSE threshold of 4 is reached, this amountis doubled, up to the point where the complete set of training sequences is used.Increasing number of digit classesThe initial number of classes is 1, meaning that only sequences representing the first digit(zero) are used. Each time the RMSE threshold of 4 is reached, this amount is doubled, upto the point where all 10 digit classes are used.6 E XPERIMENTAL RESULTS6.1 S EQUENCE PREDICTION : COMPARISON OF THE METHODSFigures 5 shows a comparison of the results of the four methods. The baseline method (in red) doesnot use curriculum learning, and is presented with the entire training set from the start. IncrementalSequence Learning (in green) performs markedly better than all comparison methods. It reaches thebest test performance of the baseline methods twenty times faster ; see the horizontal dotted black line.Moreover, Incremental Sequence Learning greatly improves generalization; on this subset of the data,the average test performance over 10 runs reaches 1.5 for Incremental Sequence Learning vs 3.9 forregular sequence learning, representing a reduction of the error of 74%.0e+00 2e+06 4e+06 6e+06 8e+06 1e+070 10 20 30 40 50 60Experiment 1: RNN, sequence−based batch sizeTest error, average of 10 runsNumber of sequence steps processedRMSERegular sequence learningIncremental sequence learningIncremental number of classesIncremental number of sequencesBest test performance forregular sequence learning0.0 0.2 0.4 0.6 0.8 1.0Figure 5: Comparison of the test error of the four methods, averaged over ten runs. The dotted linesindicate, at each point in time, which fraction of the training data has been made available at thatpoint for the method of the corresponding color.8Under review as a conference paper at ICLR 2017We furthermore note that the variance of the test error is substantially lower than for each of the othermethods, as seen in the performance graphs; and where the three comparison methods reach theirbest test error just before 4106processed sequence steps and then begin to deteriorate, the test errorfor incremental sequence learning continues to steadily decrease over the course of the run.Method Test set errorRegular sequence learning 7.82Incremental sequence learning 2.06Incremental number of classes 7.64Incremental number of sequences 6.27Table 2: Best value for the average over 10 runs of the test set error obtained by each of the methodsin Experiment 1. 
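The threshold-based doubling schedule shared by the curriculum variants, as listed under "Details per method" above, can be sketched as follows; the training hooks are hypothetical stand-ins rather than the author's code.

```python
# Sketch of the threshold-based doubling schedule: Incremental Sequence
# Learning doubles the prefix length whenever the training RMSE drops below
# the threshold; the other curriculum variants double the number of training
# sequences or digit classes in the same way. `train_one_batch` and
# `training_rmse` are hypothetical hooks standing in for the training loop.

RMSE_THRESHOLD = 4.0

def incremental_sequence_learning(train_one_batch, training_rmse,
                                  max_length, n_batches):
    prefix_length = 2  # start with the first two points of each sequence
    for _ in range(n_batches):
        train_one_batch(prefix_length)  # train only on sequence prefixes
        if (training_rmse() < RMSE_THRESHOLD and
                prefix_length < max_length):
            prefix_length = min(2 * prefix_length, max_length)
    return prefix_length

# Toy usage with dummy hooks that pretend the criterion is met every batch,
# just to show how the prefix length grows: 2, 4, 8, 16, 32, 40.
if __name__ == "__main__":
    final = incremental_sequence_learning(
        train_one_batch=lambda length: None,
        training_rmse=lambda: 3.5,
        max_length=40, n_batches=6)
    print(final)  # 40
```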
Incremental Sequence Learning achieves a reduction of 74% compared to regularsequence learning.The two other curriculum methods do not provide any speedup or advantage compared to the baselinemethod, and in fact result in a higher test error; indiscriminate application of the curriculum learningprinciple apparently does not guarantee improved results and it is important therefore to discoverwhich forms of curriculum learning can confer an advantage.To explain the dramatic improvement achieved by Incremental Sequence Learning, we consider twopossible hypotheses:H1: The number of sequences per batch is fixed (50), but the number of sequence steps or pointsvaries, and is initially much smaller (2) for Incremental Sequence Learning. Thus, when measuredin terms of the number of points that are being processed, the batch size for Incremental SequenceLearning is initially much smaller than for the remaining methods, and it increases adaptively overtime. HypothesisH1therefore is that (A) the smaller batch size improves performance, see Keskaret al. (2016) for earlier findings in this direction, and/or (B) the adaptive batch size aspect has apositive effect on performance.H2: Effectively learning later parts of the sequence requires an adequate internal representation ofthe preceding part of the sequence, which must be learned first; this formed the motivation for theIncremental Sequence Learning method.To test the first hypothesis, H1, we design a second experiment where the batch size is no longerdefined in terms of the number of sequences, but in terms of the number of points or sequence steps,where the number of points is chosen such that the expected total number of points for the baselinemethod remains the same. Thus, whereas a batch for regular sequence learning contains 50 sequencesof length 40 on average yielding 2000 points, Incremental Sequence Learning will start out withbatches containing 1000 sequences of 2 points each, yielding the same total number of points.Figure 6 shows the results. This change reduces the speedup during the earlier part of the runs, andthus partially explains the improvements observed with Incremental Sequence Learning. However,part of the speedup is still present, and moreover the three other observed improvements remain:Incremental Sequence Learning still features strongly improved generalization performanceIncremental Sequence Learning still has a much lower variance of the test errorIncremental Sequence Learning still continues improving at the point where the test perfor-mance of all other methods starts deterioratingIn summary, the adaptive and initially smaller batch size of Incremental Sequence Learning explainspart of the observed improvements, but not all. We therefore test to what extent hypothesis H2playsa role. To see whether the ability to first learn a suitable representation based on the earlier parts ofthe sequences plays a role, we compare the situation where this effect is ruled out. A straightforwardway to achieve this is to use Feed-Forward Neural Networks (FFNNs); whereas Recurrent NeuralNetworks (RNNs) are able to learn such a representation by learning to build up relevant internalstate, FFNNs lack this ability. Therefore if any advantage of Incremental Sequence Learning is seenwhen using FFNNs, it cannot be due to hypothesis H2. 
Conversely, if using FFNNs removes theadvantage, the advantage must have be due to the difference between FFNNs and RNNs, whichexactly corresponds to the ability to build up an informative internal representation, i.e. H2. Since9Under review as a conference paper at ICLR 20170e+00 2e+06 4e+06 6e+06 8e+06 1e+070 10 20 30 40 50 60Experiment 2: RNN, point−based batch sizeTest error, average of 10 runsNumber of sequence steps processedRMSERegular sequence learningIncremental sequence learningIncremental number of classesIncremental number of sequencesBest test performance forregular sequence learning0.0 0.2 0.4 0.6 0.8 1.0Figure 6: Comparison of the test error of the four methods, averaged over ten runs.we want to explain the remaining part of the effect, we also use a batch size based on the number ofpoints, as in Experiment 2.Figure 7 shows the results. As the figure shows, when using FFNNs, the advantage of IncrementalSequence Learning is entirely lost. This provides a clear demonstration that both of the hypotheses H1andH2play a role. Together the two hypotheses explain the total effect of the difference, suggestingthat the proposed hypotheses are also the only explanatory factors that play a role.It is interesting to compare the performance of the RNN and their FFNN variants, by comparingthe results of Experiments 2 and 3. From this comparison, it is seen that for Incremental SequenceLearning, the RNN variant achieves improved performance compared to the FFNN variant, as wouldbe expected, since a FFNN cannot make use of any knowledge of the preceding part of the sequenceand is thus limited to learning a general mapping between two subsequent pen offsets pairs (dxk;dyk)and(dxk+1;dyk+1). However, it is the only method of the four to do so; for all three other methods,around the point where test performance for the RNN variants starts to deteriorate (after around4106processed sequence steps), FFNN performance continues to improve and surpasses that of theRNN variants. This suggests that Incremental Sequence Learning is the only method that is able toutilize information about the preceding part of the sequence, and thereby surpass FFNN performance.In terms of absolute performance, a strong further improvement can be obtained by using the entiretraining set, as will be seen in the next section. These results suggest that learning the earlier parts ofthe sequence first can be instrumental in sequence learning.6.2 L OSS AS A FUNCTION OF SEQUENCE POSITIONTo further analyze why variation of the sequence length has a particularly strong effect on sequencelearning, we evaluate how the relative difficulty of learning a sequence step relates to the positionwithin the sequence. 
To do so, we measure the average loss contribution of the points or steps withina sequence as a function of their position within the sequence, as obtained with a learning method that10Under review as a conference paper at ICLR 20170e+00 2e+06 4e+06 6e+06 8e+06 1e+070 10 20 30 40 50 60Experiment 3: FFNN, point−based batch sizeTest error, average of 10 runsNumber of sequence steps processedRMSERegular sequence learningIncremental sequence learningIncremental number of classesIncremental number of sequencesBest test performance forregular sequence learning0.0 0.2 0.4 0.6 0.8 1.0Figure 7: Comparison of the test error of the four methods, averaged over ten runs.learns entire sequences (no incremental learning), averaged over the first hundred epochs of training.Figure 8 shows the results.0 10 20 30 40 50 60−50 −40 −30 −20 −10 0Loss vs. sequence positionPosition in sequenceLossFigure 8: The figure shows the average loss contribution of the points or steps within a sequenceas a function of their position within the sequence (see text). The first steps are fundamentallyunpredictable. Once some context has been received, the loss for the next steps steeply drops. Lateron in the sequence however, the loss increases strongly. This effect may be explained by the fact thatthe number of possible preceding contexts increases exponentially, thus posing stronger requirementson the learning system for steps later on in the sequence, and/or by the point that later parts of thesequences can only be learned adequately once earlier parts have been learned first, as later steps candepend on any of the earlier steps.11Under review as a conference paper at ICLR 2017The first steps are fundamentally unpredictable as the network cannot know which example it willreceive next; accordingly, at the start of the sequence, the error is high, as the method cannot knowin advance what the shape or digit class of the new sequence will be. Once the first steps of thesequence have been received and the context increasingly narrows down the possibilities, the loss forthe prediction of the next steps steeply drops. Subsequently however, as the position in the sequenceadvances, the loss increases strongly, and exceeds the initial uncertainty of the first steps. This effectmay be explained by the fact that the number of possible preceding contexts increases exponentially,thus posing stronger requirements on the learning system for steps later on in the sequence.6.3 R ESULTS ON THE FULL MNIST P ENSTROKE SEQUENCE DATA SETThe results reported so far were based on a subset of 10000 training sequences and 5000 testsequences, in order to complete a sufficient number of runs for each of the experiments within areasonable amount of time. Given the positive results obtained with Incremental Sequence Learning,we now train this method on the full MNIST Pen Stroke Sequence Data Set, consisting of 60000training sequences and 10000 test sequences (Experiment 4). In these experiments, a batch size of500 sequences instead of 50 is used.Figure 9 shows the results. Compared to the performance of the above experiments, a strongimprovement is obtained by training on this larger set of examples; whereas the best test error inthe results above was slightly above 1.5, the test performance for this experiment drops below one;a test error of 0.972 on the full test data set is obtained. 
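For reference, the error measure behind numbers such as the 0.972 above is, as described in Section 5, the RMSE of the predicted pen offsets in pixel units, with the eos/eod components excluded; one plausible reading of that description is the following sketch.

```python
# Illustrative sketch of the reported error measure: root-mean-squared error
# of the predicted pen offsets, scaled back to pixel units, ignoring the
# eos/eod components (which enter the training loss but not this metric).
import numpy as np

def offset_rmse(predicted_dxdy, target_dxdy):
    """RMSE between predicted and actual (dx, dy) offsets in pixel units."""
    diff = (np.asarray(predicted_dxdy, dtype=float)
            - np.asarray(target_dxdy, dtype=float))
    return float(np.sqrt(np.mean(diff ** 2)))

# Example: one offset predicted one pixel off in dx, the other exact.
print(offset_rmse([(5, 4), (1, -1)], [(6, 4), (1, -1)]))  # 0.5
```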
A striking finding is that while initially the test error is much larger than the train error, the test error continues to improve for a long time, and approaches the training error very closely; in other words, no overtraining is observed even for relatively long runs where the training performance appears to be nearly converged.

6.4 TRANSFER LEARNING

The first task considered here was to perform sequence learning: predicting step t+1 of a sequence given step t. To adequately perform this task, the network must learn to detect which digit it is being fed; the initial part of a sequence representing a 2 or 3 for example is very similar, but as evidence is growing that the current sequence represents a 3, that information is vital in predicting how the stroke will continue.

Given that the network is expected to have built up some representation of what digit it is reading, an interesting test is to see whether it is able to switch to the task of sequence classification. The input presentation remains the same: at every time step, the recurrent neural network is fed one step of the sequence of pen movements representing the strokes of a digit. However, we now also read the output of the 10 binary class variable outputs. The target for these is a one-hot representation of the digit, i.e. the target value for the output corresponding to the digit is one, and all nine other target values are zero. To obtain the output, softmax is used, and the sequence classification loss L_C for the classification outputs is the cross entropy, weighted by a factor β = 10:

L_C = -\beta \, \frac{1}{N} \sum_{n=1}^{N} \big[\, y_n \log \hat{y}_n + (1 - y_n) \log(1 - \hat{y}_n) \,\big]

In the following experiments, the loss consists of the sequence classification loss L_C, to which optionally the earlier sequence prediction loss L_P is added, regulated by a binary parameter α:

L = L_C + \alpha \, L_P

The network is asked for a prediction of the digit class after each step it receives. Clearly, accurate classification is impossible during the first part of a sequence; before the first point is received, the sequence could represent any of the 10 digits with equal probability. As the sequence is received step by step however, the network receives more information. The prediction produced after receiving the one-but-last step of the sequence, i.e.
at the point where the network was previously asked to predictthe last step, is used as its final answer for predicting the digit class.We compare the following variants:12Under review as a conference paper at ICLR 20170e+00 2e+06 4e+06 6e+06 8e+06 1e+070 50 100 150Experiment 4: RNN on full MNIST Pen Stroke Sequence Data SetSequence−based batch sizeNumber of sequence steps processedRMSETest errorTraining error0.0e+00 5.0e+07 1.0e+08 1.5e+08 2.0e+08 2.5e+08 3.0e+081 2 3 4 5 6 7Experiment 4: RNN on full MNIST Pen Stroke Sequence Data SetSequence−based batch sizeNumber of sequence steps processedRMSETest errorTraining error1 2 3 4 5 6 7Figure 9: Performance on full MNIST Pen Stroke Sequence Data Set, zoomed to first part of the runand same experiment, results for the full run.13Under review as a conference paper at ICLR 2017Transfer learning: sequence classification and sequence predictionStarting from a trained sequence prediction model as obtained in Experiment 4, the earlierloss function is augmented with the sequence classification loss: L=LC+LPTransfer Learning: sequence classification onlyStarting from a trained sequence prediction model, the loss function is switched such that itonly reflects the classification performance, and no longer tracks the sequence predictionperformance:L=LCLearning from scratch, sequence classification and sequence predictionIn this variant, learning starts from scratch, and both classification loss and prediction lossare used, as in the first experiment: L=LC+LPLearning from scratch, sequence classification onlyL=LC0e+00 2e+07 4e+07 6e+07 8e+070.00.20.40.60.81.0Experiment 5: Transfer learningfrom sequence prediction to sequence classificationNumber of sequence steps processedFraction of correct predictionsTransfer Learning, classification onlyTransfer learning, classification and predictionLearn from scratch, classification onlyLearn from scratch, classification and prediction0.150.350.550.750.95Figure 10: Using the sequence prediction model as a starting point for sequence classification: startingfrom a trained sequence prediction network, the task is switched to predicting the class of the digit(red and black lines). A comparison with learning a digit classification model from scratch (blue andgreen lines) shows that the internal state built up to predict sequence steps is helpful in predicting theclass of the digit represented by the sequence.Figure 10 shows the results; indeed the network is able to build further on its ability to predict penstroke sequences, and learns the sequence classification task faster and more accurately than anidentical network that learns the sequence classification task from scratch; in this first and straight-forward transfer learning experiment based on the MNIST stroke sequence data set, a classificationaccuracy of 96.0% is reached1. We note that performance on the MNIST sequence data cannot becompared to results obtained with the original MNIST data set, as the information in the input data isvastly reduced. This result sets a first baseline for the MNIST stroke sequence data set; we expectthere is ample room for improvement. Simultaneously learning sequence prediction and sequenceclassification does not appear to provide an advantage, neither for transfer learning nor for learningfrom scratch.1This performance was reached after training for 7107sequence steps, i.e. 
roughly twice as long as the runshown in the chart14Under review as a conference paper at ICLR 20177 G ENERATIVE RESULTSTo gain insight into what the network has learned, in this section we report examples of output of thenetwork.7.1 D EVELOPMENT DURING TRAININGDuring training, the network receives each sequence step by step, and after each step, it outputs itsexpectation of the offset of the next point. In these figures and movies, we visualize the predictionsof the network for a given sequence at different stages of the training process. All results have beenobtained from a single run of Incremental Sequence Learning.After 80 batches After 140 batches After 530 batches After 570 batches After 650 batchesFigure 11: Movie showing what the network has learned over time. The movie shows the output forthree sequences of the test data at different stages during training. To view, click the image or visitthis link: https://edwin-de-jong.github.io/blog/isl/rnn-movies/generative-rnn-training-movie.gif .7.2 U NGUIDED OUTPUT GENERATION ,A.K.A.NEURAL NETWORK HALLUCINATIONAfter training, the trained network can be used to generate output independently. The guidance that ispresent during training in the form of receiving each next step of the sequence following a predictionis not available here. Instead, the output produced by the network is fed back into the network as itsnext input, see Figures 12 and 13. Figure 14 shows example results.Figure 12: Training: the target of a trainingstep is used as the next input.Figure 13: Generation: the output of the net-work is used as the next input.15Under review as a conference paper at ICLR 2017Output resembling a 2 Output resembling a 3 Output resembling a 4Figure 14: Unguided output of the network: after each step, the network’s output is fed back as thenext input. Clearly, the network has learned the ability to independently produce long sequencesrepresenting different digits that occurred in training data.7.3 S EQUENCE CLASSIFICATIONThe third analysis of the behavior the trained network is to view what happens during sequenceclassification. At each step of the sequence, we monitor the ten class outputs and visualize theiroutput. As more steps of the sequence are being received, the network receives more information,and adjusts its expectation of what digit class the sequence represents.●●MNIST stroke sequence test image 25●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●51015202530350 2 4 6 8Classification outputSequence stepDigit classClassification output for a se-quence representing a 0. Ini-tially, as the downward part ofthe curved stroke is being re-ceived, the network believesthe sequences represents a 4.After passing the lowest pointof the figure, it assigns higherlikelihood to a 6. Only at thevery end, just in time beforethe sequence ends, the predic-tion of the network switchesfor the last time, and a highprobability is assigned to thecorrect class.●●MNIST stroke sequence test image 18●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●10203040500 2 4 6 8Classification outputSequence stepDigit classClassification output for a se-quence representing a 3. Ini-tially, the networks estimatesthe sequence to represent a 7.Next, it expects a 2 is morelikely. 
After 20 points havebeen received, it concludes(correctly) that the sequencesrepresents a 3.●●MNIST stroke sequence test image 62●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●51015202530350 2 4 6 8Classification outputSequence stepDigit classClassification output for asequence representing a 9.While receiving the sequence,the dominant prediction of thenetwork is that the sequencerepresents a five; the openloop of the 9 and the straighttop line may contribute to this.When the last points are re-ceived, the network considera 9 to be more likely, but someambiguity remains.8 C ONCLUSIONSThere are many possible ways to apply the principles of incremental or curriculum learning tosequence learning, but so far a general understanding of which forms of curriculum sequence learninghave a positive effect is missing. We have investigated a particular approach to sequence learningwhere the training data is initially limited to the first few steps of each sequence. Gradually, as the16Under review as a conference paper at ICLR 2017network learns to predict the early parts of the sequences, the length of the part of the sequences usedfor training is increased. We name this approach Incremental Sequence Learning, and find that itstrongly improves sequence learning performance. Two other forms of curriculum sequence learningused for comparison did not display improvements compared to regular sequence learning. Theorigins of this performance improvement are analyzed in comparison experiments, as detailed below.A first observation was that with Incremental Sequence Learning, the time required to attain the besttest performance level of regular sequence learning was much lower; on average, the method reachedthis level twenty times faster, thus achieving a significant speedup and reduction of the computationalcost of sequence learning. More importantly, Incremental Sequence Learning was found to reducethe test error of regular sequence learning by 74%.To analyze the cause of the observed speedup and performance improvements, we first increasethe number of sequences per batch for Incremental Sequence Learning, so that all methods use thesame number of sequence steps per batch. This reduced the speedup, but the improvement of thegeneralization performance was maintained. We then replaced the RNN layers with feed forwardnetwork layers, so that the networks can no longer maintain information about the earlier part ofthe sequences. This completely removed the remaining advantage. This provides clear evidencethat the improvement in generalization performance is due to the specific ability of an RNN tobuild up internal representations of the sequences it receives, and that the ability to develop theserepresentations is aided by training on the early parts of sequences first.Next, we trained Incremental Sequence Learning on the full MNIST stroke sequence data set, andfound that the use of this larger training set further improves sequence prediction performance. Thetrained model was then used as a starting point for transfer learning, where the task was switchedfrom sequence prediction to sequence classification .We conclude that Incremental Sequence Learning provides a simple and easily applicable approachto sequence learning that was found to produce large improvements in both computation time andgeneralization performance. The dependency of later steps in a sequence on the preceding steps ischaracteristic of virtually all sequence learning problems. 
We therefore expect that this approach canyield improvements for sequence learning applications in general, and recommend its usage, giventhat exclusively positive results were obtained with the approach so far.9 R ESOURCESThe Tensorflow implementation that was used to perform these experiments is available here: https://github.com/edwin-de-jong/incremental-sequence-learningThe MNIST stroke sequence data set is available for download here: https://github.com/edwin-de-jong/mnist-digits-stroke-sequence-data/wiki/MNIST-digits-stroke-sequence-dataThe code for transforming the MNIST digit data set to a pen strokesequence data set has also been made available: https://github.com/edwin-de-jong/mnist-digits-as-stroke-sequences/wiki/MNIST-digits-as-stroke-sequences-(code)ACKNOWLEDGMENTSThe author would like to thank Max Welling, Dick de Ridder and Michiel de Jong for valuablecomments and suggestions on earlier versions.REFERENCESBengio, S., Vinyals, O., Jaitly, N., and Shazeer, N. (2015). Scheduled sampling for sequenceprediction with recurrent neural networks. In Proceedings of the 28th International Conference onNeural Information Processing Systems , NIPS’15, pages 1171–1179, Cambridge, MA, USA. MITPress.17Under review as a conference paper at ICLR 2017Bengio, Y ., Courville, A., and Vincent, P. (2013). Representation learning: A review and newperspectives. IEEE Trans. Pattern Anal. Mach. Intell. , 35(8):1798–1828.Bengio, Y ., Louradour, J., Collobert, R., and Weston, J. (2009). Curriculum learning. In Proceedingsof the 26th Annual International Conference on Machine Learning , ICML ’09, pages 41–48, NewYork, NY , USA. ACM.Bishop, C. (1994). Mixture density networks. Technical Report NCRG/94/0041, Aston University.Caruana, R. (1997). Multitask learning. Mach. Learn. , 28(1):41–75.Ciresan, D. C., Meier, U., and Schmidhuber, J. (2012). Multi-column deep neural networks for imageclassification. CoRR , abs/1202.2745.de Jong, E. D. and Oates, T. (2002). A coevolutionary approach to representation development.Proceedings of the ICML-2002 Workshop on Development of Representations , pages 1–6.Elman, J. L. (1991). Incremental learning, or the importance of starting small. crl technical report9101. Technical report, University of California, San Diego.Elman, J. L. (1993). Learning and development in neural networks: The importance of starting small.Cognition , 48:781–99.Giraud-Carrier, C. (2000). A note on the utility of incremental learning. AI Commun. , 13(4):215–223.Graves, A. (2013). Generating sequences with recurrent neural networks. CoRR , abs/1308.0850.He, K., Zhang, X., Ren, S., and Sun, J. (2015). Deep residual learning for image recognition. CoRR ,abs/1512.03385.Hinton, G. E. and Nair, V . (2005). Inferring motor programs from images of handwritten digits. InAdvances in Neural Information Processing Systems 18 [Neural Information Processing Systems,NIPS 2005, December 5-8, 2005, Vancouver, British Columbia, Canada] , pages 515–522.Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation , 9(8):1735–1780.Keskar, N. S., Mudigere, D., Nocedal, J., Smelyanskiy, M., and Tang, P. T. P. (2016). On large-batchtraining for deep learning: Generalization gap and sharp minima. CoRR , abs/1609.04836.LeCun, Y . and Cortes, C. (2010). MNIST handwritten digit database.Lipton, Z. C. (2015). A critical review of recurrent neural networks for sequence learning. CoRR ,abs/1506.00019.Moriarty, D. E. (1997). Symbiotic Evolution Of Neural Networks In Sequential Decision Tasks . 
PhDthesis, Department of Computer Sciences, The University of Texas at Austin. Technical ReportUT-AI97-257.Parisotto, E., Ba, L. J., and Salakhutdinov, R. (2015). Actor-mimic: Deep multitask and transferreinforcement learning. CoRR , abs/1511.06342.Pratt, L. Y . (1993). Discriminability-based transfer between neural networks. In Advances in NeuralInformation Processing Systems 5, [NIPS Conference] , pages 204–211, San Francisco, CA, USA.Morgan Kaufmann Publishers Inc.Rusu, A. A., Vecerik, M., Rothörl, T., Heess, N., Pascanu, R., and Hadsell, R. (2016). Sim-to-realrobot learning from pixels with progressive nets. arxiv:1610.04286 [cs.ro]. Technical report, DeepMind.Schlimmer, J. C. and Granger, R. H. (1986). Incremental learning from noisy data. Machine Learning ,1(3):317–354.Schmidt, M., Murphy, K., Fung, G., and Rosales, R. (2008). Structure learning in random fields forheart motion abnormality detection. In In CVPR .18Under review as a conference paper at ICLR 2017Sun, R. and Giles, C. L. (2001). Sequence learning: from recognition and prediction to sequentialdecision making. IEEE Intelligent Systems , 16(4):67–70.Thrun, S. (1996). Is learning the n-th thing any easier than learning the first. In Advances in NeuralInformation Processing Systems , volume 8, pages 640–646.Zaremba, W. and Sutskever, I. (2014). Learning to execute. CoRR , abs/1410.4615.Zhang, T. Y . and Suen, C. Y . (1984). A fast parallel algorithm for thinning digital patterns. Commun.ACM , 27(3):236–239.19
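To make the generation procedure of Section 7.2 concrete, here is a minimal sketch of the unguided feedback loop, in which the network's own prediction is used as its next input. The `model.step`, `model.initial_state` and `model.is_end_of_sequence` interfaces are hypothetical stand-ins rather than the paper's actual TensorFlow code, and the mixture-density output layer and pen-state bits used in the real model are abstracted away.

```python
import numpy as np

def generate_unguided(model, start_offset, max_steps=200):
    """Unguided generation: feed the network's own prediction back as the next input."""
    state = model.initial_state()                 # hypothetical interface
    x = np.asarray(start_offset, dtype=np.float32)
    outputs = []
    for _ in range(max_steps):
        y, state = model.step(x, state)           # predicted offset of the next point
        outputs.append(y)
        x = y                                     # generation (Figure 13): output -> next input
        if model.is_end_of_sequence(y):           # hypothetical stopping criterion
            break
    return np.stack(outputs)

# During training (Figure 12), the target of each step would be fed in as the
# next input instead of the prediction (teacher forcing).
```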
rJJHggGNx
rJq_YBqxx
ICLR.cc/2017/conference/-/paper261/official/review
{"title": "Good paper, accept", "rating": "7: Good paper, accept", "review": "The paper presents one of the first neural translation systems that operates purely at the character-level, another one being https://arxiv.org/abs/1610.03017 , which can be considered a concurrent work. The system is rather complicated and consists of a lot of recurrent networks. The quantitative results are quite good and the qualitative results are quite encouraging.\n\nFirst, a few words about the quality of presentation. Despite being an expert in the area, it is hard for me to be sure that I exactly understood what is being done. The Subsections 3.1 and 3.2 sketch two main features of the architecture at a rather high-level. For example, does the RNN sentence encoder receive one vector per word as input or more? Figure 2 suggests that it\u2019s just one. The notation h_t is overloaded, used in both Subsection 3.1 and 3.2 with clearly different meaning. An Appendix that explains unambiguously how the model works would be in order. Also, the approach appears to be limited by its reliance on the availability of blanks between words, a trait which not all languages possess.\n\nSecond, the results seem to be quite good. However, no significant improvement over bpe2char systems is reported. Also, I would be curious to know how long it takes to train such a model, because from the description it seems like the model would be very slow to train (400 steps of BiNNN). On a related note, normally an ablation test is a must for such papers, to show that the architectural enhancements applied were actually necessary. I can imagine that this would take a lot of GPU time for such a complex model.\n\nOn the bright side, Figure 3 presents some really interesting properties that of the embeddings that the model learnt. Likewise interesting is Figure 5.\n\nTo conclude, I think that this an interesting application paper, but the execution quality could be improved. I am ready to increase my score if an ablation test confirms that the considered encoder is better than a trivial baseline, that e.g. takes the last hidden state for each RNN. ", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Deep Character-Level Neural Machine Translation By Learning Morphology
["Shenjian Zhao", "Zhihua Zhang"]
Neural machine translation aims at building a single large neural network that can be trained to maximize translation performance. The encoder-decoder architecture with an attention mechanism achieves a translation performance comparable to the existing state-of-the-art phrase-based systems. However, the use of large vocabulary becomes the bottleneck in both training and improving the performance. In this paper, we propose a novel architecture which learns morphology by using two recurrent networks and a hierarchical decoder which translates at character level. This gives rise to a deep character-level model consisting of six recurrent networks. Such a deep model has two major advantages. It avoids the large vocabulary issue radically; at the same time, it is more efficient in training than word-based models. Our model obtains a higher BLEU score than the bpe-based model after training for one epoch on En-Fr and En-Cs translation tasks. Further analyses show that our model is able to learn morphology.
["Natural language processing", "Deep learning"]
https://openreview.net/forum?id=rJq_YBqxx
https://openreview.net/pdf?id=rJq_YBqxx
https://openreview.net/forum?id=rJq_YBqxx&noteId=rJJHggGNx
Under review as a conference paper at ICLR 2017DEEPCHARACTER -LEVEL NEURAL MACHINETRANSLATION BYLEARNING MORPHOLOGYShenjian ZhaoDepartment of Computer Science and EngineeringShanghai Jiao Tong UniversityShanghai 200240, Chinasword.york@gmail.comZhihua ZhangSchool of Mathematical SciencesPeking UniversityBeijing 100871, Chinazhzhang@math.pku.edu.cnABSTRACTNeural machine translation aims at building a single large neural network that canbe trained to maximize translation performance. The encoder-decoder architecturewith an attention mechanism achieves a translation performance comparable to theexisting state-of-the-art phrase-based systems. However, the use of large vocabularybecomes the bottleneck in both training and improving the performance. In thispaper, we propose a novel architecture which learns morphology by using tworecurrent networks and a hierarchical decoder which translates at character level.This gives rise to a deep character-level model consisting of six recurrent networks.Such a deep model has two major advantages. It avoids the large vocabulary issueradically; at the same time, it is more efficient in training than word-based models.Our model obtains a higher BLEU score than the bpe-based model after trainingfor one epoch on En-Fr and En-Cs translation tasks. Further analyses show thatour model is able to learn morphology.1 I NTRODUCTIONNeural machine translation (NMT) attempts to build a single large neural network that reads asentence and outputs a translation (Sutskever et al., 2014). Most of the extant neural machinetranslations models belong to a family of word-level encoder-decoders (Sutskever et al., 2014; Choet al., 2014). Recently, Bahdanau et al. (2015) proposed a model with attention mechanism whichautomatically searches the alignments and greatly improves the performance. However, the use of alarge vocabulary seems necessary for the word-level neural machine translation models to improveperformance (Sutskever et al., 2014; Cho et al., 2015).Chung et al. (2016a) listed three reasons behind the wide adoption of word-level modeling: (i) wordis a basic unit of a language, (ii) data sparsity, (iii) vanishing gradient of character-level modeling.Consider that a language itself is an evolving system. So it is impossible to cover all words in thelanguage. The problem of rare words that are out of vocabulary (OOV) is a critical issue which caneffect the performance of neural machine translation. In particular, using larger vocabulary doesimprove performance (Sutskever et al., 2014; Cho et al., 2015). However, the training becomesmuch harder and the vocabulary is often filled with many similar words that share a lexeme but havedifferent morphology.There are many approaches to dealing with the out-of-vocabulary issue. For example, Gulcehreet al. (2016); Luong et al. (2015); Cho et al. (2015) proposed to obtain the alignment information oftarget unknown words, after which simple word dictionary lookup or identity copy can be performedto replace the unknown words in translation. However, these approaches ignore several importantproperties of languages such as monolinguality and crosslinguality as pointed out by Luong and1Under review as a conference paper at ICLR 2017Manning (2016). Thus, Luong and Manning (2016) proposed a hybrid neural machine translationmodel which leverages the power of both words and characters to achieve the goal of open vocabularyneural machine translation.Intuitively, it is elegant to directly model pure characters. 
However, as the length of sequencegrows significantly, character-level translation models have failed to produce competitive resultscompared with word-based models. In addition, they require more memory and computation resource.Especially, it is much difficult to train the attention component. For example, Ling et al. (2015a)proposed a compositional character to word (C2W) model and applied it to machine translation (Linget al., 2015b). They also used a hierarchical decoder which has been explored before in other context(Serban et al., 2015). However, they found it slow and difficult to train the character-level models, andone has to resort to layer-wise training the neural network and applying supervision for the attentioncomponent. In fact, such RNNs often struggle with separating words that have similar morphologiesbut very different meanings.In order to address the issues mentioned earlier, we introduce a novel architecture by exploiting thestructure of words. It is built on two recurrent neural networks: one for learning the representationof preceding characters and another for learning the weight of this representation of the wholeword. Unlike subword-level model based on the byte pair encoding (BPE) algorithm (Sennrich et al.,2016), we learn the subword unit automatically. Compared with CNN word encoder (Kim et al.,2016; Lee et al., 2016), our model is able to generate a meaningful representation of the word. Todecode at character level, we devise a hierarchical decoder which sets the state of the second-levelRNN (character-level decoder) to the output of the first-level RNN (word-level decoder), which willgenerate a character sequence until generating a delimiter. In this way, our model almost keeps thesame encoding length for encoder as word-based models but eliminates the use of a large vocabulary.Furthermore, we are able to efficiently train the deep model which consists of six recurrent networks,achieving higher performance.In summary, we propose a hierarchical architecture (character -> subword -> word -> source sentence-> target word -> target character) to train a deep character-level neural machine translator. We showthat the model achieves a high translation performance which is comparable to the state-of-the-artneural machine translation model on the task of En-Fr, En-Cs and Cs-En translation. The experimentsand analyses further support the statement that our model is able to learn the morphology.2 N EURAL MACHINE TRANSLATIONNeural machine translation is often implemented as an encoder-decoder architecture. The encoderusually uses a recurrent neural network (RNN) or a bidirectional recurrent neural network (BiRNN)(Schuster and Paliwal, 1997) to encode the input sentence x=fx1;:::;x Txginto a sequence ofhidden states h=fh1;:::;hTxg:ht=f1(e(xt);ht1);where e(xt)2Rmis anm-dimensional embedding of xt. The decoder, another RNN, is oftentrained to predict next word ytgiven previous predicted words fy1;:::;y t1gand the context vectorct; that is,p(ytjfy1;:::;y t1g) =g(e(yt1);st;ct);wherest=f2(e(yt1);st1;ct) (1)andgis a nonlinear and potentially multi-layered function that computes the probability of yt. Thecontext ctdepends on the sequence of fh1;:::;hTxg. Sutskever et al. (2014) encoded all informationin the source sentence into a fixed-length vector, i.e., ct=hTx. Bahdanau et al. 
(2015) computed ctby the alignment model which handles the bottleneck that the former approach meets.The whole model is jointly trained by maximizing the conditional log-probability of the correcttranslation given a source sentence with respect to the parameters of the model := argmaxTyXt=1logp(ytjfy1;:::;y t1g;x;):For the detailed description of the implementation, we refer the reader to the papers (Sutskever et al.,2014; Bahdanau et al., 2015).2Under review as a conference paper at ICLR 20173 D EEPCHARACTER -LEVEL NEURAL MACHINE TRANSLATIONWe consider two problems in the word-level neural machine translation models. First, how canwe map a word to a vector? It is usually done by a lookup table (embedding matrix) where thesize of vocabulary is limited. Second, how do we map a vector to a word when predicting? It isusually done via a softmax function. However, the large vocabulary will make the softmax intractablecomputationally.We correspondingly devise two novel architectures, a word encoder which utilizes the morphologyand a hierarchical decoder which decodes at character level. Accordingly, we propose a deepcharacter-level neural machine translation model (DCNMT).3.1 L EARNING MORPHOLOGY IN A WORD ENCODERMany words can be subdivided into smaller meaningful units called morphemes, such as “any-one”,“any-thing” and “every-one.” At the basic level, words are made of morphemes which are recognizedas grammatically significant or meaningful. Different combinations of morphemes lead to differentmeanings. Based on these facts, we introduce a word encoder to learn the morphemes and the rulesof how they are combined. Even if the word encoder had never seen “everything” before, with aunderstanding of English morphology, the word encoder could gather the meaning easily. Thuslearning morphology in a word encoder might speedup training.Figure 1: The representation of theword ’anyone.’The word encoder is based on two recurrent neural networks,as illustrated in Figure 1. We compute the representation of theword ‘anyone’ asranyone = tanh(6Xt=1wtrt);where rtis an RNN hidden state at time t, computed byrt=f(e(xt);rt1):Eachrtcontains information about the preceding characters.The weightwtof each representation rtis computed bywt= exp( aff(ht));where htis another RNN hidden state at time tandaff()isan affine function which maps htto a scalar. Here, we use aBiRNN to compute htas shown in Figure 1. Instead of nor-malizing it byPtexp( aff(ht)), we use an activation functiontanh as it performs best in experiments.We can regard the weight wias the energy that determines whether riis a representation of amorpheme and how it contributes to the representation of the word. Compared with an embeddinglookup table, the decoupled RNNs learn the representation of morphemes and the rules of how theyare combined respectively, which may be viewed as learning distributed representations of wordsexplicitly. For example, we are able to translate “convenienter” correctly which validates our idea.After obtaining the representation of the word, we could encode the sentence using a bidirectionalRNN as RNNsearch (Bahdanau et al., 2015). The detailed architecture is shown in Figure 2.3.2 H IERARCHICAL DECODERTo decode at the character level, we introduce a hierarchical decoder. The first-level decoder is similarto RNNsearch which contains the information of the target word. Specifically, stin Eqn. (1)containsthe information of target word at time t. 
Instead of using a multi-layer network followed by a softmax function to compute the probability of each target word from $\mathbf{s}_t$, we employ a second-level decoder which generates a character sequence based on $\mathbf{s}_t$.

We propose a variant of the gated recurrent unit (GRU) (Cho et al., 2014; Chung et al., 2014) to be used in the second-level decoder, which we denote HGRU (it is possible to use LSTM (Hochreiter and Schmidhuber, 1997) units instead of the GRU described here). HGRU has a settable state and generates a character sequence based on the given state until it generates a delimiter. In our model, the state is initialized by the output of the first-level decoder. Once HGRU generates a delimiter, it will set the state to the next output of the first-level decoder. Given the previous output character sequence $\{y_0, y_1, \dots, y_{t-1}\}$, where $y_0$ is a token representing the start of sentence, and the auxiliary sequence $\{a_0, a_1, \dots, a_{t-1}\}$ which only contains 0 and 1 to indicate whether $y_i$ is a delimiter ($a_0$ is set to 1), HGRU updates the state as follows:

$\mathbf{g}_{t-1} = (1 - a_{t-1})\,\mathbf{g}_{t-1} + a_{t-1}\,\mathbf{s}_{i_t}$, (2)
$q_t^j = \sigma([\mathbf{W}_q e(y_{t-1})]_j + [\mathbf{U}_q \mathbf{g}_{t-1}]_j)$, (3)
$z_t^j = \sigma([\mathbf{W}_z e(y_{t-1})]_j + [\mathbf{U}_z \mathbf{g}_{t-1}]_j)$, (4)
$\tilde{g}_t^j = \phi([\mathbf{W} e(y_{t-1})]_j + [\mathbf{U}(\mathbf{q}_t \odot \mathbf{g}_{t-1})]_j)$, (5)
$g_t^j = z_t^j\, g_{t-1}^j + (1 - z_t^j)\,\tilde{g}_t^j$, (6)

where $\mathbf{s}_{i_t}$ is the output of the first-level decoder, calculated as in Eqn. (8). We can compute the probability of each target character $y_t$ based on $\mathbf{g}_t$ with a softmax function:

$p(y_t \mid \{y_1, \dots, y_{t-1}\}, \mathbf{x}) = \mathrm{softmax}(\mathbf{g}_t)$. (7)

The current problem is that the number of outputs of the first-level decoder is much smaller than the length of the target character sequence. It would be intractable to conditionally pick outputs from the first-level decoder when training in a batch manner (at least intractable for Theano (Bastien et al., 2012) and other symbolic deep learning frameworks when building symbolic expressions). Luong and Manning (2016) use two forward passes (one at the word level and another at the character level) in batch training, which is less efficient. However, in our model, we use a matrix to unfold the outputs of the first-level decoder, which makes the batch training process more efficient. It is a $T_y \times T$ matrix $R$, where $T_y$ is the number of delimiters (i.e., the number of words) in the target character sequence and $T$ is the length of the target character sequence. $R[i, j_1{+}1]$ through $R[i, j_2]$ are set to 1 if $j_1$ is the index of the $(i{-}1)$-th delimiter and $j_2$ is the index of the $i$-th delimiter in the target character sequence. The index of the 0-th delimiter is set to 0. For example, when the target output is “ go!” and the output of the first-level decoder is $[\mathbf{s}_1; \mathbf{s}_2]$, the unfolding step will be:

$[\mathbf{s}_1; \mathbf{s}_2]\begin{pmatrix}1 & 1 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 1\end{pmatrix} = [\mathbf{s}_1; \mathbf{s}_1; \mathbf{s}_1; \mathbf{s}_2; \mathbf{s}_2]$;

therefore $\{\mathbf{s}_{i_1}, \mathbf{s}_{i_2}, \mathbf{s}_{i_3}, \mathbf{s}_{i_4}, \mathbf{s}_{i_5}\}$ is correspondingly set to $\{\mathbf{s}_1, \mathbf{s}_1, \mathbf{s}_1, \mathbf{s}_2, \mathbf{s}_2\}$ in the HGRU iterations (a small numerical sketch of this unfolding step is given after the appendix). After this procedure, we can compute the probability of each target character by the second-level decoder according to Eqns. (2) to (7).

3.3 MODEL ARCHITECTURES

There are six recurrent neural networks in total in our model, which can be divided into four layers as shown in Figure 2. Figure 2 illustrates the training procedure of a basic deep character-level neural machine translation model. It is possible to use multi-layer recurrent neural networks to make the model deeper. The first layer is a source word encoder which contains two RNNs as shown in Figure 1. The second layer is a bidirectional RNN sentence encoder which is identical to that of (Bahdanau et al., 2015). The third layer is the first-level decoder. It takes the representation of the previous target word as feedback, which is produced by the target word encoder in our model.
As the feedback is lessimportant, we use an ordinary RNN to encode the target word. The feedback rYt1then combines theprevious hidden state ut1and the context ctfrom the sentence encoder to generate the vector st:st=W1ct+W2rYt1+W3ut1+b: (8)With the state of HGRU in the second-level decoder setting to stand the information of previousgenerated character, the second-level decoder generates the next character until generating an end ofsentence token (denoted as </s> in Figure 2). With such a hierarchical architecture, we can train ourcharacter-level neural translation model perfectly well in an end-to-end fashion.4Under review as a conference paper at ICLR 2017Figure 2: Deep character-level neural machine translation. The HGRUs with red border indicate thatthe state should be set to the output of the first-level decoder.3.4 G ENERATION PROCEDUREWe first encode the source sequence as in the training procedure, then we generate the target sequencecharacter by character based on the output stof the first-level decoder. Once we generate a delimiter,we should compute next vector st+1according to Eqn. (8)by combining feedback rYtfrom the targetword encoder, the context ct+1from the sentence encoder and the hidden state ut. The generationprocedure will terminate once an end of sentence (EOS) token is produced.4 E XPERIMENTSWe implement the model using Theano (Bergstra et al., 2010; Bastien et al., 2012) and Blocks (vanMerriënboer et al., 2015), the source code and the trained models are available at github1. We trainour model on a single GTX Titan X with 12GB RAM. First we evaluate our model on English-to-French translation task where the languages are morphologically poor. For fair comparison, weuse the same dataset as in RNNsearch which is the bilingual, parallel corpora provided by ACLWMT’14. In order to show the strengths of our model, we conduct on the English-to-Czech andCzech-to-English translation tasks where Czech is a morphologically rich language. We use the samedataset as (Chung et al., 2016a; Lee et al., 2016) which is provided by ACL WMT’152.4.1 D ATASETWe use the parallel corpora for two language pairs from WMT: En-Cs and En-Fr. They consist of15.8M and 12.1M sentence pairs, respectively. In terms of preprocessing, we only apply the usualtokenization. We choose a list of 120 most frequent characters for each language which coveres nearly100% of the training data. Those characters not included in the list are mapped to a special token1https://github.com/SwordYork/DCNMT2http://www.statmt.org/wmt15/translation-task.html5Under review as a conference paper at ICLR 2017(<unk>). We use newstest2013 (Dev) as the development set and evaluate the models on newstest2015(Test). We do not use any monolingual corpus.4.2 T RAINING DETAILSWe follow (Bahdanau et al., 2015) to use similar hyperparameters. The bidirectional RNN sentenceencoder and the hierarchical decoder both consists of two-layer RNNs, each has 1024 hidden units;We choose 120 most frequent characters for DCNMT and the character embedding dimensionality is64. The source word is encoded into a 600-dimensional vector. The other GRUs in our model have512 hidden units.We use the ADAM optimizer (Kingma and Ba, 2015) with minibatch of 56 sentences to train eachmodel (for En-Fr we use a minibatch of 72 examples). 
The learning rate is first set to 103and thenannealed to 104.We use a beam search to find a translation that approximately maximizes the conditional log-probability which is a commonly used approach in neural machine translation (Sutskever et al., 2014;Bahdanau et al., 2015). In our DCNMT model, it is reasonable to search directly on character level togenerate a translation.5 R ESULT AND ANALYSISWe conduct comparison of quantitative results on the En-Fr, En-Cs and Cs-En translation tasks inSection 5.1. Apart from measuring translation quality, we analyze the efficiency of our model andeffects of character-level modeling in more details.5.1 Q UANTITATIVE RESULTSWe illustrate the efficiency of the deep character-level neural machine translation by comparing withthe bpe-based subword model (Sennrich et al., 2016) and other character-level models. We measurethe performance by BLEU score (Papineni et al., 2002).Table 1: BLEU scores of different models on three language pairs.Model Size Src Trgt Length Epochs Days Dev TestEn-Frbpe2bpe(1)- bpe bpe 50 50 - - 26.91 29.70C2W(2)54M char char 300 3002:827 25.89 27.04CNMT52M char char 300 3003:821 28.19 29.38DCNMT54M char char 300 3001727.02 28.132:819 29.31 30.56En-Csbpe2bpe(1)- bpe bpe 50 50 - - 15.90 13.84bpe2char(3)- bpe char 50 500 - - - 16.86char(5)- char char 600 600 >490 - 17.5hybrid(5)250M hybrid hybrid 50 50 >421 - 19.6DCNMT54M char char 450 4501515.50 14.872:915 17.89 16.96Cs-Enbpe2bpe(1)- bpe bpe 50 50 - - 21.24 20.32bpe2char(3)76M bpe char 50 5006:114 23.27 22.42char2char(4)69M char char 450 4507:930 23.38 22.46DCNMT54M char char 450 4501520.50 19.754:622 23.24 22.48In Table 1, “Length” indicates the maximum sentence length in training (based on the number ofwords or characters), “Size” is the total number of parameters in the models. We report the BLEU6Under review as a conference paper at ICLR 2017scores of DCNMT when trained after one epoch in the above line and the final scores in the followingline. The results of other models are taken from (1)Firat et al. (2016), (3)Chung et al. (2016a), (4)Leeet al. (2016) and (5)Luong and Manning (2016) respectively, except (2) is trained according to Linget al. (2015b). The only difference between CNMT and DCNMT is CNMT uses an ordinary RNNto encode source words (takes the last hidden state). The training time for (3) and (4) is calculatedbased on the training speed in (Lee et al., 2016). For each test set, the best scores among the modelsper language pair are bold-faced. Obviously, character-level models are better than the subword-levelmodels, and our model is comparable to the start-of-the-art character-level models. Note that, thepurely character model of (5)(Luong and Manning, 2016) took 3 months to train and yielded +0:5BLEU points compared to our result. 
We have analyzed the efficiency of our decoder in Section 3.2. Besides, our model is the simplest and the smallest one in terms of model size.

5.2 LEARNING MORPHOLOGY

[Figure 3: Two-dimensional PCA projection of the 600-dimensional representations of the words notable/notability, solvable/solvability, reliable/reliability, capable/capability, flexible/flexibility and possible/possibility; panel (a) ordinary RNN word encoder, panel (b) our word encoder.]

In this section, we investigate whether our model could learn morphology. First we want to figure out the difference between an ordinary RNN word encoder and our word encoder. We choose some words with similar meaning but different morphology, as shown in Figure 3. We can see in Figure 3(a) that the words ending with “ability”, which are encoded by the ordinary RNN word encoder, are jammed together. In contrast, the representations produced by our encoder are more reasonable, and the words with similar meaning are closer.

[Figure 4: The learnt morphemes; panel (a) energy of each character for the words built from “any”/“every” plus “way”, “one”, “body”, “thing”, “where”; panel (b) two-dimensional PCA projection of “anybody”, “anyway”, “anyone”, “anything”, “anywhere”, “everybody”, “everyway”, “everyone”, “everything”, “everywhere”.]

Then we analyze how our word encoder learns morphemes and the rules of how they are combined. We demonstrate the encoding details on “any*” and “every*”. Figure 4(a) shows the energy of each character, more precisely, the energy of the preceding characters. We can see that the last character of a morpheme results in a relatively large energy (weight), like “any” and “every” in these words. Moreover, even when the preceding characters are different, a similar weight is produced for the same morpheme, like “way” in “anyway” and “everyway”. The two-dimensional PCA projection in Figure 4(b) further validates our idea. The word encoder may be able to guess the meaning of “everything” even if it had never seen “everything” before, thus speeding up learning. More interestingly, we find that not only the ending letter has high energy, but the beginning letter is also important. This matches the behavior of human perception (White et al., 2008).

[Figure 5: Subword-level boundaries detected by our word encoder: per-character peak energy over three Penn Treebank sentences, e.g. “consumers may want to move their telephones a little closer to the tv set …”.]

Moreover, we apply our trained word encoder to Penn Treebank Line 1. Unlike Chung et al. (2016b), we are able to detect the boundary of the subword units. As shown in Figure 5, “consumers”, “monday”, “football” and “greatest” are segmented into “consum-er-s”, “mon-day”, “foot-ball” and “great-est” respectively. Since there are no explicit delimiters, it may be more difficult to detect the subword units.

5.3 BENEFITING FROM LEARNING MORPHOLOGY

As analyzed in Section 5.2, learning morphology could speed up learning.
This has also been shownin Table 1 (En-Fr and En-Cs task) from which we see that when we train our model just for oneepoch, the obtained result even outperforms the final result with bpe baseline.Another advantage of our model is the ability to translate the misspelled words or the nonce words.The character-level model has a much better chance recovering the original word or sentence. InTable 2, we list some examples where the source sentences are taken from newstest2013 but wechange some words to misspelled words or nonce words. We also list the translations from Googletranslate3and online demo of neural machine translation by LISA.Table 2: Sample translations.(a) Misspelled wordsSource For the time being howeve their research is unconclusive .Reference Leurs recherches ne sont toutefois pas concluantes pour l’instant.Google translate Pour le moment, leurs recherches ne sont pas concluantes .LISA Pour le moment UNK leur recherche est UNK .DCNMT Pour le moment, cependant , leur recherche n’est pas concluante .(b) Nonce words (morphological change)Source Then we will be able to supplement the real world with virtual objects ina much convenienter form .Reference Ainsi , nous pourrons compléter le monde réel par des objets virtuelsdans une forme plus pratique .Google translate Ensuite, nous serons en mesure de compléter le monde réel avec desobjets virtuels dans une forme beaucoup plus pratique .LISA Ensuite, nous serons en mesure de compléter le vrai monde avec desobjets virtuels sous une forme bien UNK .DCNMT Ensuite, nous serons en mesure de compléter le monde réel avec desobjets virtuels dans une forme beaucoup plus pratique .As listed in Table 2(a), DCNMT is able to translate out the misspelled words correctly. For aword-based translator, it is never possible because the misspelled words are mapped into <unk>3The translations by Google translate were made on Nov 4, 2016.8Under review as a conference paper at ICLR 2017token before translating. Thus, it will produce an <unk> token or just take the word from sourcesentence (Gulcehre et al., 2016; Luong et al., 2015). More interestingly, DCNMT could translate“convenienter” correctly as shown in Table 2(b). By concatenating “convenient” and “er”, we get thecomparative adjective form of “convenient” which never appears in the training set; however, ourmodel guessed it correctly based on the morphemes and the rules.6 C ONCLUSIONIn this paper we have proposed an hierarchical architecture to train the deep character-level neuralmachine translation model by introducing a novel word encoder and a multi-leveled decoder. We havedemonstrated the efficiency of the training process and the effectiveness of the model in comparisonwith the word-level and other character-level models. The BLEU score implies that our deep character-level neural machine translation model likely outperforms the word-level models and is competitivewith the state-of-the-art character-based models. It is possible to further improve performance byusing deeper recurrent networks (Wu et al., 2016), training for more epochs and training with longersentence pairs.As a result of the character-level modeling, we have solved the out-of-vocabulary (OOV) issue thatword-level models suffer from, and we have obtained a new functionality to translate the misspelled orthe nonce words. More importantly, the deep character-level is able to learn the similar embedding ofthe words with similar meanings like the word-level models. 
Finally, it would be potentially possiblethat the idea behind our approach could be applied to many other tasks such as speech recognitionand text summarization.REFERENCESIlya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks.InAdvances in Neural Information Processing Systems , pages 3104–3112, 2014.Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, HolgerSchwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder forstatistical machine translation. Proceedings of the 2014 Conference on Empirical Methods inNatural Language Processing , 2014.Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointlylearning to align and translate. International Conference on Learning Representation , 2015.Sébastien Jean Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On using very largetarget vocabulary for neural machine translation. Proceedings of the 53rd Annual Meeting of theAssociation for Computational Linguistics , 2015.Junyoung Chung, Kyunghyun Cho, and Yoshua Bengio. A character-level decoder without explicitsegmentation for neural machine translation. Proceedings of the 54th Annual Meeting of theAssociation for Computational Linguistics , 2016a.Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. Pointing theunknown words. Proceedings of the 54th Annual Meeting of the Association for ComputationalLinguistics , 2016.Minh-Thang Luong, Ilya Sutskever, Quoc V Le, Oriol Vinyals, and Wojciech Zaremba. Addressingthe rare word problem in neural machine translation. Proceedings of the 53rd Annual Meeting ofthe Association for Computational Linguistics , 2015.Minh-Thang Luong and Christopher D Manning. Achieving open vocabulary neural machinetranslation with hybrid word-character models. Proceedings of the 54th Annual Meeting of theAssociation for Computational Linguistics , 2016.Wang Ling, Tiago Luís, Luís Marujo, Ramón Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan WBlack, and Isabel Trancoso. Finding function in form: Compositional character models for openvocabulary word representation. Empirical Methods in Natural Language Processing , 2015a.9Under review as a conference paper at ICLR 2017Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W Black. Character-based neural machinetranslation. arXiv preprint arXiv:1511.04586 , 2015b.Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. Hierar-chical neural network generative models for movie dialogues. arXiv preprint arXiv:1507.04808 ,2015.Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words withsubword units. Proceedings of the 54th Annual Meeting of the Association for ComputationalLinguistics , 2016.Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. Character-aware neural languagemodels. Association for the Advancement of Artificial Intelligence , 2016.Jason Lee, Kyunghyun Cho, and Thomas Hofmann. Fully character-level neural machine translationwithout explicit segmentation. arXiv preprint arXiv:1610.03017 , 2016.Mike Schuster and Kuldip K Paliwal. Bidirectional recurrent neural networks. Signal Processing,IEEE Transactions on , 45(11):2673–2681, 1997.Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation ofgated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 , 2014.Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. 
Neural computation , 9(8):1735–1780, 1997.Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, ArnaudBergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements.Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, GuillaumeDesjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPUmath expression compiler. In Proceedings of the Python for Scientific Computing Conference(SciPy) , June 2010. Oral Presentation.Bart van Merriënboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley,Jan Chorowski, and Yoshua Bengio. Blocks and fuel: Frameworks for deep learning. arXivpreprint arXiv:1506.00619 , 2015.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. InternationalConference on Learning Representation , 2015.Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automaticevaluation of machine translation. pages 311–318. Association for Computational Linguistics,2002.Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. Multi-way, multilingual neural machine translationwith a shared attention mechanism. In Proceedings of the 2016 Conference of the North AmericanChapter of the Association for Computational Linguistics: Human Language Technologies. , 2016.Sarah J White, Rebecca L Johnson, Simon P Liversedge, and Keith Rayner. Eye movements whenreading transposed text: the importance of word-beginning letters. Journal of ExperimentalPsychology: Human Perception and Performance , 34(5):1261, 2008.Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks.arXiv preprint arXiv:1609.01704 , 2016b.Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey,Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google’s neural machine translation sys-tem: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 ,2016.10Under review as a conference paper at ICLR 2017A D ETAILED DESCRIPTION OF THE MODELHere we describe the implementation using Theano, it should be applicable to other symbolic deeplearning frameworks. We use fto denote the transition of the recurrent network.A.1 S OURCE WORD ENCODERAs illustrated in Section 3.1, the word encoder is based on two recurrent neural networks. We computethe representation of the word ‘anyone’ asranyone = tanh(6Xt=1wtrt);where rt2Rnis an RNN hidden state at time t, computed byrt=f(e(xt);rt1):Eachrtcontains information about the preceding characters. The weight wtof each representationrtis computed bywt= exp( Wwht+bw);where Ww2R12lmaps the vector ht2R2lto a scalar and htis the state of the BiRNN at time t:ht=" !ht ht#: (9) !ht2Rlis the forward state of the BiRNN which is computed by !ht=f(e(xt); !ht1): (10)The backward state ht2Rlis computed similarly, however in a reverse order.A.2 S OURCE SENTENCE ENCODERAfter encoding the words by the source word encoder, we feed the representations to thesource sentence encoder. For example, the source “Hello world </s>” is encoded into a vector[rHello;rworld;r</s>], then the BiRNN sentence encoder encodes this vector into [v1;v2;v3]. The com-putation is the same as Eqn. (9)and Eqn. (10), however the input now changes to the representationof the words.A.3 F IRST-LEVEL DECODERThe first-level decoder is similar to Bahdanau et al. 
(2015) which utilizes the attention mechanism.Given the context vector ctfrom encoder, the hidden state ut2Rmof the GRU is computed byut= (1zt)ut1+zt~ut;where~ut=tanh(WrYt1+U[qtut1] +Cct)zt=(WzrYt1+Uzut1+Czct)qt=(WqrYt1+Uqut1+Cqct):rYt1is the representation of the target word which is produced by an ordinary RNN (take the laststate). The context vector ctis computed by the attention mechanism at each step:ct=TxXj=1tjvj;11Under review as a conference paper at ICLR 2017wheretj=exp(etj)PTxk=1exp(etk)etj=Etanh(Weut1+Hehj):E2R1mwhich maps the vector into a scalar. Then the hidden state utis further processed asEqn. (8) before feeding to the second-level decoder:st+1=W1ct+1+W2rYt+W3ut+b:A.4 S ECOND -LEVEL DECODERAs described in Section 3.2, the number of outputs of the first-level decoder is much fewer than thetarget character sequence. It will be intractable to conditionally pick outputs from the the first-leveldecoder when training in batch manner (at least intractable for Theano (Bastien et al., 2012) and othersymbolic deep learning frameworks to build symbolic expressions). We use a matrix R2RTyTto unfold the outputs [s1;:::;sTy]of the first-level decoder ( Tyis the number of words in the targetsentence and Tis the number of characters). Ris a symbolic matrix in the final loss, it is constructedaccording the delimiters in the target sentences when training (see Section 3.2 for the detailedconstruction, note that Ris a tensor in batch training. ). After unfolding, the input of HGRU becomes[si1;:::;siT], that is[si1;:::;siT] = [s1;:::;sTy]R:According to Eqns.(2) to (7), we can compute the probability of each target character :p(ytjfy1;:::;y t1g;x) =softmax (gt):Finally, we could compute the cross-entroy loss and train with SGD algorithm.B S AMPLE TRANSLATIONSWe show additional sample translations in the following Tables.Table 3: Sample translations of En-Fr.Source This " disturbance " produces an electromagnetic wave ( of light , infrared, ultraviolet etc . 
) , and this wave is nothing other than a photon - andthus one of the " force carrier " bosons .Reference Quand , en effet , une particule ayant une charge électrique accélère ouchange de direction , cela " dérange " le champ électromagnétique en cetendroit précis , un peu comme un caillou lancé dans un étang .DCNMT Lorsque , en fait , une particule ayant une charge électrique accélère ouchange de direction , cela " perturbe " le champ électromagnétique danscet endroit spécifique , plutôt comme un galet jeté dans un étang .Source Since October , a manifesto , signed by palliative care luminaries includ-ing Dr Balfour Mount and Dr Bernard Lapointe , has been circulating todemonstrate their opposition to such an initiative .Reference Depuis le mois d’ octobre , un manifeste , signé de sommités des soinspalliatifs dont le Dr Balfour Mount et le Dr Bernard Lapointe , circulepour témoigner de leur opposition à une telle initiative .DCNMT Depuis octobre , un manifeste , signé par des liminaires de soins palliatifs, dont le Dr Balfour Mount et le Dr Bernard Lapointe , a circulé pourdémontrer leur opposition à une telle initiative .12Under review as a conference paper at ICLR 2017Table 4: Sample translations of En-Cs.Source French troops have left their area of responsibility in Afghanistan (Kapisa and Surobi ) .Reference Francouzské jednotky opustily svou oblast odpov ˇednosti v Afghánistánu( Kapisa a Surobi ) .DCNMT Francouzské jednotky opustily svou oblast odpov ˇednosti v Afghánistánu( Kapisa a Surois ) .Source " All the guests were made to feel important and loved " recalls the topmodel , who started working with him during Haute Couture Week Paris, in 1995 .Reference Všichni pozvaní se díky n ˇemu mohli cítit d ̊ uležití a milovaní , " vzpomínátop modelka , která s ním za ˇcala pracovat v pr ̊ ub ˇehu Pa ˇrížského týdnevrcholné módy v roce 1995 .DCNMT " Všichni hosté byli provedeni , aby se cítili d ̊ uležití a milovaní "pˇripomíná nejvyšší model , který s ním za ˇcal pracovat v pr ̊ ub ˇehu tý-deníku Haute Coutupe v Pa ˇríži v roce 1995 .Source " There are so many private weapons factories now , which do not endurecompetition on the international market and throw weapons from underthe counter to the black market , including in Moscow , " says the expert.Reference " V sou ˇcasnosti vznikají soukromé zbroja ˇrské podniky , které nejsoukonkurenceschopné na mezinárodním trhu , a vy ˇrazují zbran ˇe , kterédodávají na ˇcerný trh v ˇcetnˇe Moskvy , " ˇríká tento odborník .DCNMT " V sou ˇcasnosti existuje tolik soukromých zbraní , které nevydržíhospodá ˇrskou sout ˇež na mezinárodním trhu a hodí zbran ˇe pod pultem kˇcernému trhu , v ˇcetnˇe Moskvy , " ˇríká odborník .Table 5: Sample translations of Cs-En.Source Prezident Karzáí nechce zahrani ˇcní kontroly , zejména ne p ˇri pˇríležitostivoleb plánovaných na duben 2014 .Reference President Karzai does not want any foreign controls , particularly on theoccasion of the elections in April 2014 .DCNMT President Karzai does not want foreign controls , particularly in theopportunity of elections planned on April 2014 .Source Manželský pár m ˇel dv ˇe dˇeti , Prestona a Heidi , a dlouhou dobu žili vkalifornském m ˇestˇe Malibu , kde pobývá mnoho celebrit .Reference The couple had two sons , Preston and Heidi , and lived for a long timein the Californian city Malibu , home to many celebrities .DCNMT The married couple had two children , Preston and Heidi , and long livedin the California city of Malibu , where many celebrities resided .Source Trestný 
ˇcin rouhání je zachován a urážka je nadále zakázána , což bymohlo mít vážné d ̊ usledky pro svobodu vyjad ˇrování , zejména pak protisk .Reference The offence of blasphemy is maintained and insults are now prohibited, which could have serious consequences on freedom of expression ,particularly for the press .DCNMT The criminal action of blasphemy is maintained and insult is still prohib-ited , which could have serious consequences for freedom of expression ,especially for the press .13
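As an illustration of the word encoder of Section 3.1, the following is a minimal numpy sketch of the energy-weighted combination r_word = tanh(Σ_t w_t r_t) with w_t = exp(aff(h_t)). The character RNN and BiRNN that would produce the states r_t and h_t are abstracted away (random arrays stand in for them), and the function and variable names are illustrative rather than taken from the released code.

```python
import numpy as np

def word_representation(r, h, W_w, b_w):
    """Energy-weighted word vector from Section 3.1 (a sketch, not the exact code).

    r : (T, n) array of character-RNN states r_t (one per character).
    h : (T, 2l) array of BiRNN states h_t for the same characters.
    W_w, b_w : parameters of the affine map turning h_t into a scalar energy.
    Returns r_word = tanh(sum_t w_t * r_t) with unnormalized weights w_t = exp(aff(h_t)).
    """
    energies = np.exp(h @ W_w + b_w)              # w_t = exp(aff(h_t)), shape (T,)
    return np.tanh((energies[:, None] * r).sum(axis=0))

# Toy usage with random states for a 6-character word such as "anyone".
rng = np.random.default_rng(0)
T, n, l = 6, 8, 4
r = rng.normal(size=(T, n))
h = rng.normal(size=(T, 2 * l))
W_w, b_w = rng.normal(size=(2 * l,)), 0.0
r_word = word_representation(r, h, W_w, b_w)      # shape (n,)
```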
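The unfolding matrix R of Section 3.2 (and Appendix A.4) can be demonstrated in a few lines of numpy. This is a sketch under the assumption of a 0/1 delimiter mask over the target characters; the names are illustrative and do not come from the released implementation.

```python
import numpy as np

def build_unfold_matrix(is_delimiter):
    """Build the T_y x T matrix R from a 0/1 delimiter mask over target characters.

    Row i of R carries 1s over the character positions belonging to word i, so that
    [s_1, ..., s_Ty] @ R repeats each first-level (word-level) decoder output over
    the characters of the corresponding word.
    """
    T = len(is_delimiter)
    delimiter_positions = [j for j, d in enumerate(is_delimiter) if d == 1]
    R = np.zeros((len(delimiter_positions), T), dtype=np.float32)
    start = 0
    for i, end in enumerate(delimiter_positions):  # word i covers characters start..end
        R[i, start:end + 1] = 1.0
        start = end + 1
    return R

# Toy example mirroring the paper: 5 target characters forming 2 words,
# with delimiters at (0-based) positions 2 and 4.
R = build_unfold_matrix([0, 0, 1, 0, 1])
s = np.array([[1.0, 2.0]])      # 1-dimensional stand-ins for s_1 and s_2
print(R)                        # [[1. 1. 1. 0. 0.]
                                #  [0. 0. 0. 1. 1.]]
print(s @ R)                    # [[1. 1. 1. 2. 2.]]  i.e. [s1, s1, s1, s2, s2]
```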
BJKwHefNl
rJq_YBqxx
ICLR.cc/2017/conference/-/paper261/official/review
{"title": "Well-executed paper with good analysis but little novelty", "rating": "5: Marginally below acceptance threshold", "review": "Update after reading the authors' responses & the paper revision dated Dec 21:\nI have removed the comment \"insufficient comparison to past work\" in the title & update the score from 3 -> 5.\nThe main reason for the score is on novelty. The proposal of HGRU & the use of the R matrix are basically just to achieve the effect of \"whether to continue from character-level states or using word-level states\". It seems that these solutions are specific to symbolic frameworks like Theano (which the authors used) and TensorFlow. This, however, is not a problem for languages like Matlab (which Luong & Manning used) or Torch.\n\n-----\n\nThis is a well-written paper with good analysis in which I especially like Figure 5. However I think there is little novelty in this work. The title is about learning morphology but there is nothing specifically enforced in the model to learn morphemes or subword units. For example, maybe some constraints can be put on the weights in w_i in Figure 1 to detect morpheme boundaries or some additional objective like MDL can be used (though it's not clear how these constraints can be incorporated cleanly). \n\nMoreover, I'm very surprised that litte comparison (only a brief mention) was given to the work of (Luong & Manning, 2016) [1], which trains deep 8-layer word-character models and achieves much better results on English-Czech, e.g., 19.6 BLEU compared to 17.0 BLEU achieved in the paper. I think the HGRU thing is over-complicated in terms of presentation. If I read correctly, what HGRU does is basically either continue the character decoder or reset using word-level states at boundaries, which is what was done in [1]. Luong & Manning (2016) even make it more efficient by not having to decode all target words at the morpheme level & it would be good to know the speed of the model proposed in this ICLR submission. What end up new in this paper are perhaps different analyses on what a character-based model learns & adding an additional RNN layer in the encoder.\n\nOne minor comment: annotate h_t in Figure 1.\n\n[1] Minh-Thang Luong and Christopher D. Manning. 2016. Achieving Open Vocabulary Neural Machine Translation\nwith Hybrid Word-Character Models. ACL. https://arxiv.org/pdf/1604.00788v2.pdf", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Deep Character-Level Neural Machine Translation By Learning Morphology
["Shenjian Zhao", "Zhihua Zhang"]
Neural machine translation aims at building a single large neural network that can be trained to maximize translation performance. The encoder-decoder architecture with an attention mechanism achieves a translation performance comparable to the existing state-of-the-art phrase-based systems. However, the use of large vocabulary becomes the bottleneck in both training and improving the performance. In this paper, we propose a novel architecture which learns morphology by using two recurrent networks and a hierarchical decoder which translates at character level. This gives rise to a deep character-level model consisting of six recurrent networks. Such a deep model has two major advantages. It avoids the large vocabulary issue radically; at the same time, it is more efficient in training than word-based models. Our model obtains a higher BLEU score than the bpe-based model after training for one epoch on En-Fr and En-Cs translation tasks. Further analyses show that our model is able to learn morphology.
["Natural language processing", "Deep learning"]
https://openreview.net/forum?id=rJq_YBqxx
https://openreview.net/pdf?id=rJq_YBqxx
https://openreview.net/forum?id=rJq_YBqxx&noteId=BJKwHefNl
Under review as a conference paper at ICLR 2017DEEPCHARACTER -LEVEL NEURAL MACHINETRANSLATION BYLEARNING MORPHOLOGYShenjian ZhaoDepartment of Computer Science and EngineeringShanghai Jiao Tong UniversityShanghai 200240, Chinasword.york@gmail.comZhihua ZhangSchool of Mathematical SciencesPeking UniversityBeijing 100871, Chinazhzhang@math.pku.edu.cnABSTRACTNeural machine translation aims at building a single large neural network that canbe trained to maximize translation performance. The encoder-decoder architecturewith an attention mechanism achieves a translation performance comparable to theexisting state-of-the-art phrase-based systems. However, the use of large vocabularybecomes the bottleneck in both training and improving the performance. In thispaper, we propose a novel architecture which learns morphology by using tworecurrent networks and a hierarchical decoder which translates at character level.This gives rise to a deep character-level model consisting of six recurrent networks.Such a deep model has two major advantages. It avoids the large vocabulary issueradically; at the same time, it is more efficient in training than word-based models.Our model obtains a higher BLEU score than the bpe-based model after trainingfor one epoch on En-Fr and En-Cs translation tasks. Further analyses show thatour model is able to learn morphology.1 I NTRODUCTIONNeural machine translation (NMT) attempts to build a single large neural network that reads asentence and outputs a translation (Sutskever et al., 2014). Most of the extant neural machinetranslations models belong to a family of word-level encoder-decoders (Sutskever et al., 2014; Choet al., 2014). Recently, Bahdanau et al. (2015) proposed a model with attention mechanism whichautomatically searches the alignments and greatly improves the performance. However, the use of alarge vocabulary seems necessary for the word-level neural machine translation models to improveperformance (Sutskever et al., 2014; Cho et al., 2015).Chung et al. (2016a) listed three reasons behind the wide adoption of word-level modeling: (i) wordis a basic unit of a language, (ii) data sparsity, (iii) vanishing gradient of character-level modeling.Consider that a language itself is an evolving system. So it is impossible to cover all words in thelanguage. The problem of rare words that are out of vocabulary (OOV) is a critical issue which caneffect the performance of neural machine translation. In particular, using larger vocabulary doesimprove performance (Sutskever et al., 2014; Cho et al., 2015). However, the training becomesmuch harder and the vocabulary is often filled with many similar words that share a lexeme but havedifferent morphology.There are many approaches to dealing with the out-of-vocabulary issue. For example, Gulcehreet al. (2016); Luong et al. (2015); Cho et al. (2015) proposed to obtain the alignment information oftarget unknown words, after which simple word dictionary lookup or identity copy can be performedto replace the unknown words in translation. However, these approaches ignore several importantproperties of languages such as monolinguality and crosslinguality as pointed out by Luong and1Under review as a conference paper at ICLR 2017Manning (2016). Thus, Luong and Manning (2016) proposed a hybrid neural machine translationmodel which leverages the power of both words and characters to achieve the goal of open vocabularyneural machine translation.Intuitively, it is elegant to directly model pure characters. 
However, as the length of sequencegrows significantly, character-level translation models have failed to produce competitive resultscompared with word-based models. In addition, they require more memory and computation resource.Especially, it is much difficult to train the attention component. For example, Ling et al. (2015a)proposed a compositional character to word (C2W) model and applied it to machine translation (Linget al., 2015b). They also used a hierarchical decoder which has been explored before in other context(Serban et al., 2015). However, they found it slow and difficult to train the character-level models, andone has to resort to layer-wise training the neural network and applying supervision for the attentioncomponent. In fact, such RNNs often struggle with separating words that have similar morphologiesbut very different meanings.In order to address the issues mentioned earlier, we introduce a novel architecture by exploiting thestructure of words. It is built on two recurrent neural networks: one for learning the representationof preceding characters and another for learning the weight of this representation of the wholeword. Unlike subword-level model based on the byte pair encoding (BPE) algorithm (Sennrich et al.,2016), we learn the subword unit automatically. Compared with CNN word encoder (Kim et al.,2016; Lee et al., 2016), our model is able to generate a meaningful representation of the word. Todecode at character level, we devise a hierarchical decoder which sets the state of the second-levelRNN (character-level decoder) to the output of the first-level RNN (word-level decoder), which willgenerate a character sequence until generating a delimiter. In this way, our model almost keeps thesame encoding length for encoder as word-based models but eliminates the use of a large vocabulary.Furthermore, we are able to efficiently train the deep model which consists of six recurrent networks,achieving higher performance.In summary, we propose a hierarchical architecture (character -> subword -> word -> source sentence-> target word -> target character) to train a deep character-level neural machine translator. We showthat the model achieves a high translation performance which is comparable to the state-of-the-artneural machine translation model on the task of En-Fr, En-Cs and Cs-En translation. The experimentsand analyses further support the statement that our model is able to learn the morphology.2 N EURAL MACHINE TRANSLATIONNeural machine translation is often implemented as an encoder-decoder architecture. The encoderusually uses a recurrent neural network (RNN) or a bidirectional recurrent neural network (BiRNN)(Schuster and Paliwal, 1997) to encode the input sentence x=fx1;:::;x Txginto a sequence ofhidden states h=fh1;:::;hTxg:ht=f1(e(xt);ht1);where e(xt)2Rmis anm-dimensional embedding of xt. The decoder, another RNN, is oftentrained to predict next word ytgiven previous predicted words fy1;:::;y t1gand the context vectorct; that is,p(ytjfy1;:::;y t1g) =g(e(yt1);st;ct);wherest=f2(e(yt1);st1;ct) (1)andgis a nonlinear and potentially multi-layered function that computes the probability of yt. Thecontext ctdepends on the sequence of fh1;:::;hTxg. Sutskever et al. (2014) encoded all informationin the source sentence into a fixed-length vector, i.e., ct=hTx. Bahdanau et al. 
(2015) computed ctby the alignment model which handles the bottleneck that the former approach meets.The whole model is jointly trained by maximizing the conditional log-probability of the correcttranslation given a source sentence with respect to the parameters of the model := argmaxTyXt=1logp(ytjfy1;:::;y t1g;x;):For the detailed description of the implementation, we refer the reader to the papers (Sutskever et al.,2014; Bahdanau et al., 2015).2Under review as a conference paper at ICLR 20173 D EEPCHARACTER -LEVEL NEURAL MACHINE TRANSLATIONWe consider two problems in the word-level neural machine translation models. First, how canwe map a word to a vector? It is usually done by a lookup table (embedding matrix) where thesize of vocabulary is limited. Second, how do we map a vector to a word when predicting? It isusually done via a softmax function. However, the large vocabulary will make the softmax intractablecomputationally.We correspondingly devise two novel architectures, a word encoder which utilizes the morphologyand a hierarchical decoder which decodes at character level. Accordingly, we propose a deepcharacter-level neural machine translation model (DCNMT).3.1 L EARNING MORPHOLOGY IN A WORD ENCODERMany words can be subdivided into smaller meaningful units called morphemes, such as “any-one”,“any-thing” and “every-one.” At the basic level, words are made of morphemes which are recognizedas grammatically significant or meaningful. Different combinations of morphemes lead to differentmeanings. Based on these facts, we introduce a word encoder to learn the morphemes and the rulesof how they are combined. Even if the word encoder had never seen “everything” before, with aunderstanding of English morphology, the word encoder could gather the meaning easily. Thuslearning morphology in a word encoder might speedup training.Figure 1: The representation of theword ’anyone.’The word encoder is based on two recurrent neural networks,as illustrated in Figure 1. We compute the representation of theword ‘anyone’ asranyone = tanh(6Xt=1wtrt);where rtis an RNN hidden state at time t, computed byrt=f(e(xt);rt1):Eachrtcontains information about the preceding characters.The weightwtof each representation rtis computed bywt= exp( aff(ht));where htis another RNN hidden state at time tandaff()isan affine function which maps htto a scalar. Here, we use aBiRNN to compute htas shown in Figure 1. Instead of nor-malizing it byPtexp( aff(ht)), we use an activation functiontanh as it performs best in experiments.We can regard the weight wias the energy that determines whether riis a representation of amorpheme and how it contributes to the representation of the word. Compared with an embeddinglookup table, the decoupled RNNs learn the representation of morphemes and the rules of how theyare combined respectively, which may be viewed as learning distributed representations of wordsexplicitly. For example, we are able to translate “convenienter” correctly which validates our idea.After obtaining the representation of the word, we could encode the sentence using a bidirectionalRNN as RNNsearch (Bahdanau et al., 2015). The detailed architecture is shown in Figure 2.3.2 H IERARCHICAL DECODERTo decode at the character level, we introduce a hierarchical decoder. The first-level decoder is similarto RNNsearch which contains the information of the target word. Specifically, stin Eqn. (1)containsthe information of target word at time t. 
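To make the Section 3.1 word encoder concrete, here is a minimal NumPy sketch of the energy-weighted representation r_word = tanh(sum_t w_t r_t): one RNN produces the prefix representations r_t, and a BiRNN produces the states h_t whose affine projection gives the energies w_t = exp(aff(h_t)). All dimensions, the plain tanh RNN cells and the parameter names are illustrative assumptions; the paper's released Theano/Blocks implementation is not reproduced here.

```python
import numpy as np

rng = np.random.RandomState(0)
m, n, l = 8, 16, 10                      # char embedding, r_t and h_t sizes (toy values)
chars = "abcdefghijklmnopqrstuvwxyz"
emb = 0.1 * rng.randn(len(chars), m)     # character embedding table e(x_t)
idx = {c: i for i, c in enumerate(chars)}

def rnn_params(out_dim, in_dim):
    return (0.1 * rng.randn(out_dim, in_dim),    # W_x
            0.1 * rng.randn(out_dim, out_dim),   # W_h
            np.zeros(out_dim))                   # bias

def rnn_step(x, h_prev, Wx, Wh, b):
    # plain tanh transition standing in for f(e(x_t), h_{t-1})
    return np.tanh(Wx @ x + Wh @ h_prev + b)

p_r = rnn_params(n, m)                               # RNN producing r_t
p_fwd, p_bwd = rnn_params(l, m), rnn_params(l, m)    # BiRNN producing h_t
Ww, bw = 0.1 * rng.randn(2 * l), 0.0                 # affine map h_t -> scalar energy

def encode_word(word):
    xs = [emb[idx[c]] for c in word]
    r, rs = np.zeros(n), []
    for x in xs:                          # r_t summarises the preceding characters
        r = rnn_step(x, r, *p_r)
        rs.append(r)
    hf, hfs = np.zeros(l), []
    for x in xs:                          # forward half of the BiRNN
        hf = rnn_step(x, hf, *p_fwd)
        hfs.append(hf)
    hb, hbs = np.zeros(l), []
    for x in reversed(xs):                # backward half of the BiRNN
        hb = rnn_step(x, hb, *p_bwd)
        hbs.append(hb)
    hbs.reverse()
    # energy w_t = exp(aff(h_t)); the weighted sum is squashed by tanh, not normalised
    ws = [np.exp(Ww @ np.concatenate([f, b]) + bw) for f, b in zip(hfs, hbs)]
    return np.tanh(sum(w * r_t for w, r_t in zip(ws, rs)))

print(encode_word("anyone").shape)        # (16,): one vector per source word
```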
Instead of using a multi-layer network following a softmaxfunction to compute the probability of each target word using st, we employ a second-level decoderwhich generates a character sequence based on st.We proposed a variant of the gate recurrent unit (GRU) (Cho et al., 2014; Chung et al., 2014) that usedin the second-level decoder and we denote it as HGRU (It is possible to use the LSTM (Hochreiter3Under review as a conference paper at ICLR 2017and Schmidhuber, 1997) units instead of the GRU described here). HGRU has a settable state andgenerates character sequence based on the given state until generating a delimiter. In our model, thestate is initialized by the output of the first-level decoder. Once HGRU generates a delimiter, it willset the state to the next output of the first-level decoder. Given the previous output character sequencefy0;y1;:::;y t1gwherey0is a token representing the start of sentence, and the auxiliary sequencefa0;a1;:::;a t1gwhich only contains 0 and 1 to indicate whether yiis a delimiter ( a0is set to 1),HGRU updates the state as follows:gt1= (1at1)gt1+at1sit; (2)qjt=([Wqe(yt1)]j+ [Uqgt1]j); (3)zjt=([Wze(yt1)]j+ [Uzgt1]j); (4)~gjt=([We(yt1)]j+ [U(qtgt1)]j); (5)gjt=zjtgjt1+ (1zjt)~gjt; (6)where sitis the output of the first-level decoder which calculated as Eqn. (8). We can compute theprobability of each target character ytbased on gtwith a softmax function:p(ytjfy1;:::;y t1g;x) =softmax (gt): (7)The current problem is that the number of outputs of the first-level decoder is much fewer than thetarget character sequence. It will be intractable to conditionally pick outputs from the the first-leveldecoder when training in batch manner (at least intractable for Theano (Bastien et al., 2012) andother symbolic deep learning frameworks to build symbolic expressions). Luong and Manning (2016)uses two forward passes (one for word-level and another for character-level) in batch training whichis less efficient. However, in our model, we use a matrix to unfold the outputs of the first-leveldecoder, which makes the batch training process more efficient. It is a TyTmatrix R, whereTyisthe number of delimiter (number of words) in the target character sequence and Tis the length ofthe target character sequence. R[i;j1+ 1] toR[i;j2]are set as 1 if j1is the index of the (i1)-thdelimiter and j2is the index of the i-th delimiter in the target character sequence. The index of the0-th delimiter is set as 0. For example, when the target output is “ go!” and the output of thefirst-level decoder is [s1;s2], the unfolding step will be:[s1;s2]1 1 1 0 00 0 0 1 1= [s1;s1;s1;s2;s2];thereforefsi1;si2;si3;si4;si5gis correspondingly set to fs1;s1;s1;s2;s2gin HGRU iterations.After this procedure, we can compute the probability of each target character by the second-leveldecoder according to Eqns. (2) to (7).3.3 M ODEL ARCHITECTURESThere are totally six recurrent neural networks in our model, which can be divided into four layers asshown in Figure 2. Figure 2 illustrates the training procedure of a basic deep character-level neuralmachine translation. It is possible to use multi-layer recurrent neural networks to make the modeldeeper. The first layer is a source word encoder which contains two RNNs as shown in Figure 1. Thesecond layer is a bidirectional RNN sentence encoder which is identical to that of (Bahdanau et al.,2015). The third layer is the first-level decoder. It takes the representation of previous target wordas a feedback, which is produced by the target word encoder in our model. 
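The unfolding matrix R of Section 3.2 is easy to construct from the delimiter positions of the target character sequence. The sketch below assumes the space character plays the role of the word delimiter (the concrete delimiter symbol is an implementation detail not fixed by the text) and reproduces the paper's [s1, s1, s1, s2, s2] example on a five-character toy target.

```python
import numpy as np

def build_unfold_matrix(target_chars, delimiter=" "):
    """Build the T_y x T matrix R of Section 3.2: row i is 1 over the characters
    of the i-th target word (everything after the (i-1)-th delimiter up to and
    including the i-th delimiter), so that [s_1, ..., s_Ty] R repeats each
    word-level state once per character of its word."""
    T = len(target_chars)
    ends = [j for j, c in enumerate(target_chars) if c == delimiter]
    if not ends or ends[-1] != T - 1:
        ends.append(T - 1)                 # treat the final character as a delimiter
    R = np.zeros((len(ends), T))
    start = 0
    for i, end in enumerate(ends):
        R[i, start:end + 1] = 1.0
        start = end + 1
    return R

# The paper's example: five target characters, two words, first-level outputs [s1, s2].
R = build_unfold_matrix(list("go ! "))
print(R)                    # [[1. 1. 1. 0. 0.]
                            #  [0. 0. 0. 1. 1.]]
S = np.array([[1.0, 2.0]])  # pretend s1 = 1 and s2 = 2 with a 1-dimensional state
print(S @ R)                # [[1. 1. 1. 2. 2.]] -> [s1, s1, s1, s2, s2]
```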
As the feedback is lessimportant, we use an ordinary RNN to encode the target word. The feedback rYt1then combines theprevious hidden state ut1and the context ctfrom the sentence encoder to generate the vector st:st=W1ct+W2rYt1+W3ut1+b: (8)With the state of HGRU in the second-level decoder setting to stand the information of previousgenerated character, the second-level decoder generates the next character until generating an end ofsentence token (denoted as </s> in Figure 2). With such a hierarchical architecture, we can train ourcharacter-level neural translation model perfectly well in an end-to-end fashion.4Under review as a conference paper at ICLR 2017Figure 2: Deep character-level neural machine translation. The HGRUs with red border indicate thatthe state should be set to the output of the first-level decoder.3.4 G ENERATION PROCEDUREWe first encode the source sequence as in the training procedure, then we generate the target sequencecharacter by character based on the output stof the first-level decoder. Once we generate a delimiter,we should compute next vector st+1according to Eqn. (8)by combining feedback rYtfrom the targetword encoder, the context ct+1from the sentence encoder and the hidden state ut. The generationprocedure will terminate once an end of sentence (EOS) token is produced.4 E XPERIMENTSWe implement the model using Theano (Bergstra et al., 2010; Bastien et al., 2012) and Blocks (vanMerriënboer et al., 2015), the source code and the trained models are available at github1. We trainour model on a single GTX Titan X with 12GB RAM. First we evaluate our model on English-to-French translation task where the languages are morphologically poor. For fair comparison, weuse the same dataset as in RNNsearch which is the bilingual, parallel corpora provided by ACLWMT’14. In order to show the strengths of our model, we conduct on the English-to-Czech andCzech-to-English translation tasks where Czech is a morphologically rich language. We use the samedataset as (Chung et al., 2016a; Lee et al., 2016) which is provided by ACL WMT’152.4.1 D ATASETWe use the parallel corpora for two language pairs from WMT: En-Cs and En-Fr. They consist of15.8M and 12.1M sentence pairs, respectively. In terms of preprocessing, we only apply the usualtokenization. We choose a list of 120 most frequent characters for each language which coveres nearly100% of the training data. Those characters not included in the list are mapped to a special token1https://github.com/SwordYork/DCNMT2http://www.statmt.org/wmt15/translation-task.html5Under review as a conference paper at ICLR 2017(<unk>). We use newstest2013 (Dev) as the development set and evaluate the models on newstest2015(Test). We do not use any monolingual corpus.4.2 T RAINING DETAILSWe follow (Bahdanau et al., 2015) to use similar hyperparameters. The bidirectional RNN sentenceencoder and the hierarchical decoder both consists of two-layer RNNs, each has 1024 hidden units;We choose 120 most frequent characters for DCNMT and the character embedding dimensionality is64. The source word is encoded into a 600-dimensional vector. The other GRUs in our model have512 hidden units.We use the ADAM optimizer (Kingma and Ba, 2015) with minibatch of 56 sentences to train eachmodel (for En-Fr we use a minibatch of 72 examples). 
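Putting Eqn. (8) and the generation procedure of Section 3.4 together, decoding alternates between one word-level step and a run of character-level steps that ends at a delimiter. The following skeleton uses toy stand-ins for the trained sub-networks (attention, word-level decoder, HGRU and target word encoder), so only the control flow and the Eqn. (8) combination reflect the paper; everything else is an assumption for illustration.

```python
import numpy as np

rng = np.random.RandomState(1)
d = 6                                               # toy hidden size
W1, W2, W3 = (0.1 * rng.randn(d, d) for _ in range(3))
b = np.zeros(d)

# Toy stand-ins that only mimic the interfaces of the trained sub-networks.
scripted = list("go ! ") + ["</s>"]                 # characters the toy decoder emits

def attention_context(u_prev):
    return 0.5 * np.ones(d)                         # real model: weighted encoder states

def word_decoder_step(u_prev, r_prev, c):
    return np.tanh(u_prev + r_prev + c)             # real model: GRU with attention

def char_decoder_step(g, step):
    return np.tanh(g + 0.01), scripted[step]        # real model: HGRU + softmax

def encode_target_word(word_chars):
    return 0.1 * np.ones(d)                         # real model: ordinary RNN feedback

def generate(max_chars=50):
    out, u, r_prev, step = [], np.zeros(d), np.zeros(d), 0
    while step < max_chars:
        c = attention_context(u)
        u = word_decoder_step(u, r_prev, c)
        s = W1 @ c + W2 @ r_prev + W3 @ u + b       # Eqn. (8): next word-level output
        g, word = s, []                             # HGRU state is reset to s_t
        while True:                                 # emit characters until a delimiter
            g, ch = char_decoder_step(g, step)
            step += 1
            if ch == "</s>":                        # end of sentence: stop everything
                return "".join(out)
            out.append(ch)
            word.append(ch)
            if ch == " ":                           # word delimiter: hand control back
                break
        r_prev = encode_target_word(word)           # feedback for the next word step
    return "".join(out)

print(generate())                                   # -> "go ! "
```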
The learning rate is first set to 103and thenannealed to 104.We use a beam search to find a translation that approximately maximizes the conditional log-probability which is a commonly used approach in neural machine translation (Sutskever et al., 2014;Bahdanau et al., 2015). In our DCNMT model, it is reasonable to search directly on character level togenerate a translation.5 R ESULT AND ANALYSISWe conduct comparison of quantitative results on the En-Fr, En-Cs and Cs-En translation tasks inSection 5.1. Apart from measuring translation quality, we analyze the efficiency of our model andeffects of character-level modeling in more details.5.1 Q UANTITATIVE RESULTSWe illustrate the efficiency of the deep character-level neural machine translation by comparing withthe bpe-based subword model (Sennrich et al., 2016) and other character-level models. We measurethe performance by BLEU score (Papineni et al., 2002).Table 1: BLEU scores of different models on three language pairs.Model Size Src Trgt Length Epochs Days Dev TestEn-Frbpe2bpe(1)- bpe bpe 50 50 - - 26.91 29.70C2W(2)54M char char 300 3002:827 25.89 27.04CNMT52M char char 300 3003:821 28.19 29.38DCNMT54M char char 300 3001727.02 28.132:819 29.31 30.56En-Csbpe2bpe(1)- bpe bpe 50 50 - - 15.90 13.84bpe2char(3)- bpe char 50 500 - - - 16.86char(5)- char char 600 600 >490 - 17.5hybrid(5)250M hybrid hybrid 50 50 >421 - 19.6DCNMT54M char char 450 4501515.50 14.872:915 17.89 16.96Cs-Enbpe2bpe(1)- bpe bpe 50 50 - - 21.24 20.32bpe2char(3)76M bpe char 50 5006:114 23.27 22.42char2char(4)69M char char 450 4507:930 23.38 22.46DCNMT54M char char 450 4501520.50 19.754:622 23.24 22.48In Table 1, “Length” indicates the maximum sentence length in training (based on the number ofwords or characters), “Size” is the total number of parameters in the models. We report the BLEU6Under review as a conference paper at ICLR 2017scores of DCNMT when trained after one epoch in the above line and the final scores in the followingline. The results of other models are taken from (1)Firat et al. (2016), (3)Chung et al. (2016a), (4)Leeet al. (2016) and (5)Luong and Manning (2016) respectively, except (2) is trained according to Linget al. (2015b). The only difference between CNMT and DCNMT is CNMT uses an ordinary RNNto encode source words (takes the last hidden state). The training time for (3) and (4) is calculatedbased on the training speed in (Lee et al., 2016). For each test set, the best scores among the modelsper language pair are bold-faced. Obviously, character-level models are better than the subword-levelmodels, and our model is comparable to the start-of-the-art character-level models. Note that, thepurely character model of (5)(Luong and Manning, 2016) took 3 months to train and yielded +0:5BLEU points compared to our result. 
We have analyzed the efficiency of our decoder in Section 3.2. Besides, our model is the simplest and the smallest one in terms of model size.

5.2 LEARNING MORPHOLOGY

Figure 3: Two-dimensional PCA projection of the 600-dimensional representations of the words (notable/notability, solvable/solvability, reliable/reliability, capable/capability, flexible/flexibility, possible/possibility); panel (a) shows the ordinary RNN word encoder and panel (b) shows our word encoder.

In this section, we investigate whether our model can learn morphology. First we want to figure out the difference between an ordinary RNN word encoder and our word encoder. We choose some words with similar meanings but different morphology, as shown in Figure 3. We find in Figure 3(a) that the words ending with “ability”, which are encoded by the ordinary RNN word encoder, are jammed together. In contrast, the representations produced by our encoder are more reasonable, and the words with similar meanings are closer.

Figure 4: The learnt morphemes for the “any*” and “every*” words (anybody, anyway, anyone, anything, anywhere, everybody, everyway, everyone, everything, everywhere); panel (a) shows the energy of each character and panel (b) shows a two-dimensional PCA projection.

Then we analyze how our word encoder learns morphemes and the rules by which they are combined. We demonstrate the encoding details on “any*” and “every*”. Figure 4(a) shows the energy of each character, more precisely, the energy of the preceding characters. We can see that the last character of a morpheme yields a relatively large energy (weight), like “any” and “every” in these words. Moreover, even when the preceding characters are different, the encoder produces a similar weight for the same morpheme, like “way” in “anyway” and “everyway”. The two-dimensional PCA projection in Figure 4(b) further validates our idea. The word encoder may be able to guess the meaning of “everything” even if it had never seen “everything” before, thus speeding up learning. More interestingly, we find that not only the ending letter has high energy, but the beginning letter is also important. This matches the behavior of human perception (White et al., 2008).

Figure 5: Subword-level boundaries detected by our word encoder, shown as energy peaks over the characters of three Penn Treebank sentences (e.g. “consumers may want to move their telephones a little closer to the tv set ...”).

Moreover, we apply our trained word encoder to Penn Treebank Line 1. Unlike Chung et al. (2016b), we are able to detect the boundaries of the subword units. As shown in Figure 5, “consumers”, “monday”, “football” and “greatest” are segmented into “consum-er-s”, “mon-day”, “foot-ball” and “great-est” respectively. Since there are no explicit delimiters, it may be more difficult to detect the subword units.

5.3 BENEFITING FROM LEARNING MORPHOLOGY

As analyzed in Section 5.2, learning morphology could speed up learning.
This has also been shownin Table 1 (En-Fr and En-Cs task) from which we see that when we train our model just for oneepoch, the obtained result even outperforms the final result with bpe baseline.Another advantage of our model is the ability to translate the misspelled words or the nonce words.The character-level model has a much better chance recovering the original word or sentence. InTable 2, we list some examples where the source sentences are taken from newstest2013 but wechange some words to misspelled words or nonce words. We also list the translations from Googletranslate3and online demo of neural machine translation by LISA.Table 2: Sample translations.(a) Misspelled wordsSource For the time being howeve their research is unconclusive .Reference Leurs recherches ne sont toutefois pas concluantes pour l’instant.Google translate Pour le moment, leurs recherches ne sont pas concluantes .LISA Pour le moment UNK leur recherche est UNK .DCNMT Pour le moment, cependant , leur recherche n’est pas concluante .(b) Nonce words (morphological change)Source Then we will be able to supplement the real world with virtual objects ina much convenienter form .Reference Ainsi , nous pourrons compléter le monde réel par des objets virtuelsdans une forme plus pratique .Google translate Ensuite, nous serons en mesure de compléter le monde réel avec desobjets virtuels dans une forme beaucoup plus pratique .LISA Ensuite, nous serons en mesure de compléter le vrai monde avec desobjets virtuels sous une forme bien UNK .DCNMT Ensuite, nous serons en mesure de compléter le monde réel avec desobjets virtuels dans une forme beaucoup plus pratique .As listed in Table 2(a), DCNMT is able to translate out the misspelled words correctly. For aword-based translator, it is never possible because the misspelled words are mapped into <unk>3The translations by Google translate were made on Nov 4, 2016.8Under review as a conference paper at ICLR 2017token before translating. Thus, it will produce an <unk> token or just take the word from sourcesentence (Gulcehre et al., 2016; Luong et al., 2015). More interestingly, DCNMT could translate“convenienter” correctly as shown in Table 2(b). By concatenating “convenient” and “er”, we get thecomparative adjective form of “convenient” which never appears in the training set; however, ourmodel guessed it correctly based on the morphemes and the rules.6 C ONCLUSIONIn this paper we have proposed an hierarchical architecture to train the deep character-level neuralmachine translation model by introducing a novel word encoder and a multi-leveled decoder. We havedemonstrated the efficiency of the training process and the effectiveness of the model in comparisonwith the word-level and other character-level models. The BLEU score implies that our deep character-level neural machine translation model likely outperforms the word-level models and is competitivewith the state-of-the-art character-based models. It is possible to further improve performance byusing deeper recurrent networks (Wu et al., 2016), training for more epochs and training with longersentence pairs.As a result of the character-level modeling, we have solved the out-of-vocabulary (OOV) issue thatword-level models suffer from, and we have obtained a new functionality to translate the misspelled orthe nonce words. More importantly, the deep character-level is able to learn the similar embedding ofthe words with similar meanings like the word-level models. 
Finally, it would be potentially possiblethat the idea behind our approach could be applied to many other tasks such as speech recognitionand text summarization.REFERENCESIlya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks.InAdvances in Neural Information Processing Systems , pages 3104–3112, 2014.Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, HolgerSchwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder forstatistical machine translation. Proceedings of the 2014 Conference on Empirical Methods inNatural Language Processing , 2014.Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointlylearning to align and translate. International Conference on Learning Representation , 2015.Sébastien Jean Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On using very largetarget vocabulary for neural machine translation. Proceedings of the 53rd Annual Meeting of theAssociation for Computational Linguistics , 2015.Junyoung Chung, Kyunghyun Cho, and Yoshua Bengio. A character-level decoder without explicitsegmentation for neural machine translation. Proceedings of the 54th Annual Meeting of theAssociation for Computational Linguistics , 2016a.Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. Pointing theunknown words. Proceedings of the 54th Annual Meeting of the Association for ComputationalLinguistics , 2016.Minh-Thang Luong, Ilya Sutskever, Quoc V Le, Oriol Vinyals, and Wojciech Zaremba. Addressingthe rare word problem in neural machine translation. Proceedings of the 53rd Annual Meeting ofthe Association for Computational Linguistics , 2015.Minh-Thang Luong and Christopher D Manning. Achieving open vocabulary neural machinetranslation with hybrid word-character models. Proceedings of the 54th Annual Meeting of theAssociation for Computational Linguistics , 2016.Wang Ling, Tiago Luís, Luís Marujo, Ramón Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan WBlack, and Isabel Trancoso. Finding function in form: Compositional character models for openvocabulary word representation. Empirical Methods in Natural Language Processing , 2015a.9Under review as a conference paper at ICLR 2017Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W Black. Character-based neural machinetranslation. arXiv preprint arXiv:1511.04586 , 2015b.Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. Hierar-chical neural network generative models for movie dialogues. arXiv preprint arXiv:1507.04808 ,2015.Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words withsubword units. Proceedings of the 54th Annual Meeting of the Association for ComputationalLinguistics , 2016.Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. Character-aware neural languagemodels. Association for the Advancement of Artificial Intelligence , 2016.Jason Lee, Kyunghyun Cho, and Thomas Hofmann. Fully character-level neural machine translationwithout explicit segmentation. arXiv preprint arXiv:1610.03017 , 2016.Mike Schuster and Kuldip K Paliwal. Bidirectional recurrent neural networks. Signal Processing,IEEE Transactions on , 45(11):2673–2681, 1997.Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation ofgated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 , 2014.Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. 
Neural computation , 9(8):1735–1780, 1997.Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, ArnaudBergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements.Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, GuillaumeDesjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPUmath expression compiler. In Proceedings of the Python for Scientific Computing Conference(SciPy) , June 2010. Oral Presentation.Bart van Merriënboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley,Jan Chorowski, and Yoshua Bengio. Blocks and fuel: Frameworks for deep learning. arXivpreprint arXiv:1506.00619 , 2015.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. InternationalConference on Learning Representation , 2015.Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automaticevaluation of machine translation. pages 311–318. Association for Computational Linguistics,2002.Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. Multi-way, multilingual neural machine translationwith a shared attention mechanism. In Proceedings of the 2016 Conference of the North AmericanChapter of the Association for Computational Linguistics: Human Language Technologies. , 2016.Sarah J White, Rebecca L Johnson, Simon P Liversedge, and Keith Rayner. Eye movements whenreading transposed text: the importance of word-beginning letters. Journal of ExperimentalPsychology: Human Perception and Performance , 34(5):1261, 2008.Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks.arXiv preprint arXiv:1609.01704 , 2016b.Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey,Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google’s neural machine translation sys-tem: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 ,2016.10Under review as a conference paper at ICLR 2017A D ETAILED DESCRIPTION OF THE MODELHere we describe the implementation using Theano, it should be applicable to other symbolic deeplearning frameworks. We use fto denote the transition of the recurrent network.A.1 S OURCE WORD ENCODERAs illustrated in Section 3.1, the word encoder is based on two recurrent neural networks. We computethe representation of the word ‘anyone’ asranyone = tanh(6Xt=1wtrt);where rt2Rnis an RNN hidden state at time t, computed byrt=f(e(xt);rt1):Eachrtcontains information about the preceding characters. The weight wtof each representationrtis computed bywt= exp( Wwht+bw);where Ww2R12lmaps the vector ht2R2lto a scalar and htis the state of the BiRNN at time t:ht=" !ht ht#: (9) !ht2Rlis the forward state of the BiRNN which is computed by !ht=f(e(xt); !ht1): (10)The backward state ht2Rlis computed similarly, however in a reverse order.A.2 S OURCE SENTENCE ENCODERAfter encoding the words by the source word encoder, we feed the representations to thesource sentence encoder. For example, the source “Hello world </s>” is encoded into a vector[rHello;rworld;r</s>], then the BiRNN sentence encoder encodes this vector into [v1;v2;v3]. The com-putation is the same as Eqn. (9)and Eqn. (10), however the input now changes to the representationof the words.A.3 F IRST-LEVEL DECODERThe first-level decoder is similar to Bahdanau et al. 
(2015) which utilizes the attention mechanism.Given the context vector ctfrom encoder, the hidden state ut2Rmof the GRU is computed byut= (1zt)ut1+zt~ut;where~ut=tanh(WrYt1+U[qtut1] +Cct)zt=(WzrYt1+Uzut1+Czct)qt=(WqrYt1+Uqut1+Cqct):rYt1is the representation of the target word which is produced by an ordinary RNN (take the laststate). The context vector ctis computed by the attention mechanism at each step:ct=TxXj=1tjvj;11Under review as a conference paper at ICLR 2017wheretj=exp(etj)PTxk=1exp(etk)etj=Etanh(Weut1+Hehj):E2R1mwhich maps the vector into a scalar. Then the hidden state utis further processed asEqn. (8) before feeding to the second-level decoder:st+1=W1ct+1+W2rYt+W3ut+b:A.4 S ECOND -LEVEL DECODERAs described in Section 3.2, the number of outputs of the first-level decoder is much fewer than thetarget character sequence. It will be intractable to conditionally pick outputs from the the first-leveldecoder when training in batch manner (at least intractable for Theano (Bastien et al., 2012) and othersymbolic deep learning frameworks to build symbolic expressions). We use a matrix R2RTyTto unfold the outputs [s1;:::;sTy]of the first-level decoder ( Tyis the number of words in the targetsentence and Tis the number of characters). Ris a symbolic matrix in the final loss, it is constructedaccording the delimiters in the target sentences when training (see Section 3.2 for the detailedconstruction, note that Ris a tensor in batch training. ). After unfolding, the input of HGRU becomes[si1;:::;siT], that is[si1;:::;siT] = [s1;:::;sTy]R:According to Eqns.(2) to (7), we can compute the probability of each target character :p(ytjfy1;:::;y t1g;x) =softmax (gt):Finally, we could compute the cross-entroy loss and train with SGD algorithm.B S AMPLE TRANSLATIONSWe show additional sample translations in the following Tables.Table 3: Sample translations of En-Fr.Source This " disturbance " produces an electromagnetic wave ( of light , infrared, ultraviolet etc . 
) , and this wave is nothing other than a photon - andthus one of the " force carrier " bosons .Reference Quand , en effet , une particule ayant une charge électrique accélère ouchange de direction , cela " dérange " le champ électromagnétique en cetendroit précis , un peu comme un caillou lancé dans un étang .DCNMT Lorsque , en fait , une particule ayant une charge électrique accélère ouchange de direction , cela " perturbe " le champ électromagnétique danscet endroit spécifique , plutôt comme un galet jeté dans un étang .Source Since October , a manifesto , signed by palliative care luminaries includ-ing Dr Balfour Mount and Dr Bernard Lapointe , has been circulating todemonstrate their opposition to such an initiative .Reference Depuis le mois d’ octobre , un manifeste , signé de sommités des soinspalliatifs dont le Dr Balfour Mount et le Dr Bernard Lapointe , circulepour témoigner de leur opposition à une telle initiative .DCNMT Depuis octobre , un manifeste , signé par des liminaires de soins palliatifs, dont le Dr Balfour Mount et le Dr Bernard Lapointe , a circulé pourdémontrer leur opposition à une telle initiative .12Under review as a conference paper at ICLR 2017Table 4: Sample translations of En-Cs.Source French troops have left their area of responsibility in Afghanistan (Kapisa and Surobi ) .Reference Francouzské jednotky opustily svou oblast odpov ˇednosti v Afghánistánu( Kapisa a Surobi ) .DCNMT Francouzské jednotky opustily svou oblast odpov ˇednosti v Afghánistánu( Kapisa a Surois ) .Source " All the guests were made to feel important and loved " recalls the topmodel , who started working with him during Haute Couture Week Paris, in 1995 .Reference Všichni pozvaní se díky n ˇemu mohli cítit d ̊ uležití a milovaní , " vzpomínátop modelka , která s ním za ˇcala pracovat v pr ̊ ub ˇehu Pa ˇrížského týdnevrcholné módy v roce 1995 .DCNMT " Všichni hosté byli provedeni , aby se cítili d ̊ uležití a milovaní "pˇripomíná nejvyšší model , který s ním za ˇcal pracovat v pr ̊ ub ˇehu tý-deníku Haute Coutupe v Pa ˇríži v roce 1995 .Source " There are so many private weapons factories now , which do not endurecompetition on the international market and throw weapons from underthe counter to the black market , including in Moscow , " says the expert.Reference " V sou ˇcasnosti vznikají soukromé zbroja ˇrské podniky , které nejsoukonkurenceschopné na mezinárodním trhu , a vy ˇrazují zbran ˇe , kterédodávají na ˇcerný trh v ˇcetnˇe Moskvy , " ˇríká tento odborník .DCNMT " V sou ˇcasnosti existuje tolik soukromých zbraní , které nevydržíhospodá ˇrskou sout ˇež na mezinárodním trhu a hodí zbran ˇe pod pultem kˇcernému trhu , v ˇcetnˇe Moskvy , " ˇríká odborník .Table 5: Sample translations of Cs-En.Source Prezident Karzáí nechce zahrani ˇcní kontroly , zejména ne p ˇri pˇríležitostivoleb plánovaných na duben 2014 .Reference President Karzai does not want any foreign controls , particularly on theoccasion of the elections in April 2014 .DCNMT President Karzai does not want foreign controls , particularly in theopportunity of elections planned on April 2014 .Source Manželský pár m ˇel dv ˇe dˇeti , Prestona a Heidi , a dlouhou dobu žili vkalifornském m ˇestˇe Malibu , kde pobývá mnoho celebrit .Reference The couple had two sons , Preston and Heidi , and lived for a long timein the Californian city Malibu , home to many celebrities .DCNMT The married couple had two children , Preston and Heidi , and long livedin the California city of Malibu , where many celebrities resided .Source Trestný 
ˇcin rouhání je zachován a urážka je nadále zakázána , což bymohlo mít vážné d ̊ usledky pro svobodu vyjad ˇrování , zejména pak protisk .Reference The offence of blasphemy is maintained and insults are now prohibited, which could have serious consequences on freedom of expression ,particularly for the press .DCNMT The criminal action of blasphemy is maintained and insult is still prohib-ited , which could have serious consequences for freedom of expression ,especially for the press .13
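Appendix A.3 above specifies the additive attention used by the first-level decoder. A minimal NumPy sketch follows, with toy dimensions and with the energies computed directly on the sentence-encoder states v_j (an assumption about the appendix's notation); it is illustrative only, not the released code.

```python
import numpy as np

rng = np.random.RandomState(2)
m, enc = 8, 10                            # decoder and encoder state sizes (toy values)
We, He = 0.1 * rng.randn(m, m), 0.1 * rng.randn(m, enc)
E = 0.1 * rng.randn(m)                    # maps the tanh features to a scalar energy

def attention_context(u_prev, V):
    """c_t = sum_j alpha_tj v_j with e_tj = E tanh(W_e u_{t-1} + H_e v_j)."""
    e = np.array([E @ np.tanh(We @ u_prev + He @ v) for v in V])
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()                  # softmax over source positions
    return alpha @ V, alpha

V = rng.randn(7, enc)                     # seven sentence-encoder states v_j
c, alpha = attention_context(np.zeros(m), V)
print(c.shape, round(alpha.sum(), 6))     # (10,) 1.0
```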
H1aZfRINx
rJq_YBqxx
ICLR.cc/2017/conference/-/paper261/official/review
{"title": "A well written paper", "rating": "6: Marginally above acceptance threshold", "review": "\n* Summary: This paper proposes a neural machine translation model that translates the source and the target texts in an end to end manner from characters to characters. The model can learn morphology in the encoder and in the decoder the authors use a hierarchical decoder. Authors provide very compelling results on various bilingual corpora for different language pairs. The paper is well-written, the results are competitive compared to other baselines in the literature.\n\n\n* Review:\n - I think the paper is very well written, I like the analysis presented in this paper. It is clean and precise. \n - The idea of using hierarchical decoders have been explored before, e.g. [1]. Can you cite those papers?\n - This paper is mainly an application paper and it is mainly the application of several existing components on the character-level NMT tasks. In this sense, it is good that authors made their codes available online. However, the contributions from the general ML point of view is still limited.\n \n* Some Requests:\n -Can you add the size of the models to the Table 1? \n- Can you add some of the failure cases of your model, where the model failed to translate correctly?\n\n* An Overview of the Review:\n\nPros:\n - The paper is well written\n - Extensive analysis of the model on various language pairs\n - Convincing experimental results. \n \nCons:\n - The model is complicated.\n - Mainly an architecture engineering/application paper(bringing together various well-known techniques), not much novelty.\n - The proposed model is potentially slower than the regular models since it needs to operate over the characters instead of the words and uses several RNNs.\n\n[1] Serban IV, Sordoni A, Bengio Y, Courville A, Pineau J. Hierarchical neural network generative models for movie dialogues. arXiv preprint arXiv:1507.04808. 2015 Jul 17.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Deep Character-Level Neural Machine Translation By Learning Morphology
["Shenjian Zhao", "Zhihua Zhang"]
Neural machine translation aims at building a single large neural network that can be trained to maximize translation performance. The encoder-decoder architecture with an attention mechanism achieves a translation performance comparable to the existing state-of-the-art phrase-based systems. However, the use of large vocabulary becomes the bottleneck in both training and improving the performance. In this paper, we propose a novel architecture which learns morphology by using two recurrent networks and a hierarchical decoder which translates at character level. This gives rise to a deep character-level model consisting of six recurrent networks. Such a deep model has two major advantages. It avoids the large vocabulary issue radically; at the same time, it is more efficient in training than word-based models. Our model obtains a higher BLEU score than the bpe-based model after training for one epoch on En-Fr and En-Cs translation tasks. Further analyses show that our model is able to learn morphology.
["Natural language processing", "Deep learning"]
https://openreview.net/forum?id=rJq_YBqxx
https://openreview.net/pdf?id=rJq_YBqxx
https://openreview.net/forum?id=rJq_YBqxx&noteId=H1aZfRINx
Under review as a conference paper at ICLR 2017DEEPCHARACTER -LEVEL NEURAL MACHINETRANSLATION BYLEARNING MORPHOLOGYShenjian ZhaoDepartment of Computer Science and EngineeringShanghai Jiao Tong UniversityShanghai 200240, Chinasword.york@gmail.comZhihua ZhangSchool of Mathematical SciencesPeking UniversityBeijing 100871, Chinazhzhang@math.pku.edu.cnABSTRACTNeural machine translation aims at building a single large neural network that canbe trained to maximize translation performance. The encoder-decoder architecturewith an attention mechanism achieves a translation performance comparable to theexisting state-of-the-art phrase-based systems. However, the use of large vocabularybecomes the bottleneck in both training and improving the performance. In thispaper, we propose a novel architecture which learns morphology by using tworecurrent networks and a hierarchical decoder which translates at character level.This gives rise to a deep character-level model consisting of six recurrent networks.Such a deep model has two major advantages. It avoids the large vocabulary issueradically; at the same time, it is more efficient in training than word-based models.Our model obtains a higher BLEU score than the bpe-based model after trainingfor one epoch on En-Fr and En-Cs translation tasks. Further analyses show thatour model is able to learn morphology.1 I NTRODUCTIONNeural machine translation (NMT) attempts to build a single large neural network that reads asentence and outputs a translation (Sutskever et al., 2014). Most of the extant neural machinetranslations models belong to a family of word-level encoder-decoders (Sutskever et al., 2014; Choet al., 2014). Recently, Bahdanau et al. (2015) proposed a model with attention mechanism whichautomatically searches the alignments and greatly improves the performance. However, the use of alarge vocabulary seems necessary for the word-level neural machine translation models to improveperformance (Sutskever et al., 2014; Cho et al., 2015).Chung et al. (2016a) listed three reasons behind the wide adoption of word-level modeling: (i) wordis a basic unit of a language, (ii) data sparsity, (iii) vanishing gradient of character-level modeling.Consider that a language itself is an evolving system. So it is impossible to cover all words in thelanguage. The problem of rare words that are out of vocabulary (OOV) is a critical issue which caneffect the performance of neural machine translation. In particular, using larger vocabulary doesimprove performance (Sutskever et al., 2014; Cho et al., 2015). However, the training becomesmuch harder and the vocabulary is often filled with many similar words that share a lexeme but havedifferent morphology.There are many approaches to dealing with the out-of-vocabulary issue. For example, Gulcehreet al. (2016); Luong et al. (2015); Cho et al. (2015) proposed to obtain the alignment information oftarget unknown words, after which simple word dictionary lookup or identity copy can be performedto replace the unknown words in translation. However, these approaches ignore several importantproperties of languages such as monolinguality and crosslinguality as pointed out by Luong and1Under review as a conference paper at ICLR 2017Manning (2016). Thus, Luong and Manning (2016) proposed a hybrid neural machine translationmodel which leverages the power of both words and characters to achieve the goal of open vocabularyneural machine translation.Intuitively, it is elegant to directly model pure characters. 
However, as the length of sequencegrows significantly, character-level translation models have failed to produce competitive resultscompared with word-based models. In addition, they require more memory and computation resource.Especially, it is much difficult to train the attention component. For example, Ling et al. (2015a)proposed a compositional character to word (C2W) model and applied it to machine translation (Linget al., 2015b). They also used a hierarchical decoder which has been explored before in other context(Serban et al., 2015). However, they found it slow and difficult to train the character-level models, andone has to resort to layer-wise training the neural network and applying supervision for the attentioncomponent. In fact, such RNNs often struggle with separating words that have similar morphologiesbut very different meanings.In order to address the issues mentioned earlier, we introduce a novel architecture by exploiting thestructure of words. It is built on two recurrent neural networks: one for learning the representationof preceding characters and another for learning the weight of this representation of the wholeword. Unlike subword-level model based on the byte pair encoding (BPE) algorithm (Sennrich et al.,2016), we learn the subword unit automatically. Compared with CNN word encoder (Kim et al.,2016; Lee et al., 2016), our model is able to generate a meaningful representation of the word. Todecode at character level, we devise a hierarchical decoder which sets the state of the second-levelRNN (character-level decoder) to the output of the first-level RNN (word-level decoder), which willgenerate a character sequence until generating a delimiter. In this way, our model almost keeps thesame encoding length for encoder as word-based models but eliminates the use of a large vocabulary.Furthermore, we are able to efficiently train the deep model which consists of six recurrent networks,achieving higher performance.In summary, we propose a hierarchical architecture (character -> subword -> word -> source sentence-> target word -> target character) to train a deep character-level neural machine translator. We showthat the model achieves a high translation performance which is comparable to the state-of-the-artneural machine translation model on the task of En-Fr, En-Cs and Cs-En translation. The experimentsand analyses further support the statement that our model is able to learn the morphology.2 N EURAL MACHINE TRANSLATIONNeural machine translation is often implemented as an encoder-decoder architecture. The encoderusually uses a recurrent neural network (RNN) or a bidirectional recurrent neural network (BiRNN)(Schuster and Paliwal, 1997) to encode the input sentence x=fx1;:::;x Txginto a sequence ofhidden states h=fh1;:::;hTxg:ht=f1(e(xt);ht1);where e(xt)2Rmis anm-dimensional embedding of xt. The decoder, another RNN, is oftentrained to predict next word ytgiven previous predicted words fy1;:::;y t1gand the context vectorct; that is,p(ytjfy1;:::;y t1g) =g(e(yt1);st;ct);wherest=f2(e(yt1);st1;ct) (1)andgis a nonlinear and potentially multi-layered function that computes the probability of yt. Thecontext ctdepends on the sequence of fh1;:::;hTxg. Sutskever et al. (2014) encoded all informationin the source sentence into a fixed-length vector, i.e., ct=hTx. Bahdanau et al. 
(2015) computed ctby the alignment model which handles the bottleneck that the former approach meets.The whole model is jointly trained by maximizing the conditional log-probability of the correcttranslation given a source sentence with respect to the parameters of the model := argmaxTyXt=1logp(ytjfy1;:::;y t1g;x;):For the detailed description of the implementation, we refer the reader to the papers (Sutskever et al.,2014; Bahdanau et al., 2015).2Under review as a conference paper at ICLR 20173 D EEPCHARACTER -LEVEL NEURAL MACHINE TRANSLATIONWe consider two problems in the word-level neural machine translation models. First, how canwe map a word to a vector? It is usually done by a lookup table (embedding matrix) where thesize of vocabulary is limited. Second, how do we map a vector to a word when predicting? It isusually done via a softmax function. However, the large vocabulary will make the softmax intractablecomputationally.We correspondingly devise two novel architectures, a word encoder which utilizes the morphologyand a hierarchical decoder which decodes at character level. Accordingly, we propose a deepcharacter-level neural machine translation model (DCNMT).3.1 L EARNING MORPHOLOGY IN A WORD ENCODERMany words can be subdivided into smaller meaningful units called morphemes, such as “any-one”,“any-thing” and “every-one.” At the basic level, words are made of morphemes which are recognizedas grammatically significant or meaningful. Different combinations of morphemes lead to differentmeanings. Based on these facts, we introduce a word encoder to learn the morphemes and the rulesof how they are combined. Even if the word encoder had never seen “everything” before, with aunderstanding of English morphology, the word encoder could gather the meaning easily. Thuslearning morphology in a word encoder might speedup training.Figure 1: The representation of theword ’anyone.’The word encoder is based on two recurrent neural networks,as illustrated in Figure 1. We compute the representation of theword ‘anyone’ asranyone = tanh(6Xt=1wtrt);where rtis an RNN hidden state at time t, computed byrt=f(e(xt);rt1):Eachrtcontains information about the preceding characters.The weightwtof each representation rtis computed bywt= exp( aff(ht));where htis another RNN hidden state at time tandaff()isan affine function which maps htto a scalar. Here, we use aBiRNN to compute htas shown in Figure 1. Instead of nor-malizing it byPtexp( aff(ht)), we use an activation functiontanh as it performs best in experiments.We can regard the weight wias the energy that determines whether riis a representation of amorpheme and how it contributes to the representation of the word. Compared with an embeddinglookup table, the decoupled RNNs learn the representation of morphemes and the rules of how theyare combined respectively, which may be viewed as learning distributed representations of wordsexplicitly. For example, we are able to translate “convenienter” correctly which validates our idea.After obtaining the representation of the word, we could encode the sentence using a bidirectionalRNN as RNNsearch (Bahdanau et al., 2015). The detailed architecture is shown in Figure 2.3.2 H IERARCHICAL DECODERTo decode at the character level, we introduce a hierarchical decoder. The first-level decoder is similarto RNNsearch which contains the information of the target word. Specifically, stin Eqn. (1)containsthe information of target word at time t. 
Instead of using a multi-layer network following a softmaxfunction to compute the probability of each target word using st, we employ a second-level decoderwhich generates a character sequence based on st.We proposed a variant of the gate recurrent unit (GRU) (Cho et al., 2014; Chung et al., 2014) that usedin the second-level decoder and we denote it as HGRU (It is possible to use the LSTM (Hochreiter3Under review as a conference paper at ICLR 2017and Schmidhuber, 1997) units instead of the GRU described here). HGRU has a settable state andgenerates character sequence based on the given state until generating a delimiter. In our model, thestate is initialized by the output of the first-level decoder. Once HGRU generates a delimiter, it willset the state to the next output of the first-level decoder. Given the previous output character sequencefy0;y1;:::;y t1gwherey0is a token representing the start of sentence, and the auxiliary sequencefa0;a1;:::;a t1gwhich only contains 0 and 1 to indicate whether yiis a delimiter ( a0is set to 1),HGRU updates the state as follows:gt1= (1at1)gt1+at1sit; (2)qjt=([Wqe(yt1)]j+ [Uqgt1]j); (3)zjt=([Wze(yt1)]j+ [Uzgt1]j); (4)~gjt=([We(yt1)]j+ [U(qtgt1)]j); (5)gjt=zjtgjt1+ (1zjt)~gjt; (6)where sitis the output of the first-level decoder which calculated as Eqn. (8). We can compute theprobability of each target character ytbased on gtwith a softmax function:p(ytjfy1;:::;y t1g;x) =softmax (gt): (7)The current problem is that the number of outputs of the first-level decoder is much fewer than thetarget character sequence. It will be intractable to conditionally pick outputs from the the first-leveldecoder when training in batch manner (at least intractable for Theano (Bastien et al., 2012) andother symbolic deep learning frameworks to build symbolic expressions). Luong and Manning (2016)uses two forward passes (one for word-level and another for character-level) in batch training whichis less efficient. However, in our model, we use a matrix to unfold the outputs of the first-leveldecoder, which makes the batch training process more efficient. It is a TyTmatrix R, whereTyisthe number of delimiter (number of words) in the target character sequence and Tis the length ofthe target character sequence. R[i;j1+ 1] toR[i;j2]are set as 1 if j1is the index of the (i1)-thdelimiter and j2is the index of the i-th delimiter in the target character sequence. The index of the0-th delimiter is set as 0. For example, when the target output is “ go!” and the output of thefirst-level decoder is [s1;s2], the unfolding step will be:[s1;s2]1 1 1 0 00 0 0 1 1= [s1;s1;s1;s2;s2];thereforefsi1;si2;si3;si4;si5gis correspondingly set to fs1;s1;s1;s2;s2gin HGRU iterations.After this procedure, we can compute the probability of each target character by the second-leveldecoder according to Eqns. (2) to (7).3.3 M ODEL ARCHITECTURESThere are totally six recurrent neural networks in our model, which can be divided into four layers asshown in Figure 2. Figure 2 illustrates the training procedure of a basic deep character-level neuralmachine translation. It is possible to use multi-layer recurrent neural networks to make the modeldeeper. The first layer is a source word encoder which contains two RNNs as shown in Figure 1. Thesecond layer is a bidirectional RNN sentence encoder which is identical to that of (Bahdanau et al.,2015). The third layer is the first-level decoder. It takes the representation of previous target wordas a feedback, which is produced by the target word encoder in our model. 
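The HGRU update of Eqns. (2)-(6) above is an ordinary GRU step preceded by a conditional state reset: whenever the previously emitted character was a delimiter (a_{t-1} = 1), the state is overwritten with the corresponding word-level output. A NumPy sketch follows; the tanh nonlinearity in the candidate state and all sizes are assumptions, and the parameters are random rather than trained.

```python
import numpy as np

rng = np.random.RandomState(3)
d, m = 8, 5                                     # state and character-embedding sizes (toy)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make(*shape):
    return 0.1 * rng.randn(*shape)

Wq, Uq = make(d, m), make(d, d)                 # reset gate, Eqn. (3)
Wz, Uz = make(d, m), make(d, d)                 # update gate, Eqn. (4)
W,  U  = make(d, m), make(d, d)                 # candidate state, Eqn. (5)

def hgru_step(g_prev, e_y_prev, a_prev, s_word):
    """One HGRU update (Eqns. 2-6). If the previously generated character was a
    delimiter (a_prev = 1), the state is first overwritten with the word-level
    output s_word before the usual gated update."""
    g_prev = (1 - a_prev) * g_prev + a_prev * s_word        # Eqn. (2)
    q = sigmoid(Wq @ e_y_prev + Uq @ g_prev)                # Eqn. (3)
    z = sigmoid(Wz @ e_y_prev + Uz @ g_prev)                # Eqn. (4)
    g_tilde = np.tanh(W @ e_y_prev + U @ (q * g_prev))      # Eqn. (5), tanh assumed
    return z * g_prev + (1 - z) * g_tilde                   # Eqn. (6)

g = hgru_step(np.zeros(d), make(m), a_prev=1, s_word=make(d))  # start of a new word
g = hgru_step(g, make(m), a_prev=0, s_word=np.zeros(d))        # inside the same word
print(g.shape)                                                  # (8,)
```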
As the feedback is lessimportant, we use an ordinary RNN to encode the target word. The feedback rYt1then combines theprevious hidden state ut1and the context ctfrom the sentence encoder to generate the vector st:st=W1ct+W2rYt1+W3ut1+b: (8)With the state of HGRU in the second-level decoder setting to stand the information of previousgenerated character, the second-level decoder generates the next character until generating an end ofsentence token (denoted as </s> in Figure 2). With such a hierarchical architecture, we can train ourcharacter-level neural translation model perfectly well in an end-to-end fashion.4Under review as a conference paper at ICLR 2017Figure 2: Deep character-level neural machine translation. The HGRUs with red border indicate thatthe state should be set to the output of the first-level decoder.3.4 G ENERATION PROCEDUREWe first encode the source sequence as in the training procedure, then we generate the target sequencecharacter by character based on the output stof the first-level decoder. Once we generate a delimiter,we should compute next vector st+1according to Eqn. (8)by combining feedback rYtfrom the targetword encoder, the context ct+1from the sentence encoder and the hidden state ut. The generationprocedure will terminate once an end of sentence (EOS) token is produced.4 E XPERIMENTSWe implement the model using Theano (Bergstra et al., 2010; Bastien et al., 2012) and Blocks (vanMerriënboer et al., 2015), the source code and the trained models are available at github1. We trainour model on a single GTX Titan X with 12GB RAM. First we evaluate our model on English-to-French translation task where the languages are morphologically poor. For fair comparison, weuse the same dataset as in RNNsearch which is the bilingual, parallel corpora provided by ACLWMT’14. In order to show the strengths of our model, we conduct on the English-to-Czech andCzech-to-English translation tasks where Czech is a morphologically rich language. We use the samedataset as (Chung et al., 2016a; Lee et al., 2016) which is provided by ACL WMT’152.4.1 D ATASETWe use the parallel corpora for two language pairs from WMT: En-Cs and En-Fr. They consist of15.8M and 12.1M sentence pairs, respectively. In terms of preprocessing, we only apply the usualtokenization. We choose a list of 120 most frequent characters for each language which coveres nearly100% of the training data. Those characters not included in the list are mapped to a special token1https://github.com/SwordYork/DCNMT2http://www.statmt.org/wmt15/translation-task.html5Under review as a conference paper at ICLR 2017(<unk>). We use newstest2013 (Dev) as the development set and evaluate the models on newstest2015(Test). We do not use any monolingual corpus.4.2 T RAINING DETAILSWe follow (Bahdanau et al., 2015) to use similar hyperparameters. The bidirectional RNN sentenceencoder and the hierarchical decoder both consists of two-layer RNNs, each has 1024 hidden units;We choose 120 most frequent characters for DCNMT and the character embedding dimensionality is64. The source word is encoded into a 600-dimensional vector. The other GRUs in our model have512 hidden units.We use the ADAM optimizer (Kingma and Ba, 2015) with minibatch of 56 sentences to train eachmodel (for En-Fr we use a minibatch of 72 examples). 
The learning rate is first set to 103and thenannealed to 104.We use a beam search to find a translation that approximately maximizes the conditional log-probability which is a commonly used approach in neural machine translation (Sutskever et al., 2014;Bahdanau et al., 2015). In our DCNMT model, it is reasonable to search directly on character level togenerate a translation.5 R ESULT AND ANALYSISWe conduct comparison of quantitative results on the En-Fr, En-Cs and Cs-En translation tasks inSection 5.1. Apart from measuring translation quality, we analyze the efficiency of our model andeffects of character-level modeling in more details.5.1 Q UANTITATIVE RESULTSWe illustrate the efficiency of the deep character-level neural machine translation by comparing withthe bpe-based subword model (Sennrich et al., 2016) and other character-level models. We measurethe performance by BLEU score (Papineni et al., 2002).Table 1: BLEU scores of different models on three language pairs.Model Size Src Trgt Length Epochs Days Dev TestEn-Frbpe2bpe(1)- bpe bpe 50 50 - - 26.91 29.70C2W(2)54M char char 300 3002:827 25.89 27.04CNMT52M char char 300 3003:821 28.19 29.38DCNMT54M char char 300 3001727.02 28.132:819 29.31 30.56En-Csbpe2bpe(1)- bpe bpe 50 50 - - 15.90 13.84bpe2char(3)- bpe char 50 500 - - - 16.86char(5)- char char 600 600 >490 - 17.5hybrid(5)250M hybrid hybrid 50 50 >421 - 19.6DCNMT54M char char 450 4501515.50 14.872:915 17.89 16.96Cs-Enbpe2bpe(1)- bpe bpe 50 50 - - 21.24 20.32bpe2char(3)76M bpe char 50 5006:114 23.27 22.42char2char(4)69M char char 450 4507:930 23.38 22.46DCNMT54M char char 450 4501520.50 19.754:622 23.24 22.48In Table 1, “Length” indicates the maximum sentence length in training (based on the number ofwords or characters), “Size” is the total number of parameters in the models. We report the BLEU6Under review as a conference paper at ICLR 2017scores of DCNMT when trained after one epoch in the above line and the final scores in the followingline. The results of other models are taken from (1)Firat et al. (2016), (3)Chung et al. (2016a), (4)Leeet al. (2016) and (5)Luong and Manning (2016) respectively, except (2) is trained according to Linget al. (2015b). The only difference between CNMT and DCNMT is CNMT uses an ordinary RNNto encode source words (takes the last hidden state). The training time for (3) and (4) is calculatedbased on the training speed in (Lee et al., 2016). For each test set, the best scores among the modelsper language pair are bold-faced. Obviously, character-level models are better than the subword-levelmodels, and our model is comparable to the start-of-the-art character-level models. Note that, thepurely character model of (5)(Luong and Manning, 2016) took 3 months to train and yielded +0:5BLEU points compared to our result. 
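Since decoding searches directly at the character level, a plain beam search over characters suffices and no word-level vocabulary is involved. The sketch below is a generic character-level beam search driven by a toy scoring function, not the released implementation; in practice step_logprobs would wrap the hierarchical decoder and the source encoding.

```python
import math

def char_beam_search(step_logprobs, beam_size=5, max_len=100, eos="</s>"):
    """Generic character-level beam search. `step_logprobs(prefix)` must return a
    dict {char: log p(char | prefix, source)}."""
    beams = [([], 0.0)]                          # (characters so far, cumulative log prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            for ch, lp in step_logprobs(prefix).items():
                if ch == eos:
                    finished.append((prefix, score + lp))
                else:
                    candidates.append((prefix + [ch], score + lp))
        if not candidates:
            break
        beams = sorted(candidates, key=lambda x: x[1], reverse=True)[:beam_size]
    finished.extend(beams)                       # fall back to unfinished hypotheses
    best = max(finished, key=lambda x: x[1])
    return "".join(best[0])

def toy_scorer(prefix):
    """Prefers to spell "go !" and then stop; stands in for the trained decoder."""
    target = list("go !")
    if len(prefix) < len(target):
        return {target[len(prefix)]: math.log(0.9), "x": math.log(0.1)}
    return {"</s>": math.log(0.95), "x": math.log(0.05)}

print(char_beam_search(toy_scorer, beam_size=3))   # -> "go !"
```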
We have analyzed the efficiency of our decoder in Section 3.2. Besides, our model is the simplest and the smallest one in terms of model size.

5.2 LEARNING MORPHOLOGY

Figure 3: Two-dimensional PCA projection of the 600-dimensional representations of the words (notable/notability, solvable/solvability, reliable/reliability, capable/capability, flexible/flexibility, possible/possibility); panel (a) shows the ordinary RNN word encoder and panel (b) shows our word encoder.

In this section, we investigate whether our model can learn morphology. First we want to figure out the difference between an ordinary RNN word encoder and our word encoder. We choose some words with similar meanings but different morphology, as shown in Figure 3. We find in Figure 3(a) that the words ending with “ability”, which are encoded by the ordinary RNN word encoder, are jammed together. In contrast, the representations produced by our encoder are more reasonable, and the words with similar meanings are closer.

Figure 4: The learnt morphemes for the “any*” and “every*” words (anybody, anyway, anyone, anything, anywhere, everybody, everyway, everyone, everything, everywhere); panel (a) shows the energy of each character and panel (b) shows a two-dimensional PCA projection.

Then we analyze how our word encoder learns morphemes and the rules by which they are combined. We demonstrate the encoding details on “any*” and “every*”. Figure 4(a) shows the energy of each character, more precisely, the energy of the preceding characters. We can see that the last character of a morpheme yields a relatively large energy (weight), like “any” and “every” in these words. Moreover, even when the preceding characters are different, the encoder produces a similar weight for the same morpheme, like “way” in “anyway” and “everyway”. The two-dimensional PCA projection in Figure 4(b) further validates our idea. The word encoder may be able to guess the meaning of “everything” even if it had never seen “everything” before, thus speeding up learning. More interestingly, we find that not only the ending letter has high energy, but the beginning letter is also important. This matches the behavior of human perception (White et al., 2008).

Figure 5: Subword-level boundaries detected by our word encoder, shown as energy peaks over the characters of three Penn Treebank sentences (e.g. “consumers may want to move their telephones a little closer to the tv set ...”).

Moreover, we apply our trained word encoder to Penn Treebank Line 1. Unlike Chung et al. (2016b), we are able to detect the boundaries of the subword units. As shown in Figure 5, “consumers”, “monday”, “football” and “greatest” are segmented into “consum-er-s”, “mon-day”, “foot-ball” and “great-est” respectively. Since there are no explicit delimiters, it may be more difficult to detect the subword units.

5.3 BENEFITING FROM LEARNING MORPHOLOGY

As analyzed in Section 5.2, learning morphology could speed up learning.
This has also been shownin Table 1 (En-Fr and En-Cs task) from which we see that when we train our model just for oneepoch, the obtained result even outperforms the final result with bpe baseline.Another advantage of our model is the ability to translate the misspelled words or the nonce words.The character-level model has a much better chance recovering the original word or sentence. InTable 2, we list some examples where the source sentences are taken from newstest2013 but wechange some words to misspelled words or nonce words. We also list the translations from Googletranslate3and online demo of neural machine translation by LISA.Table 2: Sample translations.(a) Misspelled wordsSource For the time being howeve their research is unconclusive .Reference Leurs recherches ne sont toutefois pas concluantes pour l’instant.Google translate Pour le moment, leurs recherches ne sont pas concluantes .LISA Pour le moment UNK leur recherche est UNK .DCNMT Pour le moment, cependant , leur recherche n’est pas concluante .(b) Nonce words (morphological change)Source Then we will be able to supplement the real world with virtual objects ina much convenienter form .Reference Ainsi , nous pourrons compléter le monde réel par des objets virtuelsdans une forme plus pratique .Google translate Ensuite, nous serons en mesure de compléter le monde réel avec desobjets virtuels dans une forme beaucoup plus pratique .LISA Ensuite, nous serons en mesure de compléter le vrai monde avec desobjets virtuels sous une forme bien UNK .DCNMT Ensuite, nous serons en mesure de compléter le monde réel avec desobjets virtuels dans une forme beaucoup plus pratique .As listed in Table 2(a), DCNMT is able to translate out the misspelled words correctly. For aword-based translator, it is never possible because the misspelled words are mapped into <unk>3The translations by Google translate were made on Nov 4, 2016.8Under review as a conference paper at ICLR 2017token before translating. Thus, it will produce an <unk> token or just take the word from sourcesentence (Gulcehre et al., 2016; Luong et al., 2015). More interestingly, DCNMT could translate“convenienter” correctly as shown in Table 2(b). By concatenating “convenient” and “er”, we get thecomparative adjective form of “convenient” which never appears in the training set; however, ourmodel guessed it correctly based on the morphemes and the rules.6 C ONCLUSIONIn this paper we have proposed an hierarchical architecture to train the deep character-level neuralmachine translation model by introducing a novel word encoder and a multi-leveled decoder. We havedemonstrated the efficiency of the training process and the effectiveness of the model in comparisonwith the word-level and other character-level models. The BLEU score implies that our deep character-level neural machine translation model likely outperforms the word-level models and is competitivewith the state-of-the-art character-based models. It is possible to further improve performance byusing deeper recurrent networks (Wu et al., 2016), training for more epochs and training with longersentence pairs.As a result of the character-level modeling, we have solved the out-of-vocabulary (OOV) issue thatword-level models suffer from, and we have obtained a new functionality to translate the misspelled orthe nonce words. More importantly, the deep character-level is able to learn the similar embedding ofthe words with similar meanings like the word-level models. 
Finally, it would be potentially possiblethat the idea behind our approach could be applied to many other tasks such as speech recognitionand text summarization.REFERENCESIlya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks.InAdvances in Neural Information Processing Systems , pages 3104–3112, 2014.Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, HolgerSchwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder forstatistical machine translation. Proceedings of the 2014 Conference on Empirical Methods inNatural Language Processing , 2014.Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointlylearning to align and translate. International Conference on Learning Representation , 2015.Sébastien Jean Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On using very largetarget vocabulary for neural machine translation. Proceedings of the 53rd Annual Meeting of theAssociation for Computational Linguistics , 2015.Junyoung Chung, Kyunghyun Cho, and Yoshua Bengio. A character-level decoder without explicitsegmentation for neural machine translation. Proceedings of the 54th Annual Meeting of theAssociation for Computational Linguistics , 2016a.Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. Pointing theunknown words. Proceedings of the 54th Annual Meeting of the Association for ComputationalLinguistics , 2016.Minh-Thang Luong, Ilya Sutskever, Quoc V Le, Oriol Vinyals, and Wojciech Zaremba. Addressingthe rare word problem in neural machine translation. Proceedings of the 53rd Annual Meeting ofthe Association for Computational Linguistics , 2015.Minh-Thang Luong and Christopher D Manning. Achieving open vocabulary neural machinetranslation with hybrid word-character models. Proceedings of the 54th Annual Meeting of theAssociation for Computational Linguistics , 2016.Wang Ling, Tiago Luís, Luís Marujo, Ramón Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan WBlack, and Isabel Trancoso. Finding function in form: Compositional character models for openvocabulary word representation. Empirical Methods in Natural Language Processing , 2015a.9Under review as a conference paper at ICLR 2017Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W Black. Character-based neural machinetranslation. arXiv preprint arXiv:1511.04586 , 2015b.Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. Hierar-chical neural network generative models for movie dialogues. arXiv preprint arXiv:1507.04808 ,2015.Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words withsubword units. Proceedings of the 54th Annual Meeting of the Association for ComputationalLinguistics , 2016.Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. Character-aware neural languagemodels. Association for the Advancement of Artificial Intelligence , 2016.Jason Lee, Kyunghyun Cho, and Thomas Hofmann. Fully character-level neural machine translationwithout explicit segmentation. arXiv preprint arXiv:1610.03017 , 2016.Mike Schuster and Kuldip K Paliwal. Bidirectional recurrent neural networks. Signal Processing,IEEE Transactions on , 45(11):2673–2681, 1997.Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation ofgated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 , 2014.Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. 
Neural computation , 9(8):1735–1780, 1997.Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, ArnaudBergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements.Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, GuillaumeDesjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPUmath expression compiler. In Proceedings of the Python for Scientific Computing Conference(SciPy) , June 2010. Oral Presentation.Bart van Merriënboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley,Jan Chorowski, and Yoshua Bengio. Blocks and fuel: Frameworks for deep learning. arXivpreprint arXiv:1506.00619 , 2015.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. InternationalConference on Learning Representation , 2015.Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automaticevaluation of machine translation. pages 311–318. Association for Computational Linguistics,2002.Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. Multi-way, multilingual neural machine translationwith a shared attention mechanism. In Proceedings of the 2016 Conference of the North AmericanChapter of the Association for Computational Linguistics: Human Language Technologies. , 2016.Sarah J White, Rebecca L Johnson, Simon P Liversedge, and Keith Rayner. Eye movements whenreading transposed text: the importance of word-beginning letters. Journal of ExperimentalPsychology: Human Perception and Performance , 34(5):1261, 2008.Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks.arXiv preprint arXiv:1609.01704 , 2016b.Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey,Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google’s neural machine translation sys-tem: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 ,2016.10Under review as a conference paper at ICLR 2017A D ETAILED DESCRIPTION OF THE MODELHere we describe the implementation using Theano, it should be applicable to other symbolic deeplearning frameworks. We use fto denote the transition of the recurrent network.A.1 S OURCE WORD ENCODERAs illustrated in Section 3.1, the word encoder is based on two recurrent neural networks. We computethe representation of the word ‘anyone’ asranyone = tanh(6Xt=1wtrt);where rt2Rnis an RNN hidden state at time t, computed byrt=f(e(xt);rt1):Eachrtcontains information about the preceding characters. The weight wtof each representationrtis computed bywt= exp( Wwht+bw);where Ww2R12lmaps the vector ht2R2lto a scalar and htis the state of the BiRNN at time t:ht=" !ht ht#: (9) !ht2Rlis the forward state of the BiRNN which is computed by !ht=f(e(xt); !ht1): (10)The backward state ht2Rlis computed similarly, however in a reverse order.A.2 S OURCE SENTENCE ENCODERAfter encoding the words by the source word encoder, we feed the representations to thesource sentence encoder. For example, the source “Hello world </s>” is encoded into a vector[rHello;rworld;r</s>], then the BiRNN sentence encoder encodes this vector into [v1;v2;v3]. The com-putation is the same as Eqn. (9)and Eqn. (10), however the input now changes to the representationof the words.A.3 F IRST-LEVEL DECODERThe first-level decoder is similar to Bahdanau et al. 
(2015) which utilizes the attention mechanism.Given the context vector ctfrom encoder, the hidden state ut2Rmof the GRU is computed byut= (1zt)ut1+zt~ut;where~ut=tanh(WrYt1+U[qtut1] +Cct)zt=(WzrYt1+Uzut1+Czct)qt=(WqrYt1+Uqut1+Cqct):rYt1is the representation of the target word which is produced by an ordinary RNN (take the laststate). The context vector ctis computed by the attention mechanism at each step:ct=TxXj=1tjvj;11Under review as a conference paper at ICLR 2017wheretj=exp(etj)PTxk=1exp(etk)etj=Etanh(Weut1+Hehj):E2R1mwhich maps the vector into a scalar. Then the hidden state utis further processed asEqn. (8) before feeding to the second-level decoder:st+1=W1ct+1+W2rYt+W3ut+b:A.4 S ECOND -LEVEL DECODERAs described in Section 3.2, the number of outputs of the first-level decoder is much fewer than thetarget character sequence. It will be intractable to conditionally pick outputs from the the first-leveldecoder when training in batch manner (at least intractable for Theano (Bastien et al., 2012) and othersymbolic deep learning frameworks to build symbolic expressions). We use a matrix R2RTyTto unfold the outputs [s1;:::;sTy]of the first-level decoder ( Tyis the number of words in the targetsentence and Tis the number of characters). Ris a symbolic matrix in the final loss, it is constructedaccording the delimiters in the target sentences when training (see Section 3.2 for the detailedconstruction, note that Ris a tensor in batch training. ). After unfolding, the input of HGRU becomes[si1;:::;siT], that is[si1;:::;siT] = [s1;:::;sTy]R:According to Eqns.(2) to (7), we can compute the probability of each target character :p(ytjfy1;:::;y t1g;x) =softmax (gt):Finally, we could compute the cross-entroy loss and train with SGD algorithm.B S AMPLE TRANSLATIONSWe show additional sample translations in the following Tables.Table 3: Sample translations of En-Fr.Source This " disturbance " produces an electromagnetic wave ( of light , infrared, ultraviolet etc . 
) , and this wave is nothing other than a photon - andthus one of the " force carrier " bosons .Reference Quand , en effet , une particule ayant une charge électrique accélère ouchange de direction , cela " dérange " le champ électromagnétique en cetendroit précis , un peu comme un caillou lancé dans un étang .DCNMT Lorsque , en fait , une particule ayant une charge électrique accélère ouchange de direction , cela " perturbe " le champ électromagnétique danscet endroit spécifique , plutôt comme un galet jeté dans un étang .Source Since October , a manifesto , signed by palliative care luminaries includ-ing Dr Balfour Mount and Dr Bernard Lapointe , has been circulating todemonstrate their opposition to such an initiative .Reference Depuis le mois d’ octobre , un manifeste , signé de sommités des soinspalliatifs dont le Dr Balfour Mount et le Dr Bernard Lapointe , circulepour témoigner de leur opposition à une telle initiative .DCNMT Depuis octobre , un manifeste , signé par des liminaires de soins palliatifs, dont le Dr Balfour Mount et le Dr Bernard Lapointe , a circulé pourdémontrer leur opposition à une telle initiative .12Under review as a conference paper at ICLR 2017Table 4: Sample translations of En-Cs.Source French troops have left their area of responsibility in Afghanistan (Kapisa and Surobi ) .Reference Francouzské jednotky opustily svou oblast odpov ˇednosti v Afghánistánu( Kapisa a Surobi ) .DCNMT Francouzské jednotky opustily svou oblast odpov ˇednosti v Afghánistánu( Kapisa a Surois ) .Source " All the guests were made to feel important and loved " recalls the topmodel , who started working with him during Haute Couture Week Paris, in 1995 .Reference Všichni pozvaní se díky n ˇemu mohli cítit d ̊ uležití a milovaní , " vzpomínátop modelka , která s ním za ˇcala pracovat v pr ̊ ub ˇehu Pa ˇrížského týdnevrcholné módy v roce 1995 .DCNMT " Všichni hosté byli provedeni , aby se cítili d ̊ uležití a milovaní "pˇripomíná nejvyšší model , který s ním za ˇcal pracovat v pr ̊ ub ˇehu tý-deníku Haute Coutupe v Pa ˇríži v roce 1995 .Source " There are so many private weapons factories now , which do not endurecompetition on the international market and throw weapons from underthe counter to the black market , including in Moscow , " says the expert.Reference " V sou ˇcasnosti vznikají soukromé zbroja ˇrské podniky , které nejsoukonkurenceschopné na mezinárodním trhu , a vy ˇrazují zbran ˇe , kterédodávají na ˇcerný trh v ˇcetnˇe Moskvy , " ˇríká tento odborník .DCNMT " V sou ˇcasnosti existuje tolik soukromých zbraní , které nevydržíhospodá ˇrskou sout ˇež na mezinárodním trhu a hodí zbran ˇe pod pultem kˇcernému trhu , v ˇcetnˇe Moskvy , " ˇríká odborník .Table 5: Sample translations of Cs-En.Source Prezident Karzáí nechce zahrani ˇcní kontroly , zejména ne p ˇri pˇríležitostivoleb plánovaných na duben 2014 .Reference President Karzai does not want any foreign controls , particularly on theoccasion of the elections in April 2014 .DCNMT President Karzai does not want foreign controls , particularly in theopportunity of elections planned on April 2014 .Source Manželský pár m ˇel dv ˇe dˇeti , Prestona a Heidi , a dlouhou dobu žili vkalifornském m ˇestˇe Malibu , kde pobývá mnoho celebrit .Reference The couple had two sons , Preston and Heidi , and lived for a long timein the Californian city Malibu , home to many celebrities .DCNMT The married couple had two children , Preston and Heidi , and long livedin the California city of Malibu , where many celebrities resided .Source Trestný 
čin rouhání je zachován a urážka je nadále zakázána , což by mohlo mít vážné důsledky pro svobodu vyjadřování , zejména pak pro tisk .
Reference The offence of blasphemy is maintained and insults are now prohibited , which could have serious consequences on freedom of expression , particularly for the press .
DCNMT The criminal action of blasphemy is maintained and insult is still prohibited , which could have serious consequences for freedom of expression , especially for the press .
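As a complement to the word-encoder equations of Appendix A.1 above, the following sketch evaluates r_word = tanh(sum_t w_t * r_t) with w_t = exp(W_w h_t + b_w), where r_t comes from a forward character RNN and h_t from a character BiRNN. It is only an illustration under stated assumptions: a plain tanh RNN stands in for the GRU transition f, the dimensions and weights are random toy values, and the character ids are arbitrary stand-ins for the letters of "anyone".

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn(inputs, W_x, W_h, h0):
    """Plain tanh RNN standing in for the GRU transition f of Appendix A."""
    h, states = h0, []
    for x in inputs:
        h = np.tanh(W_x @ x + W_h @ h)
        states.append(h)
    return states

def encode_word(char_ids, emb, params):
    """Weighted-sum word encoder of Appendix A.1:
    r_word = tanh(sum_t w_t * r_t), with w_t = exp(W_w h_t + b_w)."""
    xs = [emb[c] for c in char_ids]
    n, l = params["W_r"].shape[0], params["Wf_h"].shape[0]
    r = rnn(xs, params["W_r"], params["R_r"], np.zeros(n))                    # r_t, t = 1..T
    h_fwd = rnn(xs, params["Wf_x"], params["Wf_h"], np.zeros(l))              # forward BiRNN states
    h_bwd = rnn(xs[::-1], params["Wb_x"], params["Wb_h"], np.zeros(l))[::-1]  # backward states, re-aligned
    h = [np.concatenate([f, b]) for f, b in zip(h_fwd, h_bwd)]                # h_t = [h_fwd; h_bwd]
    w = [np.exp(params["W_w"] @ ht + params["b_w"]) for ht in h]              # per-character "energy"
    return np.tanh(sum(wt * rt for wt, rt in zip(w, r))), w

# Toy setup: 30-character vocabulary, 8-dim embeddings, small hidden sizes.
V, d, n, l = 30, 8, 16, 6
params = {
    "W_r": rng.normal(0, 0.1, (n, d)), "R_r": rng.normal(0, 0.1, (n, n)),
    "Wf_x": rng.normal(0, 0.1, (l, d)), "Wf_h": rng.normal(0, 0.1, (l, l)),
    "Wb_x": rng.normal(0, 0.1, (l, d)), "Wb_h": rng.normal(0, 0.1, (l, l)),
    "W_w": rng.normal(0, 0.1, (1, 2 * l)), "b_w": np.zeros(1),
}
emb = rng.normal(0, 0.1, (V, d))
vec, energies = encode_word([3, 13, 24, 19, 14, 4], emb, params)  # ids standing in for "anyone"
print(vec.shape, [round(float(e[0]), 3) for e in energies])
```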
Hy80uyQVg
ryEGFD9gl
ICLR.cc/2017/conference/-/paper410/official/review
{"title": "My thoughts", "rating": "5: Marginally below acceptance threshold", "review": "The paper discusses sub modular sum-product networks as a tractable extension for classical sum-product networks. The proposed approach is evaluated on semantic segmentation tasks and some early promising results are provided.\n\nSummary:\n\u2014\u2014\u2014\nI think the paper presents a compelling technique for hierarchical reasoning in MRFs but the experimental results are not yet convincing. Moreover the writing is confusing at times. See below for details.\n\nQuality: I think some of the techniques could be described more carefully to better convey the intuition.\nClarity: Some of the derivations and intuitions could be explained in more detail.\nOriginality: The suggested idea is great.\nSignificance: Since the experimental setup is somewhat limited according to my opinion, significance is hard to judge at this point in time.\n\nDetailed comments:\n\u2014\u2014\u2014\n1. I think the clarity of the paper would benefit significantly from fixes to inaccuracies. E.g., \\alpha-expansion and belief propagation are not `scene-understanding algorithms\u2019 but rather approaches for optimizing energy functions. Computing the MAP state of an SSPN in time sub-linear in the network size seems counterintuitive because it means we are not allowed to visit all the nodes in the network. The term `deep probabilistic model\u2019 should probably be defined. The paper states that InferSSPN computes `the approximate MAP state of the SSPN (equivalently, the optimal parse of the image)\u2019 and I\u2019m wondering how the `approximate MAP state' can be optimal. Etc.\n\n2. Albeit being formulated for scene understanding tasks, no experiments demonstrate the obtained results of the proposed technique. To assess the applicability of the proposed approach a more detailed analysis is required. More specifically, the technique is evaluated on a subset of images which makes comparison to any other approach impossible. According to my opinion, either a conclusive experimental evaluation using, e.g., IoU metric should be given in the paper, or a comparison to publicly available results is possible.\n\n3. To simplify the understanding of the paper a more intuitive high-level description is desirable. Maybe the authors can even provide an intuitive visualization of their approach.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Submodular Sum-product Networks for Scene Understanding
["Abram L. Friesen", "Pedro Domingos"]
Sum-product networks (SPNs) are an expressive class of deep probabilistic models in which inference takes time linear in their size, enabling them to be learned effectively. However, for certain challenging problems, such as scene understanding, the corresponding SPN has exponential size and is thus intractable. In this work, we introduce submodular sum-product networks (SSPNs), an extension of SPNs in which sum-node weights are defined by a submodular energy function. SSPNs combine the expressivity and depth of SPNs with the ability to efficiently compute the MAP state of a combinatorial number of labelings afforded by submodular energies. SSPNs for scene understanding can be understood as representing all possible parses of an image over arbitrary region shapes with respect to an image grammar. Despite this complexity, we develop an efficient and convergent algorithm based on graph cuts for computing the (approximate) MAP state of an SSPN, greatly increasing the expressivity of the SPN model class. Empirically, we show exponential improvements in parsing time compared to traditional inference algorithms such as alpha-expansion and belief propagation, while returning comparable minima.
["Computer vision", "Structured prediction"]
https://openreview.net/forum?id=ryEGFD9gl
https://openreview.net/pdf?id=ryEGFD9gl
https://openreview.net/forum?id=ryEGFD9gl&noteId=Hy80uyQVg
Under review as a conference paper at ICLR 2017SUBMODULAR SUM-PRODUCT NETWORKSFOR SCENE UNDERSTANDINGAbram L. Friesen & Pedro DomingosDepartment of Computer Science and EngineeringUniversity of WashingtonSeattle, WA 98195, USAfafriesen,pedrod g@cs.washington.eduABSTRACTSum-product networks (SPNs) are an expressive class of deep probabilisticmodels in which inference takes time linear in their size, enabling them tobe learned effectively. However, for certain challenging problems, such asscene understanding, the corresponding SPN has exponential size and is thusintractable. In this work, we introduce submodular sum-product networks(SSPNs), an extension of SPNs in which sum-node weights are defined by asubmodular energy function. SSPNs combine the expressivity and depth of SPNswith the ability to efficiently compute the MAP state of a combinatorial numberof labelings afforded by submodular energies. SSPNs for scene understandingcan be understood as representing all possible parses of an image over arbitraryregion shapes with respect to an image grammar. Despite this complexity, wedevelop an efficient and convergent algorithm based on graph cuts for computingthe (approximate) MAP state of an SSPN, greatly increasing the expressivity ofthe SPN model class. Empirically, we show exponential improvements in parsingtime compared to traditional inference algorithms such as -expansion and beliefpropagation, while returning comparable minima.1 I NTRODUCTIONSum-product networks (SPNs) (Poon & Domingos, 2011; Gens & Domingos, 2012) are a class ofdeep probabilistic models that consist of many layers of hidden variables and can have unboundedtreewidth. Despite this depth and corresponding expressivity, exact inference in SPNs is guaranteedto take time linear in their size, allowing their structure and parameters to be learned effectivelyfrom data. However, there are still many models for which the corresponding SPN has size expo-nential in the number of variables and is thus intractable. For example, in scene understanding (orsemantic segmentation), the goal is to label each pixel of an image with its semantic class, whichrequires simultaneously detecting, segmenting, and recognizing each object in the scene. Even thesimplest SPN for scene understanding is intractable, as it must represent the exponentially large setof segmentations of the image into its constituent objects.Scene understanding is commonly formulated as a flat Markov (or conditional) random field (MRF)over the pixels or superpixels of an image (e.g., Shotton et al. (2006); Gould et al. (2009)). Inferencein MRFs is intractable in general; however, there exist restrictions of the MRF that enable tractableinference. For pairwise binary MRFs, if the energy of each pairwise term is submodular (alterna-tively, attractive or regular) (Kolmogorov & Zabih, 2004), meaning that each pair of neighboringpixels prefers to have the same label, then the exact MAP labeling of the MRF can be recovered inlow-order polynomial time through the use of a graph cut algorithm1(Greig et al., 1989; Boykov &Kolmogorov, 2004). This result from the binary case has been used to develop a number of power-ful approximate algorithms for the multi-label case (e.g., Komodakis et al. (2007); Lempitsky et al.(2010)), the most well-known of which is -expansion (Boykov et al., 2001), which efficiently re-turns an approximate labeling that is within a constant factor of the true optimum by solving a seriesof binary graph cut problems. 
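To make the binary graph-cut result cited above concrete (Greig et al., 1989; Kolmogorov & Zabih, 2004), the sketch below computes the exact MAP labeling of a binary pairwise MRF with attractive (Potts) terms by a single min-cut. It illustrates only the classical construction, not SSPN inference; the 1-D toy problem, the unary costs, and the use of networkx max-flow are assumptions made for the example.

```python
import numpy as np
import networkx as nx

def binary_mrf_map(unary, pairwise_w):
    """Exact MAP of a binary pairwise MRF with Potts (submodular) terms via min-cut.

    unary[p, k]      : cost of assigning label k in {0, 1} to pixel p
    pairwise_w[p, q] : weight w_pq >= 0 paid when neighbours p and q disagree
    Pixels kept on the source side of the cut receive label 0, the rest label 1.
    """
    G = nx.DiGraph()
    for p in range(unary.shape[0]):
        G.add_edge("s", p, capacity=float(unary[p, 1]))  # cut iff p takes label 1
        G.add_edge(p, "t", capacity=float(unary[p, 0]))  # cut iff p takes label 0
    for (p, q), w in pairwise_w.items():
        G.add_edge(p, q, capacity=float(w))              # cut iff p = 0 and q = 1
        G.add_edge(q, p, capacity=float(w))              # cut iff p = 1 and q = 0
    cut_value, (source_side, _) = nx.minimum_cut(G, "s", "t")
    labels = np.array([0 if p in source_side else 1 for p in range(unary.shape[0])])
    return labels, cut_value

# Toy 1-D "image" of 6 pixels: unaries pull the two halves toward different
# labels, and attractive pairwise terms link neighbouring pixels.
unary = np.array([[0.2, 1.0], [0.4, 0.9], [0.3, 1.1],
                  [1.0, 0.2], [0.8, 0.3], [1.2, 0.1]])
pairwise = {(p, p + 1): 0.5 for p in range(5)}
labels, energy = binary_mrf_map(unary, pairwise)
print(labels, energy)   # expected: [0 0 0 1 1 1] with energy 2.0
```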
Unfortunately, pairwise MRFs are insufficiently expressive for com-1Formally, a min-cut/max-flow algorithm(Ahuja et al., 1993) on a graph constructed from the MRF.1Under review as a conference paper at ICLR 2017plex tasks such as scene understanding, as they are unable to model high-level relationships, such asconstituency (part-subpart) or subcategorization (superclass-subclass), between arbitrary regions ofthe image, unless these can be encoded in the labels of the MRF and enforced between pairs of (su-per)pixels. However, this encoding requires a combinatorial number of labels, which is intractable.Instead, higher-level structure is needed to efficiently represent these relationships.In this paper, we present submodular sum-product networks (SSPNs), a novel model that combinesthe expressive power of sum-product networks with the tractable segmentation properties of sub-modular energies. An SSPN is a sum-product network in which the weight of each child of a sumnode corresponds to the energy of a particular labeling of a submodular energy function. Equiva-lently, an SSPN over an image corresponds to an instantiation of all possible parse trees of that imagewith respect to a given image grammar, where the probability distribution over the segmentations ofa production on a particular region is defined by a submodular random field over the pixels in thatregion. Importantly, SSPNs permit objects and regions to take arbitrary shapes , instead of restrict-ing the set of possible shapes as has previously been necessary for tractable inference. By exploitingsubmodularity, we develop a highly-efficient approximate inference algorithm, I NFER SSPN, forcomputing the MAP state of the SSPN (equivalently, the optimal parse of the image). I NFER SSPNis an iterative move-making-style algorithm that provably converges to a local minimum of the en-ergy, reduces to -expansion in the case of a trivial grammar, and has complexity O(jGjc(n))foreach iteration, where c(n)is the complexity of a single graph cut and jGjis the size of the grammar.As with other move-making algorithms, I NFER SSPN converges to a local minimum with respectto an exponentially-large set of neighbors, overcoming many of the main issues of local minima(Boykov et al., 2001). Empirically, we compare I NFER SSPN to belief propagation (BP) on a multi-level MRF and to -expansion on an equivalent flat MRF. We show that I NFER SSPN parses imagesin exponentially less time than both of these while returning energies comparable to -expansion,which is guaranteed to return energies within a constant factor of the true optimum.The literature on using higher-level information for scene understanding is vast. We briefly dis-cuss the most relevant work on hierarchical random fields over multiple labels, image grammars forsegmentation, and neural parsing methods. Hierarchical random field models (e.g., Russell et al.(2010); Lempitsky et al. (2011)) define MRFs with multiple layers of hidden variables and thenperform inference, often using graph cuts to efficiently extract the MAP solution. However, thesemodels are typically restricted to just a few layers and to pre-computed segmentations of the image,and thus do not allow arbitrary region shapes. In addition, they require a combinatorial number oflabels to encode complex grammar structures. 
Previous grammar-based methods for scene under-standing, such as Zhu & Mumford (2006) and Zhao & Zhu (2011), have used MRFs with AND-ORgraphs (Dechter & Mateescu, 2007), but needed to restrict their grammars to a very limited set ofproductions and region shapes in order to perform inference in reasonable time, and are thus muchless expressive than SSPNs. Finally, neural parsing methods such as those in Socher et al. (2011)and Sharma et al. (2014) use recursive neural network architectures over superpixel-based featuresto segment an image; thus, these methods also do not allow arbitrary region shapes. Further, Socheret al. (2011) greedily combine regions to form parse trees, while (Sharma et al., 2014) use randomlygenerated parse trees, whereas inference in SSPNs finds the (approximately) optimal parse tree.2 S UBMODULAR SUM -PRODUCT NETWORKSIn the following, we define submodular sum-product networks (SSPNs) in terms of an image gram-mar because this simplifies the exposition with respect to the structure of the sum-product network(SPN) and because scene understanding is the domain we use to evaluate SSPNs. However, it is notnecessary to define SSPNs in this way, and our results extend to any SPN with sum-node weightsdefined by a random field with submodular potentials. Due to lack of space we refer readers to Gens& Domingos (2012), Poon & Domingos (2011) and Gens & Domingos (2013) for SPN details.With respect to scene understanding, an SSPN defines a generative model of an image and a hierar-chy of regions within that image where each region is labeled with a production (and implicitly bythe head symbol of that production), can have arbitrary shape, and is a subset of the region of eachof its ancestors. An example of an SSPN for parsing a farm scene is shown in Figure 1. Given astarting symbol and the region containing the entire image, the generative process is to first choose aproduction of that symbol into its constituent symbols and then choose a segmentation of the regioninto a set of mutually exclusive and exhaustive subregions, with one subregion per constituent sym-2Under review as a conference paper at ICLR 2017Figure 1: A partial (submodular) sum-product network for parsing an image with respect to the grammarshown. There is a sum node for each nonterminal symbol with a child sum node for each production of thatsymbol. Each sum node for a production has a child product nodefor each possible segmentation of its region.bol. The process then recurses, choosing a production and a segmentation for each subregion givenits symbol. The recursion terminates when one of the constituents is a terminal symbol, at whichpoint the pixels corresponding to that region of the image are generated. This produces a parse treein which each internal node is a pair containing a region and a production of the region, and theleaves are regions of pixels. For each node in a parse tree, the regions of its children are mutuallyexclusive and exhaustive with respect to the parent node’s region. As in a probabilistic context-freegrammar (PCFG) (Jurafsky & Martin, 2000), productions are chosen from a categorical distributionover the productions of the current symbol. 
Segmentations of a given region, however, are sampledfrom a (submodular) Markov random field (MRF) over the pixels in the region.Formally, let G= (N;;R;S; w)be a non-recursive stochastic grammar, where Nis a finiteset of nonterminal symbols; is a finite set of terminal symbols; Ris a finite set of productionsR=fv:X!Y1Y2:::Y kgwith head symbol X2Nand constituent symbols Yi2N[fori= 1:::k andk >0;S2Nis a distinguished start symbol, meaning that it does not appearon the right-hand side of any production; and ware the weights that parameterize the probabilitydistribution defined by G. For a production v2tin a parse tree t2TG, we denote its regionasPvand its parent and children as pa (v)and ch (v), respectively, where TGis the set of possibleparse trees under the grammar G. The labeling corresponding to the segmentation of the pixelsinPvfor production v:X!Y1:::Y kisyv2YjPvjv, whereYv=fY1;:::;Y kg. The regionof any production v2tis the set of pixels in Ppa(v)whose assigned label is the head of v, i.e.,Pv=fp2P pa(v):ypa(v)p =head(v)g, except for the production of the start symbol, which hasthe entire image as its region. The probability of an image xispw(x) =Pt2TGpw(t;x), wherethe joint probability of parse tree tand the image is the product over all productions in tof theprobability of choosing that production vand then segmenting its region Pvaccording to yv:pw(t;x) =1Zexp(Ew(t;x)) =1Zexp(Xv2tEvw(v;yv;head(v);Pv;x)):Here,Z=Pt2TGexp(Ew(t;x))is the partition function, ware the model parameters, and Eisthe energy function. In the following, we will simplify notation by omitting head (v),Pv,x,w, andsuperscriptvfrom the energy function when they are clear from context. The energy of a productionand its segmentation on the region Pvare given by a pairwise Markov random field (MRF) asE(v;yv) =Pp2Pvvp(yvp;w) +P(p;q)2Evvpq(yvp;yvq;w);wherevpandvpqare the unary andpairwise costs of the segmentation MRF, fyvp:p2Pvgis the labeling defining the segmentation ofthe pixels in the current region, and Evare the edges inPv. Without loss of generality we assumethatEvcontains only one of (p;q)or(q;p), since the two terms can always be combined. Here, vpis the per-pixel data cost and vpqis the boundary term, which penalizes adjacent pixels within thesame region that have different labels. We describe these terms in more detail below. In general,even computing the segmentation for a single production is intractable. In order to permit efficientinference, we require that vpqsatisfies the submodularity condition vpq(Y1;Y1) +vpq(Y2;Y2)vpq(Y1;Y2) +vpq(Y2;Y1)for all productions v:X!Y1Y2once the grammar has been convertedto a grammar in which each production has only two constituents, which is always possible andin the worst case increases the grammar size quadratically (Jurafsky & Martin, 2000; Chomsky,3Under review as a conference paper at ICLR 20171959). We also require for every production v2Rand for every production cthat is a descendantofvin the grammar that vpq(yvp;yvq)cpq(ycp;ycq)for all possible labelings (yvp;yvq;ycp;ycq), whereyvp;yvq2Yvandycp;ycq2Yc. This condition ensures that segmentations for higher-level productionsare submodular, no matter what occurs below them. 
It also encodes the reasonable assumption thathigher-level abstractions are separated by stronger, shorter boundaries (relative to their size), whilelower-level objects are more likely to be composed of smaller, more intricately-shaped regions.The above model defines a sum-product network containing a sum node for each possible region ofeach nonterminal, a product node for each segmentation of each production of each possible regionof each nonterminal, and a leaf function on the pixels of the image for each possible region of theimage for each terminal symbol. The children of the sum node sfor nonterminal Xswith regionPsare all product nodes rwith a production vr:Xs!Y1:::Y kand regionPvr=Ps. Eachproduct node corresponds to a labeling yvrofPvrand the edge to its parent sum node has weightexp(E(v;yvr;Pvr)). The children of product node rare the sum or leaf nodes with matchingregions that correspond to the constituent nonterminals or terminals of vr, respectively. Since theweights of the edges from a sum node to its children correspond to submodular energy functions,we call this a submodular sum-product network (SSPN).A key benefit of SSPNs in comparison to previous grammar-based approaches is that regions canhave arbitrary shapes and are not restricted to a small class of shapes such as rectangles (Poon &Domingos, 2011; Zhao & Zhu, 2011). This flexibility is important when parsing images, as real-world objects and abstractions can take any shape, but it comes with a combinatorial explosion ofpossible parses. However, by exploiting submodularity, we are able to develop an efficient inferencealgorithm for SSPNs, allowing us to efficiently parse images into a hierarchy of arbitrarily-shapedregions and objects, yielding a very expressive model class. This efficiency is despite the size of theunderlying SSPN, which is in general far too large to explicitly instantiate.2.1 MRF SEGMENTATION DETAILSAs discussed above, the energy of each segmentation of a region for a given production is defined bya submodular MRF E(v;yv) =Pp2Pvvp(yvp;w) +P(p;q)2Evvpq(yvp;yvq;w):The unary terms inE(v;yv)differ depending on whether the label yvpcorresponds to a terminal or nonterminal symbol.For a terminal T2, the unary terms are a linear function of the image features vp(yvp=T;w) =wPCv+w>TUp, wherewPCvis an element of wthat specifies the cost of vrelative to other productionsandUpis a feature vector representing the local appearance of pixel p. In our experiments, Upis theoutput of a deep neural network. For labels corresponding to a nonterminal X2N, the unary termsarevp(yvp=X;w) =wPCv+cp(ycp), wherecis the child production of vin the current parse treethat contains p, such thatp2Pc. This dependence makes inference challenging, because the choiceof children in the parse tree itself depends on the region that is being parsed as X, which dependson the segmentation this unary is being used to compute.The pairwise terms in E(v;yv)are a recursive version of the standard contrast-dependent pairwiseboundary potential (e.g., Shotton et al. (2006)) defined for each production vand each pair of adja-cent pixelsp;qasvpq(yvp;yvq;w) =wBFvexp(1jjBpBqjj2)[yvp6=yvq]+cpq(ycp;ycq;w), whereis half the average image contrast between all adjacent pixels in an image, wBFvis the boundaryfactor that controls the relative cost of this term for each production, Bpis the pairwise per-pixelfeature vector, cis the same as in the unary term above, and []is the indicator function, which hasvalue 1when its argument is true and is 0otherwise. 
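Below is a small numerical sketch of the contrast-dependent boundary weight just defined, omitting the recursive child term. The normalization by half the average squared contrast follows the standard contrast-sensitive potential and is an assumption where the scanned formula is ambiguous, as are the toy image, the intensity features, and the scale factor.

```python
import numpy as np

def contrast_weights(B, scale=1.0):
    """Contrast-dependent boundary weights on a 4-connected grid.

    B is an (H, W) array of per-pixel boundary features (intensities here).
    Returns a dict mapping neighbouring pixel pairs to
    scale * exp(-(B_p - B_q)^2 / beta_bar), where beta_bar is half the
    average squared contrast over all adjacent pairs, so that weights are
    small across strong edges and large inside homogeneous regions.
    """
    H, W = B.shape
    pairs = [((r, c), (r, c + 1)) for r in range(H) for c in range(W - 1)]
    pairs += [((r, c), (r + 1, c)) for r in range(H - 1) for c in range(W)]
    diffs = np.array([(B[p] - B[q]) ** 2 for p, q in pairs])
    beta_bar = 0.5 * diffs.mean() + 1e-12        # half the average squared contrast
    w = scale * np.exp(-diffs / beta_bar)
    return dict(zip(pairs, w))

# Toy 4x4 image with a vertical edge between columns 1 and 2.
img = np.array([[0.0, 0.0, 1.0, 1.0]] * 4)
weights = contrast_weights(img, scale=10.0)
print(weights[((0, 0), (0, 1))])   # within a flat region: large weight
print(weights[((0, 1), (0, 2))])   # across the edge: near zero
```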
For each pair of pixels (p;q), only one suchterm will ever be non-zero, because once two pixels are labeled differently at a node in the parsetree, they are placed in separate subtrees and thus never co-occur in any region below the currentnode. In our experiments, Bpare the intensity values for each pixel.3 I NFERENCEScene understanding (or semantic segmentation) requires labeling each pixel of an image with itssemantic class. By constructing a grammar containing a set of nonterminals in one-to-one corre-spondence with the semantic labels and only allowing these symbols to produce terminals, we canrecover the semantic segmentation of an image from a parse tree for this grammar. In the simplestcase, a grammar need contain only one additional production from the start symbol to all othernonterminals. More generally, however, the grammar encodes rich structure about the relationships4Under review as a conference paper at ICLR 2017To improve parse of 1. (re)parse as Y 2. (re)parse as Y given 3. (re)parse as Z 4. (re)parse as Z given 5. fuse with ××××YZABCDfuseX➞YZ×ABCDYZ××CDAB××CBYZZYYYZZEFHGEFGHABCD- confusing part: not clear that X->Y->AB in subregion of LHS figure is just sub-selecting from existing parse of Y->AB over entire region - need to explain clearly what’s happening...DA(a)To improve parse of 1. (re)parse as Y 2. (re)parse as Y given 3. (re)parse as Z 4. (re)parse as Z given 5. fuse with ××××YZABCDfuseX➞YZ×ABCDYZ××CDAB××CBYZZYYYZZEFHGEFGHABCD- confusing part: not clear that X->Y->AB in subregion of LHS figure is just sub-selecting from existing parse of Y->AB over entire region - need to explain clearly what’s happening...DA (b)Figure 2: The two main components of I NFER SSPN: (a) Parsing a region PasX!YZ by fusing twoparses of PasY!ABand asZ!CD, and (b) Improving the parse of PasX!YZby (re)parsing eachof its subregions, taking the union of the new YandZparses of P, and then fusing these new parses.between image regions at various levels of abstraction, including concepts such as composition andsubcategorization. Identifying the relevant structure and relationships for a particular image entailsfinding the best parse of an image xgiven a grammar G(or, equivalently, performing MAP inferencein the corresponding SSPN), i.e., t= arg maxt2TGp(tjx) = arg mint2TGPv2tE(v;yv;x).In PCFGs over sentences (Jurafsky & Martin, 2000), the optimal parse can be recovered exactly intimeO(n3jGj)with the CYK algorithm (Hopcroft & Ullman, 1979), where nis the length of the sen-tence andjGjis the number of productions in the grammar, by iterating over all possible split pointsof the sentence and using dynamic programming to avoid recomputing sub-parses. Unfortunately,for images and other 2-D data types, there are 2npossible segmentations of the data for each binaryproduction, rendering this approach infeasible in general. With an SSPN, however, it is possible toefficiently compute the approximate optimal parse of an image. 
In our algorithm, I NFER SSPN, thisis done by iteratively constructing parses of different regions in a bottom-up fashion.3.1 P ARSE TREE CONSTRUCTIONGiven a production v:X!Y1Y2and two parse trees t1;t2over the same region Pand with headsymbolsY1;Y2, respectively, then for any labeling yv2fY1;Y2gjPjofPwe can construct a thirdparse treetXover regionPwith root production v, labeling yv, and subtrees t01;t02over regionsP1;P2, respectively, such that Pi=fp2P :yvp=Yigandt0i=ti\Pifor eachi, where theintersection of a parse tree and a region t\P is the new parse tree resulting from intersecting Pwith the region at each node in t. Of course, the quality of the resulting parse tree, tX, dependson the particular labeling (segmentation) yvused. Recall that a parse tree ton regionPhas energyE(t;P) =Pv2tE(v;yv;Pv), which can be written as E(t;P) =Pp2Ptp+P(p;q)2Etpq, wheretp=Pv2tvp(yvp)[p2Pv]andtpq=Pv2tvpq(yvp;yvq)[(p;q)2Ev]. This allows us to definethefusion operation, which is a key subroutine in I NFER SSPN. Note that ijis the Kronecker delta.Definition 1. For a production v:X!Y1;Y2and two parse trees t1;t2over regionPwith headsymbolsY1;Y2thentXis the fusion oft1andt2constructed from the minimum energy labelingyv= arg miny2YjPjvE(v;t1;t2;y), whereE(v;t1;t2;y) =Xp2Pt1pypY1+t2pypY2+X(p;q)2Et1pqypY1yqY1+t2pqypY2yqY2+vpq(Y1;Y2)ypY1yqY2:Figure 2a shows an example of fusing two parse trees to create a new parse tree. Although fusionrequires finding the optimal labeling from an exponentially large set, the energy is submodular andcan be efficiently optimized with a single graph cut. All proofs are presented in the appendix.Proposition 1. The energyE(v;t1;t2;yv)of the fusion of parse trees t1;t2over regionPwith headsymbolsY1;Y2for a production v:X!Y1Y2is submodular.Once a parse tree has been constructed, I NFER SSPN then improves that parse tree on subsequentiterations. The following result shows how I NFER SSPN can improve a parse tree while ensuringthat the energy of that parse tree never gets worse.Lemma 1. Given a labeling yvwhich fuses parse trees t1;t2intotwith root production v, energyE(t;P) =E(v;t1;t2;yv), and subtree regions P1\P 2=;defined by yv, then any improvement5Under review as a conference paper at ICLR 2017inE(t1;P1)also improves E(t;P)by at least , regardless of any change in E(t1;PnP 1).Finally, it will be useful to define the union t=t1[t2of two parse trees t1;t2that have the sameproduction at their root but are over disjoint regions P1\P 2=;, as the parse tree twith regionP=P1[P 2and in which all nodes that co-occur in both t1andt2(i.e., have the same path to themfrom the root and have the same production) are merged to form a single node in t. In general, tmay be an inconsistent parse tree, as the same symbol may be parsed as two separate productions, inwhich case we define the energy of the boundary terms between the pixels parsed as these separateproductions to be infinite.3.2 I NFER SSPNPseudocode for our algorithm, I NFER SSPN, is presented in Algorithm 1. I NFER SSPN is an iterativebottom-up algorithm based on graph cuts (Kolmogorov & Zabih, 2004) that provably converges to alocal minimum of the energy function. In its first iteration, I NFER SSPN constructs a parse tree overthe full image for each production in the grammar. The parse of each terminal production is trivial toconstruct and simply labels each pixel as the terminal symbol. 
The parse for every other productionv:X!Y1Y2is constructed by choosing productions for Y1andY2and fusing their correspondingparse trees to get a parse of the image as X. Since the grammar is non-recursive, we can constructa directed acyclic graph (DAG) containing a node for each symbol and an edge from each symbolto each constituent of each production of that symbol and then traverse this graph from the leaves(terminals) to the root (start symbol), fusing the children of each production of each symbol whenwe visit that symbol’s node. Of course, to fuse parses of Y1andY2into a parse of X, we need tochoose which production of Y1(andY2) to fuse; this is done by simply choosing the production ofY1(andY2) that has the lowest energy over the current region. The best parse of the image, ^t, nowcorresponds to the lowest-energy parse of all productions of the start symbol.Further iterations of I NFER SSPN improve ^tin a flexible manner that allows any of its productionsor labelings to change, while also ensuring that its energy never increases. I NFER SSPN does this byagain computing parses of the full image for each production in the grammar. This time, however,when parsing a symbol X, INFER SSPN independently parses each region of the image that wasparsed as any production of Xin^t(none of these regions will overlap because the grammar is non-recursive) and then parses the remainder of the image given these parses of subregions of the image,meaning that the pixels in these other subregions are instantiated in the MRF but fixed to the labelsthat the subregion parses specify. The parse of the image as Xis then constructed as the union ofthese subregion parses. This procedure ensures that the energy will never increase (see Theorem 1and Lemma 1), but also that any subtree of ^tcan be replaced with another subtree if it results inlower energy. Figure 2b shows a simple example of updating a parse of a region as X!YZ.Further, this (re)parsing of subregions can again be achieved in a single bottom-up pass through thegrammar DAG, resulting in a very efficient algorithm for SSPN inference. This is because each pixelonly appears in at most one subregion for any symbol, and thus only ever needs to be parsed onceper production. See Algorithm 1 for more details.3.3 A NALYSISAs shown in Theorem 1, I NFER SSPN always converges to a local minimum of the energy func-tion. Similar to other graph-cut-based algorithms, such as -expansion (Boykov et al., 2001), I N-FERSSPN explores an exponentially large set of moves at each step, so the returned local minimumis much better than those returned by more local procedures, such as max-product belief propaga-tion. Further, we observe convergence within a few iterations in all experiments, with the majorityof the energy improvement occurring in the first iteration.Theorem 1. Given a parse (tree) ^tofSover the entire image with energy E(^t), each iteration ofINFER SSPN constructs a parse (tree) tofSover the entire image with energy E(t)E(^t)andsince the minimum energy of an image parse is finite, INFER SSPN will always converge.As shown in Proposition 2, each iteration of I NFER SSPN takes time O(jGjc(n)), wherenis thenumber of pixels in the image and c(n)is the complexity of the underlying graph cut algorithmused, which is low-order polynomial in the worst-case but nearly linear-time in practice (Boykov &Kolmogorov, 2004; Boykov et al., 2001).Proposition 2. 
Letc(n)be the time complexity of computing a graph cut on npixels andjGjbe thesize of the grammar defining the SSPN, then each iteration of INFER SSPN takes timeO(jGjc(n)).6Under review as a conference paper at ICLR 2017Algorithm 1 Compute the (approximate) MAP assignment of the SSPN variables (i.e., the produc-tions and labelings) defined by an image and a grammar. This is equivalent to parsing the image.Input: The image x, a non-recursive grammar G= (N;;R;S; w), and (optional) input parse ^t.Output: A parse of the image, t, with energy E(t;x)E(^t;x).1:function INFER SSPN( x;G;^t)2:T;E empty lists of parse trees and energies, respectively, both of length jRj+jj3: foreach terminal Y2do4:T[Y] the trivial parse with all pixels parsed as Y5:E[Y] Pp2xw>YUp6: while the energy of any production of the start symbol Shas not converged do7: foreach symbol X2N, in reverse topological order do //as defined by the DAG of G8: foreach subtree ^tiof^trooted at a production uiwith headXdo9:Pi;yi the region that ^tiis over and its labeling in ^ti //fPigare all disjoint10: foreach production vj:X!Y1Y2do //iterate over all productions of X11: tij;eij FUSE(Pi;yi;vj;T) //parsePiasvjby fusing parses of Y1andY212:PX all pixels that are not in any region Pi13: foreach production vj:X!Y1Y2do //iterate over all productions of X14: yrand a random labeling of PX//use random for initialization15: tX;eX FUSE(PX;yrand;vj;T;([itij)) //parsePXasvjgiven ([itij)16: update lists: T[vj] ([itij)[tXandE[vj] Pieij+eXfor allvjwith headX17: ^t;^e the production of Swith the lowest energy in Eand its energy18: return ^t;^eInput: A regionP, a labeling yofP, a production v:X!Y1Y2, a list of parses T, and anoptional parse tPof pixels not inP, used to set pairwise terms of edges that are leaving P.Output: A parse tree rooted at vover regionPand the energy of that parse tree.1:function FUSE(P;y;v;T;tP)2: foreachYiwithi21;2do3:ui production of YiinTwith lowest energy over fp:yp=YiggiventP4: create submodular energy function E(v;y;P;x)onPfromT[u1],T[u2], andtP5:yv;ev (arg) min yE(v;y;P;x) //label each pixel in PasY1orY2using graph cuts6:tv combineT[u1]andT[u2]according to yvand appendvas the root7: returntv;evNote that a straightforward application of -expansion to image parsing that uses one label for everypossible parse in the grammar requires an exponential number of labels in general.INFER SSPN can be extended to productions with more than two constituents by simply replac-ing the internal graph cut used to fuse subtrees with a multi-label algorithm such as -expansion.INFER SSPN would still converge because each subtree would still never decrease in energy. Analgorithm such as QPBO (Kolmogorov & Rother, 2007) could also be used, which would allow thesubmodularity restriction to be relaxed. Finally, running I NFER SSPN on the grammar containingk1binary productions that results from converting a grammar with a single production on k>2constituents is equivalent to running -expansion on the kconstituents.4 E XPERIMENTSWe evaluated I NFER SSPN by parsing images from the Stanford background dataset (SBD) usinggrammars with generated structure and weights inferred from the pixel labels of the images weparsed. SBD is a standard semantic segmentation dataset containing images with an average size of320240pixels and a total of 8labels. 
The input features we used were from the Deeplab sys-tem (Chen et al., 2015; 2016) trained on the same images used for evaluation (note that we are notevaluating learning and thus use the same features for each algorithm and evaluate on the trainingdata in order to separate inference performance from generalization performance). We compared I N-FERSSPN to-expansion on a flat pairwise MRF and to max-product belief propagation (BP) on amulti-level (3-D) pairwise grid MRF. Details of these models are provided in the appendix. We note7Under review as a conference paper at ICLR 2017that the flat encoding for -expansion results in a label for each path in the grammar, where thereare an exponential number of such paths in the height of the grammar. However, once -expansionconverges, its energy is within a constant factor of the global minimum energy (Boykov et al., 2001)and thus serves as a good surrogate for the true global minimum, which is intractable to compute.We compared these algorithms by varying three different parameters: boundary strength (strength ofpairwise terms), grammar height, and number of productions per nonterminal. Each grammar usedfor testing contained a start symbol, multiple layers of nonterminals, and a final layer of nonterminalsin one-to-one correspondence with the eight terminal symbols, each of which had a single productionthat produces a region of pixels. The start symbol had one production for each pair of symbols inthe layer below it, and the last nonterminal layer (ignoring the nonterminals for the labels) hadproductions for each pair of labels, distributed uniformly over this last nonterminal layer.Boundary strength. Increasing the boundary strength of an MRF makes inference more challeng-ing, as individual pixel labels cannot be easily flipped without large side effects. To test this, weconstructed a grammar as above with 2layers of nonterminals (not including the start symbol), eachcontaining 3nonterminal symbols with 4binary productions to the next layer. We vary wBFvfor allvand plot the mean average pixel accuracy returned by each algorithm (the x-axis is log-scale) inFigure 3a. I NFER SSPN returns parses with almost identical accuracy (and energy) to -expansion.BP also returns comparable accuracies, but almost always returns invalid parses with infinite energy(if it converges at all) that contain multiple productions of the same object or a production of somesymbol Y even though a pixel is labeled as symbol X.0.1 0.3 1 3 10 30 100Boundary scale factor0.50.60.70.80.9AccuracyBPα-expSSPN0 1 2 3 4 5 6Grammar height10002000300040005000Time (s)BPα-expSSPN1 2 3 4 5 6#productions per nonterminal100020003000Time (s)BPα-expSSPNFigure 3: The mean average pixel accuracy of the returned solution and total running time for each of beliefpropagation, -expansion, and I NFER SSPN when varying (a) boundary strength, (b) grammar height, and (c)number of productions. Each data point is the average value over (the same) 10images. Missing data pointsindicate out of memory errors. Figures 4, 5, and 6 in the appendix show all results for each experiment.Grammar height. In general, the number of paths in the grammar is exponential in its height, sothe height of the grammar controls the complexity of inference and thus the difficulty of parsingimages. For this experiment, we set the boundary scale factor to 10and constructed a grammar withfour nonterminals per layer, each with three binary productions to the next layer. 
Figure 3b showsthe effect of grammar height on total inference time (to convergence or a maximum number of iter-ations, whichever first occurred). As expected from Proposition 2, the time taken for I NFER SSPNscales linearly with the height of the grammar, which is within a constant factor of the size of thegrammar when all other parameters are fixed. Similarly, inference time for both -expansion and BPscaled exponentially with the height of the grammar because the number of labels for both increasescombinatorially. Again, the energies and corresponding accuracies achieved by I NFER SSPN werenearly identical to those of -expansion (see Figure 5 in the appendix).Productions per nonterminal. The number of paths in the grammar is also directly affected by thenumber of productions per symbol. For this experiment, we increased each pairwise term by a factorof10and constructed a grammar with 2layers of nonterminals, each with 4nonterminal symbols.Figure 3c shows the effect of increasing the number of productions per nonterminal, which againdemonstrates that I NFER SSPN is far more efficient than either -expansion or BP as the complexityof the grammar increases, while still finding comparable solutions (see Figure 6 in the appendix).5 C ONCLUSIONThis paper proposed submodular sum-product networks (SSPNs), a novel extension of sum-productnetworks that can be understood as an instantiation of an image grammar in which all possibleparses of an image over arbitrary shapes are represented. Despite this complexity, we presented8Under review as a conference paper at ICLR 2017INFER SSPN, a move-making algorithm that exploits submodularity in order to find the (approxi-mate) MAP state of an SSPN, which is equivalent to finding the (approximate) optimal parse of animage. Analytically, we showed that I NFER SSPN is both very efficient – each iteration takes timelinear in the size of the grammar and the complexity of one graph cut – and convergent. Empiri-cally, we showed that I NFER SSPN achieves accuracies and energies comparable to -expansion,which is guaranteed to return optima within a constant factor of the global optimum, while takingexponentially less time to do so.We have begun work on learning the structure and parameters of SSPNs from data. This is a particu-larly promising avenue of research because many recent works have demonstrated that learning boththe structure and parameters of sum-product networks from data is feasible and effective, despite thewell-known difficulty of grammar induction. We also plan to apply SSPNs to additional domains,such as activity recognition, social network modeling, and probabilistic knowledge bases.ACKNOWLEDGMENTSAF would like to thank Robert Gens and Rahul Kidambi for useful discussions and insights, andGena Barnabee for assisting with Figure 1 and for feedback on this document. This research waspartly funded by ONR grant N00014-16-1-2697 and AFRL contract FA8750-13-2-0019. The viewsand conclusions contained in this document are those of the authors and should not be interpretedas necessarily representing the official policies, either expressed or implied, of ONR, AFRL, or theUnited States Government.REFERENCESRavindra K. Ahuja, Thomas L. Magnanti, and James B. Orlin. Network flows: theory, algorithmsand applications. Network , 1:864, 1993.Yuri Boykov and Vladimir Kolmogorov. An experimental comparison of min-cut/max-flow algo-rithms for energy minimization in vision. 
IEEE Transactions on Pattern Analysis and MachineIntelligence , 26(9):1124–1137, 2004.Yuri Boykov, Olga Veksler, and Ramin Zabih. Fast approximate energy minimization via graph cuts.IEEE Transactions on Pattern Analysis and Machine Intelligence , 23(11):1222–1239, 2001.Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. Se-mantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs. Pro-ceedings of the International Conference on Learning Representations , 2015. URL http://arxiv.org/abs/1412.7062 .Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille.DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution,and Fully Connected CRFs. In ArXiv e-prints , 2016. ISBN 9783901608353. URL http://arxiv.org/abs/1412.7062 .Noam Chomsky. On Certain Formal Properties of Grammars. Information and Control , 2:137–167,1959. ISSN 07745141.Rina Dechter and Robert Mateescu. AND/OR search spaces for graphical models. Artificial intelli-gence , 171:73–106, 2007.Robert Gens and Pedro Domingos. Discriminative learning of sum-product networks. In Advancesin Neural Information Processing Systems , pp. 3239–3247, 2012. ISBN 9781627480031.Robert Gens and Pedro Domingos. Learning the structure of sum-product networks. In Proceedingsof the 30th International Conference on Machine Learning , pp. 873–880, 2013.Stephen Gould, Richard Fulton, and Daphne Koller. Decomposing a scene into geometric and se-mantically consistent regions. In Proceedings of the IEEE International Conference on ComputerVision , pp. 1–8, 2009.9Under review as a conference paper at ICLR 2017D. M. Greig, B.T. Porteous, and A. H. Seheult. Exact maximum a posteriori estimation for binaryimages. Journal of the Royal Statistical Society. Series B (Methodological) , 51(2):271–279, 1989.John Hopcroft and Jeffrey Ullman. Introduction to Automata Theory, Languages, and Computation .Addison-Wesley, Reading MA, 1979.Daniel S. Jurafsky and James H. Martin. Speech and Language Processing: An Introduction to Nat-ural Language Processing, Computational Linguistics, and Speech Recognition . Prentice Hall,2000. ISBN 9780135041963. doi: 10.1162/089120100750105975.Vladimir Kolmogorov and Carsten Rother. Minimizing nonsubmodular functions with graph cuts -a review. IEEE transactions on pattern analysis and machine intelligence , 29(7):1274–9, 2007.ISSN 0162-8828. doi: 10.1109/TPAMI.2007.1031.Vladimir Kolmogorov and Ramin Zabih. What Energy Functions Can Be Minimized via GraphCuts? IEEE Transactions on Pattern Analysis and Machine Intelligence , 26(2):147–159, 2004.ISSN 01628828. doi: 10.1109/TPAMI.2004.1262177.Nikos Komodakis, Georgios Tziritas, and Nikos Paragios. Fast, approximately optimal solutionsfor single and dynamic MRFs. In Proceedings of the IEEE Computer Society Conference onComputer Vision and Pattern Recognition , 2007. ISBN 1424411807. doi: 10.1109/CVPR.2007.383095.Victor Lempitsky, Carsten Rother, Stefan Roth, and Andrew Blake. Fusion Moves for MarkovRandom Field Optimization. IEEE Transactions on Pattern Analysis and Machine Intelligence ,32(8):1392–1405, 2010.Victor Lempitsky, Andrea Vedaldi, and Andrew Zisserman. A Pylon Model for Semantic Segmen-tation. In Neural Information Processing Systems , number 228180, pp. 1–9, 2011.Hoifung Poon and Pedro Domingos. Sum-product networks: A new deep architecture. In Proceed-ings of the 27th Conference on Uncertainty in Artificial Intelligence , pp. 337–346. 
AUAI Press,2011.Chris Russell, Lubor Ladick ́y, Pushmeet Kohli, and Philip H.S. Torr. Exact and Approximate In-ference in Associative Hierarchical Networks using Graph Cuts. The 26th Conference on Uncer-tainty in Artificial Intelligence , pp. 1–8, 2010.Abhishek Sharma, Oncel Tuzel, and Ming-Yu Liu. Recursive Context Propagation Network forSemantic Scene Labeling. In Advances in Neural Information Processing Systems , pp. 2447–2455, 2014.Jamie Shotton, John Winn, Carsten Rother, and Antonio Criminisi. TextonBoost: Joint Appear-ance, Shape and Conext Modeling for Muli-class object Recognition and Segmentation. Pro-ceedings European Conference on Computer Vision (ECCV) , 3951(Chapter 1):1–15, 2006. ISSN09205691.Richard Socher, Cliff C. Lin, Chris Manning, and Andrew Y . Ng. Parsing natural scenes and nat-ural language with recursive neural networks. In Proceedings of the 28th International Con-ference on Machine Learning , pp. 129–136, 2011. ISBN 9781450306195. doi: 10.1007/978-3-540-87479-9.Yibiao Zhao and Song-Chun Zhu. Image Parsing via Stochastic Scene Grammar. In Advances inNeural Information Processing Systems , pp. 1–9, 2011.Song-Chun Zhu and David Mumford. A Stochastic Grammar of Images. Foundations and Trendsin Computer Graphics and Vision , 2(4):259–362, 2006. ISSN 1572-2740. doi: 10.1561/0600000018.10Under review as a conference paper at ICLR 2017A P ROOFSProposition 1. The energyE(v;t1;t2;yv)of the fusion of parse trees t1;t2over regionPwith headsymbolsY1;Y2for a production v:X!Y1Y2is submodular.Proof.E(v;t1;t2)is submodular as long as 2vpq(Y1;Y2)t1pq+t2pq, which is true by construc-tion, sincevpq(yvp;yvq)cpq(ycp;ycq)forcany possible descendant of vand for all labelings.Lemma 2. Given a labeling yvwhich fuses parse trees t1;t2intotwith root production v, energyE(t;P) =E(v;t1;t2;yv), and subtree regions P1\P 2=;defined by yv, then any improvementinE(t1;P1)also improves E(t;P)by at least , regardless of any change in E(t1;PnP 1).Proof. Since the optimal fusion can be found exactly, and the energy of the current labeling yvhasimproved by , the optimal fusion will have improved by at least .Proposition 2. Letc(n)be the time complexity of computing a graph cut on npixels andjGjbe thesize of the grammar defining the SSPN, then each iteration of INFER SSPN takes timeO(jGjc(n)).Proof. Letkbe the number of productions per nonterminal symbol and Nbe the nonterminals. Foreach nonterminal, F USE is calledktimes for each region and once for the remainder of the pixels.FUSE itself has complexity O(jPj+c(jPj) =O(c(jPj))when called with region P. However, inINFER SSPN each pixel is processed only once for each symbol because no regions overlap, so theworst-case complexity occurs when each symbol has only one region, and thus the total complexityof each iteration of I NFER SSPN isO(jNjkc(n)) =O(jGjc(n)).Theorem 2. Given a parse (tree) ^tofSover the entire image with energy E(^t), each iteration ofINFER SSPN constructs a parse (tree) tofSover the entire image with energy E(t)E(^t), andsince the minimum energy of an image parse is finite, INFER SSPN will always converge.Proof. We will prove by induction that for all nodes ni2^twith corresponding subtree ^ti, regionPi, production vi:X!Y1Y2and child subtrees ^t1;^t2, thatE(ti)E(^ti)after one iteration forallti=T[vi]\Pi. Since this holds for every production of Sover the image, this proves the claim.Base case. 
When ^tiis the subtree with region Piand production vi:X!Ycontaining only asingle terminal child, then by definition ti=T[vi]\Pi=^tibecause terminal parses do not changegiven the same region. Thus, E(ti) =E(^ti)and the claim holds.Induction step. Letvi:X!Y1Y2be the production for a node in ^tiwith subtrees ^t1;^t2overregionsP1;P2, respectively, such that P1[P 2=PiandP1\P 2=;, and suppose that for allproductions u1jwith headY1and all productions u2kwith headY2and corresponding parse treest1j=T[u1j]\P 1andt2k=T[u2k]\P 2, respectively, that E(t1j)E(^t1j)andE(t2k)E(^t2k).Now, when F USE is called on region P1it will choose the subtrees t1j:j= arg minjE(t1j;P1),andt2k:k= arg minkE(t2k;P2)and fuse these into t0ioverP. However, from Lemma 1, weknow thatticould at the very least simply reuse the labeling yvthat partitionsPintoP1;P2andin doing so return a tree t0iwith energy E(t0i)E(^ti), because each of its subtrees over their sameregions has lower (or equal) energy to those in ^t. Finally, since t0iis computed independently of anyother trees for region Pand then placed into T[vi]as a union of other trees, then ti=T[vi]\P=t0i,and the claim follows.B A DDITIONAL EXPERIMENTAL RESULTS AND DETAILSWe compared I NFER SSPN to running -expansion on a flat pairwise MRF and to max-product be-lief propagation over a multi-level (3-D) pairwise grid MRF. Each label of the flat MRF correspondsto a possible path in the grammar from the start symbol to a production to one of its constituentsymbols, etc, until reaching a terminal. In general, the number of such paths is exponential in theheight of the grammar. The unary terms are the sum of unary terms along the path and the pairwiseterm for a pair of labels is the pairwise term of the first production at which their constituents differ.For any two labels with paths that choose a different production of the same symbol (and have thesame path from the start symbol) we assign infinite cost to enforce the restriction that an object canonly have a single production of it into constituents. Note that after convergence -expansion is11Under review as a conference paper at ICLR 2017guaranteed to be within a constant factor of the global minimum energy (Boykov et al., 2001) andthus serves as a good surrogate for the true global minimum, which is intractable to compute. Themulti-layer MRF is constructed similarly. The number of levels in the MRF is equal to the heightof the DAG corresponding to the grammar used. The labels at a particular level of the MRF areall (production, constituent) pairs that can occur at this height in the grammar. The pairwise termbetween the same pixel in two levels is 0when the parent label’s constituent equals the child label’sproduction head, and 1otherwise. Pairwise terms within a layer are defined as in the flat MRF withinfinite cost for incompatible labels (i.e., two neighboring productions of the same symbol), unlesstwo copies of that nonterminal could be produced at that level by the grammar.All experiments were run on the same computer running an Intel Core i7-5960X with 8 cores and128MB of RAM. 
Each algorithm was limited to a single thread.

[Figure 4 panels: minimum energy (×10^5), total time (s), and accuracy vs. boundary scale factor for BP, α-expansion, and SSPN.]
Figure 4: The (a) best energy, (b) total running time, and (c) resulting semantic segmentation accuracy (mean average pixel accuracy) for belief propagation, α-expansion, and InferSSPN when varying boundary strength. Each data point is the average value over (the same) 10 images. Missing data points indicate that an algorithm ran out of memory (middle and right) or returned infinite energy (left).

[Figure 5 panels: minimum energy (×10^5), total time (s), and accuracy vs. grammar height for BP, α-expansion, and SSPN.]
Figure 5: The (a) best energy, (b) total running time, and (c) resulting semantic segmentation accuracy (mean average pixel accuracy) for belief propagation, α-expansion, and InferSSPN when varying grammar height. Each data point is the average value over (the same) 10 images. Missing data points indicate that an algorithm ran out of memory (middle and right) or returned infinite energy (left). Low accuracies for grammar height 0 are a result of the grammar being insufficiently expressive.

[Figure 6 panels: minimum energy (×10^5), total time (s), and accuracy vs. number of productions per nonterminal for BP, α-expansion, and SSPN.]
Figure 6: The (a) best energy, (b) total running time, and (c) resulting semantic segmentation accuracy (mean average pixel accuracy) for belief propagation, α-expansion, and InferSSPN when varying the number of productions per nonterminal. Each data point is the average value over (the same) 10 images. Missing data points indicate that an algorithm ran out of memory (middle and right) or returned infinite energy (left).
SJQ0axzNg
ryEGFD9gl
ICLR.cc/2017/conference/-/paper410/official/review
{"title": "Interesting idea that needs to be fully developed and evaluated", "rating": "4: Ok but not good enough - rejection", "review": "This paper is about submodular sum-product networks applied to scene understanding. SPNs have shown great success in deep linear models since the work of Poon 2011. The authors propose an extension of the initial SPN model to be submodular, introducing submodular unary and pairwise potentials. The authors propose a new inference algorithm. The authors evaluated their results on the Stanford Background Dataset and compared against multiple baselines.\n\nPros:\n+ New formulation of SPNs\n+ New inference algorithm\n\nCons:\n- The authors did not discuss how the SSPN structure is learned and how the generative process chooses a symbol (operation) at each level.\n- The evaluation is lacking. The authors only showed results on their own approach and baselines, leaving out every other approach. Evaluations could also have been done on BSD for regular image segmentation (hierarchical segmentation).\n\nThe idea is great; however, the paper needs more work to be published. I would also recommend that the authors include more details about their approach and present a full paper with extended experiments and a full learning approach.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Submodular Sum-product Networks for Scene Understanding
["Abram L. Friesen", "Pedro Domingos"]
Sum-product networks (SPNs) are an expressive class of deep probabilistic models in which inference takes time linear in their size, enabling them to be learned effectively. However, for certain challenging problems, such as scene understanding, the corresponding SPN has exponential size and is thus intractable. In this work, we introduce submodular sum-product networks (SSPNs), an extension of SPNs in which sum-node weights are defined by a submodular energy function. SSPNs combine the expressivity and depth of SPNs with the ability to efficiently compute the MAP state of a combinatorial number of labelings afforded by submodular energies. SSPNs for scene understanding can be understood as representing all possible parses of an image over arbitrary region shapes with respect to an image grammar. Despite this complexity, we develop an efficient and convergent algorithm based on graph cuts for computing the (approximate) MAP state of an SSPN, greatly increasing the expressivity of the SPN model class. Empirically, we show exponential improvements in parsing time compared to traditional inference algorithms such as alpha-expansion and belief propagation, while returning comparable minima.
["Computer vision", "Structured prediction"]
https://openreview.net/forum?id=ryEGFD9gl
https://openreview.net/pdf?id=ryEGFD9gl
https://openreview.net/forum?id=ryEGFD9gl&noteId=SJQ0axzNg
Under review as a conference paper at ICLR 2017SUBMODULAR SUM-PRODUCT NETWORKSFOR SCENE UNDERSTANDINGAbram L. Friesen & Pedro DomingosDepartment of Computer Science and EngineeringUniversity of WashingtonSeattle, WA 98195, USAfafriesen,pedrod g@cs.washington.eduABSTRACTSum-product networks (SPNs) are an expressive class of deep probabilisticmodels in which inference takes time linear in their size, enabling them tobe learned effectively. However, for certain challenging problems, such asscene understanding, the corresponding SPN has exponential size and is thusintractable. In this work, we introduce submodular sum-product networks(SSPNs), an extension of SPNs in which sum-node weights are defined by asubmodular energy function. SSPNs combine the expressivity and depth of SPNswith the ability to efficiently compute the MAP state of a combinatorial numberof labelings afforded by submodular energies. SSPNs for scene understandingcan be understood as representing all possible parses of an image over arbitraryregion shapes with respect to an image grammar. Despite this complexity, wedevelop an efficient and convergent algorithm based on graph cuts for computingthe (approximate) MAP state of an SSPN, greatly increasing the expressivity ofthe SPN model class. Empirically, we show exponential improvements in parsingtime compared to traditional inference algorithms such as -expansion and beliefpropagation, while returning comparable minima.1 I NTRODUCTIONSum-product networks (SPNs) (Poon & Domingos, 2011; Gens & Domingos, 2012) are a class ofdeep probabilistic models that consist of many layers of hidden variables and can have unboundedtreewidth. Despite this depth and corresponding expressivity, exact inference in SPNs is guaranteedto take time linear in their size, allowing their structure and parameters to be learned effectivelyfrom data. However, there are still many models for which the corresponding SPN has size expo-nential in the number of variables and is thus intractable. For example, in scene understanding (orsemantic segmentation), the goal is to label each pixel of an image with its semantic class, whichrequires simultaneously detecting, segmenting, and recognizing each object in the scene. Even thesimplest SPN for scene understanding is intractable, as it must represent the exponentially large setof segmentations of the image into its constituent objects.Scene understanding is commonly formulated as a flat Markov (or conditional) random field (MRF)over the pixels or superpixels of an image (e.g., Shotton et al. (2006); Gould et al. (2009)). Inferencein MRFs is intractable in general; however, there exist restrictions of the MRF that enable tractableinference. For pairwise binary MRFs, if the energy of each pairwise term is submodular (alterna-tively, attractive or regular) (Kolmogorov & Zabih, 2004), meaning that each pair of neighboringpixels prefers to have the same label, then the exact MAP labeling of the MRF can be recovered inlow-order polynomial time through the use of a graph cut algorithm1(Greig et al., 1989; Boykov &Kolmogorov, 2004). This result from the binary case has been used to develop a number of power-ful approximate algorithms for the multi-label case (e.g., Komodakis et al. (2007); Lempitsky et al.(2010)), the most well-known of which is -expansion (Boykov et al., 2001), which efficiently re-turns an approximate labeling that is within a constant factor of the true optimum by solving a seriesof binary graph cut problems. 
Unfortunately, pairwise MRFs are insufficiently expressive for com-1Formally, a min-cut/max-flow algorithm(Ahuja et al., 1993) on a graph constructed from the MRF.1Under review as a conference paper at ICLR 2017plex tasks such as scene understanding, as they are unable to model high-level relationships, such asconstituency (part-subpart) or subcategorization (superclass-subclass), between arbitrary regions ofthe image, unless these can be encoded in the labels of the MRF and enforced between pairs of (su-per)pixels. However, this encoding requires a combinatorial number of labels, which is intractable.Instead, higher-level structure is needed to efficiently represent these relationships.In this paper, we present submodular sum-product networks (SSPNs), a novel model that combinesthe expressive power of sum-product networks with the tractable segmentation properties of sub-modular energies. An SSPN is a sum-product network in which the weight of each child of a sumnode corresponds to the energy of a particular labeling of a submodular energy function. Equiva-lently, an SSPN over an image corresponds to an instantiation of all possible parse trees of that imagewith respect to a given image grammar, where the probability distribution over the segmentations ofa production on a particular region is defined by a submodular random field over the pixels in thatregion. Importantly, SSPNs permit objects and regions to take arbitrary shapes , instead of restrict-ing the set of possible shapes as has previously been necessary for tractable inference. By exploitingsubmodularity, we develop a highly-efficient approximate inference algorithm, I NFER SSPN, forcomputing the MAP state of the SSPN (equivalently, the optimal parse of the image). I NFER SSPNis an iterative move-making-style algorithm that provably converges to a local minimum of the en-ergy, reduces to -expansion in the case of a trivial grammar, and has complexity O(jGjc(n))foreach iteration, where c(n)is the complexity of a single graph cut and jGjis the size of the grammar.As with other move-making algorithms, I NFER SSPN converges to a local minimum with respectto an exponentially-large set of neighbors, overcoming many of the main issues of local minima(Boykov et al., 2001). Empirically, we compare I NFER SSPN to belief propagation (BP) on a multi-level MRF and to -expansion on an equivalent flat MRF. We show that I NFER SSPN parses imagesin exponentially less time than both of these while returning energies comparable to -expansion,which is guaranteed to return energies within a constant factor of the true optimum.The literature on using higher-level information for scene understanding is vast. We briefly dis-cuss the most relevant work on hierarchical random fields over multiple labels, image grammars forsegmentation, and neural parsing methods. Hierarchical random field models (e.g., Russell et al.(2010); Lempitsky et al. (2011)) define MRFs with multiple layers of hidden variables and thenperform inference, often using graph cuts to efficiently extract the MAP solution. However, thesemodels are typically restricted to just a few layers and to pre-computed segmentations of the image,and thus do not allow arbitrary region shapes. In addition, they require a combinatorial number oflabels to encode complex grammar structures. 
Previous grammar-based methods for scene under-standing, such as Zhu & Mumford (2006) and Zhao & Zhu (2011), have used MRFs with AND-ORgraphs (Dechter & Mateescu, 2007), but needed to restrict their grammars to a very limited set ofproductions and region shapes in order to perform inference in reasonable time, and are thus muchless expressive than SSPNs. Finally, neural parsing methods such as those in Socher et al. (2011)and Sharma et al. (2014) use recursive neural network architectures over superpixel-based featuresto segment an image; thus, these methods also do not allow arbitrary region shapes. Further, Socheret al. (2011) greedily combine regions to form parse trees, while (Sharma et al., 2014) use randomlygenerated parse trees, whereas inference in SSPNs finds the (approximately) optimal parse tree.2 S UBMODULAR SUM -PRODUCT NETWORKSIn the following, we define submodular sum-product networks (SSPNs) in terms of an image gram-mar because this simplifies the exposition with respect to the structure of the sum-product network(SPN) and because scene understanding is the domain we use to evaluate SSPNs. However, it is notnecessary to define SSPNs in this way, and our results extend to any SPN with sum-node weightsdefined by a random field with submodular potentials. Due to lack of space we refer readers to Gens& Domingos (2012), Poon & Domingos (2011) and Gens & Domingos (2013) for SPN details.With respect to scene understanding, an SSPN defines a generative model of an image and a hierar-chy of regions within that image where each region is labeled with a production (and implicitly bythe head symbol of that production), can have arbitrary shape, and is a subset of the region of eachof its ancestors. An example of an SSPN for parsing a farm scene is shown in Figure 1. Given astarting symbol and the region containing the entire image, the generative process is to first choose aproduction of that symbol into its constituent symbols and then choose a segmentation of the regioninto a set of mutually exclusive and exhaustive subregions, with one subregion per constituent sym-2Under review as a conference paper at ICLR 2017Figure 1: A partial (submodular) sum-product network for parsing an image with respect to the grammarshown. There is a sum node for each nonterminal symbol with a child sum node for each production of thatsymbol. Each sum node for a production has a child product nodefor each possible segmentation of its region.bol. The process then recurses, choosing a production and a segmentation for each subregion givenits symbol. The recursion terminates when one of the constituents is a terminal symbol, at whichpoint the pixels corresponding to that region of the image are generated. This produces a parse treein which each internal node is a pair containing a region and a production of the region, and theleaves are regions of pixels. For each node in a parse tree, the regions of its children are mutuallyexclusive and exhaustive with respect to the parent node’s region. As in a probabilistic context-freegrammar (PCFG) (Jurafsky & Martin, 2000), productions are chosen from a categorical distributionover the productions of the current symbol. 
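To make the production-choice step of this generative process concrete, the sketch below samples a production for the current symbol from a categorical distribution. The Production tuple, the per-production weight w_pc, and the softmax-of-negated-costs parameterization are illustrative assumptions, not details taken from the paper.

    import numpy as np
    from collections import namedtuple

    # Hypothetical encoding of a binary production X -> Y1 Y2 with a scalar choice cost.
    Production = namedtuple("Production", ["head", "constituents", "w_pc"])

    def sample_production(productions_of_symbol, rng=None):
        """Draw one production of the current nonterminal from a categorical
        distribution; the probabilities are assumed here to be a softmax of the
        negated production costs, p(v) proportional to exp(-w_pc)."""
        rng = rng or np.random.default_rng()
        costs = np.array([v.w_pc for v in productions_of_symbol], dtype=float)
        probs = np.exp(-(costs - costs.min()))
        probs /= probs.sum()
        return productions_of_symbol[rng.choice(len(productions_of_symbol), p=probs)]

The segmentation of the chosen production's region would then be drawn from the submodular random field described next.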
Segmentations of a given region, however, are sampledfrom a (submodular) Markov random field (MRF) over the pixels in the region.Formally, let G= (N;;R;S; w)be a non-recursive stochastic grammar, where Nis a finiteset of nonterminal symbols; is a finite set of terminal symbols; Ris a finite set of productionsR=fv:X!Y1Y2:::Y kgwith head symbol X2Nand constituent symbols Yi2N[fori= 1:::k andk >0;S2Nis a distinguished start symbol, meaning that it does not appearon the right-hand side of any production; and ware the weights that parameterize the probabilitydistribution defined by G. For a production v2tin a parse tree t2TG, we denote its regionasPvand its parent and children as pa (v)and ch (v), respectively, where TGis the set of possibleparse trees under the grammar G. The labeling corresponding to the segmentation of the pixelsinPvfor production v:X!Y1:::Y kisyv2YjPvjv, whereYv=fY1;:::;Y kg. The regionof any production v2tis the set of pixels in Ppa(v)whose assigned label is the head of v, i.e.,Pv=fp2P pa(v):ypa(v)p =head(v)g, except for the production of the start symbol, which hasthe entire image as its region. The probability of an image xispw(x) =Pt2TGpw(t;x), wherethe joint probability of parse tree tand the image is the product over all productions in tof theprobability of choosing that production vand then segmenting its region Pvaccording to yv:pw(t;x) =1Zexp(Ew(t;x)) =1Zexp(Xv2tEvw(v;yv;head(v);Pv;x)):Here,Z=Pt2TGexp(Ew(t;x))is the partition function, ware the model parameters, and Eisthe energy function. In the following, we will simplify notation by omitting head (v),Pv,x,w, andsuperscriptvfrom the energy function when they are clear from context. The energy of a productionand its segmentation on the region Pvare given by a pairwise Markov random field (MRF) asE(v;yv) =Pp2Pvvp(yvp;w) +P(p;q)2Evvpq(yvp;yvq;w);wherevpandvpqare the unary andpairwise costs of the segmentation MRF, fyvp:p2Pvgis the labeling defining the segmentation ofthe pixels in the current region, and Evare the edges inPv. Without loss of generality we assumethatEvcontains only one of (p;q)or(q;p), since the two terms can always be combined. Here, vpis the per-pixel data cost and vpqis the boundary term, which penalizes adjacent pixels within thesame region that have different labels. We describe these terms in more detail below. In general,even computing the segmentation for a single production is intractable. In order to permit efficientinference, we require that vpqsatisfies the submodularity condition vpq(Y1;Y1) +vpq(Y2;Y2)vpq(Y1;Y2) +vpq(Y2;Y1)for all productions v:X!Y1Y2once the grammar has been convertedto a grammar in which each production has only two constituents, which is always possible andin the worst case increases the grammar size quadratically (Jurafsky & Martin, 2000; Chomsky,3Under review as a conference paper at ICLR 20171959). We also require for every production v2Rand for every production cthat is a descendantofvin the grammar that vpq(yvp;yvq)cpq(ycp;ycq)for all possible labelings (yvp;yvq;ycp;ycq), whereyvp;yvq2Yvandycp;ycq2Yc. This condition ensures that segmentations for higher-level productionsare submodular, no matter what occurs below them. 
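The submodularity condition just stated can be verified directly for any production's 2x2 table of pairwise costs; the dictionary encoding below is a hypothetical one chosen purely for illustration.

    def pairwise_is_submodular(theta, Y1, Y2):
        """Check theta(Y1,Y1) + theta(Y2,Y2) <= theta(Y1,Y2) + theta(Y2,Y1) for a
        binary production v : X -> Y1 Y2, where theta maps an ordered pair of
        constituent labels to the pairwise cost of an adjacent pixel pair."""
        return (theta[(Y1, Y1)] + theta[(Y2, Y2)]
                <= theta[(Y1, Y2)] + theta[(Y2, Y1)])

    # Example: a Potts-style table (zero cost for agreeing neighbours) is submodular
    # whenever the disagreement costs are non-negative.
    potts = {("Y1", "Y1"): 0.0, ("Y2", "Y2"): 0.0,
             ("Y1", "Y2"): 2.5, ("Y2", "Y1"): 2.5}
    assert pairwise_is_submodular(potts, "Y1", "Y2")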
It also encodes the reasonable assumption thathigher-level abstractions are separated by stronger, shorter boundaries (relative to their size), whilelower-level objects are more likely to be composed of smaller, more intricately-shaped regions.The above model defines a sum-product network containing a sum node for each possible region ofeach nonterminal, a product node for each segmentation of each production of each possible regionof each nonterminal, and a leaf function on the pixels of the image for each possible region of theimage for each terminal symbol. The children of the sum node sfor nonterminal Xswith regionPsare all product nodes rwith a production vr:Xs!Y1:::Y kand regionPvr=Ps. Eachproduct node corresponds to a labeling yvrofPvrand the edge to its parent sum node has weightexp(E(v;yvr;Pvr)). The children of product node rare the sum or leaf nodes with matchingregions that correspond to the constituent nonterminals or terminals of vr, respectively. Since theweights of the edges from a sum node to its children correspond to submodular energy functions,we call this a submodular sum-product network (SSPN).A key benefit of SSPNs in comparison to previous grammar-based approaches is that regions canhave arbitrary shapes and are not restricted to a small class of shapes such as rectangles (Poon &Domingos, 2011; Zhao & Zhu, 2011). This flexibility is important when parsing images, as real-world objects and abstractions can take any shape, but it comes with a combinatorial explosion ofpossible parses. However, by exploiting submodularity, we are able to develop an efficient inferencealgorithm for SSPNs, allowing us to efficiently parse images into a hierarchy of arbitrarily-shapedregions and objects, yielding a very expressive model class. This efficiency is despite the size of theunderlying SSPN, which is in general far too large to explicitly instantiate.2.1 MRF SEGMENTATION DETAILSAs discussed above, the energy of each segmentation of a region for a given production is defined bya submodular MRF E(v;yv) =Pp2Pvvp(yvp;w) +P(p;q)2Evvpq(yvp;yvq;w):The unary terms inE(v;yv)differ depending on whether the label yvpcorresponds to a terminal or nonterminal symbol.For a terminal T2, the unary terms are a linear function of the image features vp(yvp=T;w) =wPCv+w>TUp, wherewPCvis an element of wthat specifies the cost of vrelative to other productionsandUpis a feature vector representing the local appearance of pixel p. In our experiments, Upis theoutput of a deep neural network. For labels corresponding to a nonterminal X2N, the unary termsarevp(yvp=X;w) =wPCv+cp(ycp), wherecis the child production of vin the current parse treethat contains p, such thatp2Pc. This dependence makes inference challenging, because the choiceof children in the parse tree itself depends on the region that is being parsed as X, which dependson the segmentation this unary is being used to compute.The pairwise terms in E(v;yv)are a recursive version of the standard contrast-dependent pairwiseboundary potential (e.g., Shotton et al. (2006)) defined for each production vand each pair of adja-cent pixelsp;qasvpq(yvp;yvq;w) =wBFvexp(1jjBpBqjj2)[yvp6=yvq]+cpq(ycp;ycq;w), whereis half the average image contrast between all adjacent pixels in an image, wBFvis the boundaryfactor that controls the relative cost of this term for each production, Bpis the pairwise per-pixelfeature vector, cis the same as in the unary term above, and []is the indicator function, which hasvalue 1when its argument is true and is 0otherwise. 
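The contrast-dependent boundary term above can be precomputed for every adjacent pixel pair from the per-pixel features B_p (intensities in the experiments). The sketch below assumes the usual contrast-sensitive form exp(-||B_p - B_q||^2 / beta) with beta equal to half the mean squared contrast over adjacent pixels, which is one reading of the partially garbled formula; the function name and 4-connected layout are illustrative assumptions.

    import numpy as np

    def boundary_costs(B, w_bf):
        """Per-edge boundary costs for a production with boundary factor w_bf.
        B is an (H, W) array of per-pixel intensities.  The returned costs apply
        to right and down neighbours and are charged only when the two pixels of
        an edge receive different labels."""
        B = np.asarray(B, dtype=float)
        dx = (B[:, 1:] - B[:, :-1]) ** 2          # horizontal neighbour contrasts
        dy = (B[1:, :] - B[:-1, :]) ** 2          # vertical neighbour contrasts
        beta = 0.5 * np.mean(np.concatenate([dx.ravel(), dy.ravel()]))  # assumed normalisation
        cost_right = w_bf * np.exp(-dx / max(beta, 1e-12))
        cost_down = w_bf * np.exp(-dy / max(beta, 1e-12))
        return cost_right, cost_down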
For each pair of pixels (p;q), only one suchterm will ever be non-zero, because once two pixels are labeled differently at a node in the parsetree, they are placed in separate subtrees and thus never co-occur in any region below the currentnode. In our experiments, Bpare the intensity values for each pixel.3 I NFERENCEScene understanding (or semantic segmentation) requires labeling each pixel of an image with itssemantic class. By constructing a grammar containing a set of nonterminals in one-to-one corre-spondence with the semantic labels and only allowing these symbols to produce terminals, we canrecover the semantic segmentation of an image from a parse tree for this grammar. In the simplestcase, a grammar need contain only one additional production from the start symbol to all othernonterminals. More generally, however, the grammar encodes rich structure about the relationships4Under review as a conference paper at ICLR 2017To improve parse of 1. (re)parse as Y 2. (re)parse as Y given 3. (re)parse as Z 4. (re)parse as Z given 5. fuse with ××××YZABCDfuseX➞YZ×ABCDYZ××CDAB××CBYZZYYYZZEFHGEFGHABCD- confusing part: not clear that X->Y->AB in subregion of LHS figure is just sub-selecting from existing parse of Y->AB over entire region - need to explain clearly what’s happening...DA(a)To improve parse of 1. (re)parse as Y 2. (re)parse as Y given 3. (re)parse as Z 4. (re)parse as Z given 5. fuse with ××××YZABCDfuseX➞YZ×ABCDYZ××CDAB××CBYZZYYYZZEFHGEFGHABCD- confusing part: not clear that X->Y->AB in subregion of LHS figure is just sub-selecting from existing parse of Y->AB over entire region - need to explain clearly what’s happening...DA (b)Figure 2: The two main components of I NFER SSPN: (a) Parsing a region PasX!YZ by fusing twoparses of PasY!ABand asZ!CD, and (b) Improving the parse of PasX!YZby (re)parsing eachof its subregions, taking the union of the new YandZparses of P, and then fusing these new parses.between image regions at various levels of abstraction, including concepts such as composition andsubcategorization. Identifying the relevant structure and relationships for a particular image entailsfinding the best parse of an image xgiven a grammar G(or, equivalently, performing MAP inferencein the corresponding SSPN), i.e., t= arg maxt2TGp(tjx) = arg mint2TGPv2tE(v;yv;x).In PCFGs over sentences (Jurafsky & Martin, 2000), the optimal parse can be recovered exactly intimeO(n3jGj)with the CYK algorithm (Hopcroft & Ullman, 1979), where nis the length of the sen-tence andjGjis the number of productions in the grammar, by iterating over all possible split pointsof the sentence and using dynamic programming to avoid recomputing sub-parses. Unfortunately,for images and other 2-D data types, there are 2npossible segmentations of the data for each binaryproduction, rendering this approach infeasible in general. With an SSPN, however, it is possible toefficiently compute the approximate optimal parse of an image. 
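The bottom-up inference pass described next visits nonterminals in reverse topological order of the grammar's DAG (constituents before heads). A minimal sketch of extracting that order from a hypothetical production encoding is given below; since the grammar is non-recursive, the graph is acyclic by construction.

    from collections import namedtuple

    # Hypothetical encoding of a binary production X -> Y1 Y2 (terminals are plain
    # symbols that never appear as the head of any production).
    Production = namedtuple("Production", ["head", "constituents", "w_pc"])

    def bottom_up_symbol_order(productions, start_symbol):
        """Return the nonterminals reachable from start_symbol so that every
        constituent nonterminal appears before any symbol that produces it."""
        children = {}
        for v in productions:
            children.setdefault(v.head, []).extend(v.constituents)
        order, seen = [], set()

        def visit(symbol):
            if symbol in seen or symbol not in children:   # terminals have no entry
                return
            seen.add(symbol)
            for c in children[symbol]:
                visit(c)
            order.append(symbol)                           # post-order: children first

        visit(start_symbol)
        return order   # ends with start_symbol; iterate it to process symbols bottom-up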
In our algorithm, I NFER SSPN, thisis done by iteratively constructing parses of different regions in a bottom-up fashion.3.1 P ARSE TREE CONSTRUCTIONGiven a production v:X!Y1Y2and two parse trees t1;t2over the same region Pand with headsymbolsY1;Y2, respectively, then for any labeling yv2fY1;Y2gjPjofPwe can construct a thirdparse treetXover regionPwith root production v, labeling yv, and subtrees t01;t02over regionsP1;P2, respectively, such that Pi=fp2P :yvp=Yigandt0i=ti\Pifor eachi, where theintersection of a parse tree and a region t\P is the new parse tree resulting from intersecting Pwith the region at each node in t. Of course, the quality of the resulting parse tree, tX, dependson the particular labeling (segmentation) yvused. Recall that a parse tree ton regionPhas energyE(t;P) =Pv2tE(v;yv;Pv), which can be written as E(t;P) =Pp2Ptp+P(p;q)2Etpq, wheretp=Pv2tvp(yvp)[p2Pv]andtpq=Pv2tvpq(yvp;yvq)[(p;q)2Ev]. This allows us to definethefusion operation, which is a key subroutine in I NFER SSPN. Note that ijis the Kronecker delta.Definition 1. For a production v:X!Y1;Y2and two parse trees t1;t2over regionPwith headsymbolsY1;Y2thentXis the fusion oft1andt2constructed from the minimum energy labelingyv= arg miny2YjPjvE(v;t1;t2;y), whereE(v;t1;t2;y) =Xp2Pt1pypY1+t2pypY2+X(p;q)2Et1pqypY1yqY1+t2pqypY2yqY2+vpq(Y1;Y2)ypY1yqY2:Figure 2a shows an example of fusing two parse trees to create a new parse tree. Although fusionrequires finding the optimal labeling from an exponentially large set, the energy is submodular andcan be efficiently optimized with a single graph cut. All proofs are presented in the appendix.Proposition 1. The energyE(v;t1;t2;yv)of the fusion of parse trees t1;t2over regionPwith headsymbolsY1;Y2for a production v:X!Y1Y2is submodular.Once a parse tree has been constructed, I NFER SSPN then improves that parse tree on subsequentiterations. The following result shows how I NFER SSPN can improve a parse tree while ensuringthat the energy of that parse tree never gets worse.Lemma 1. Given a labeling yvwhich fuses parse trees t1;t2intotwith root production v, energyE(t;P) =E(v;t1;t2;yv), and subtree regions P1\P 2=;defined by yv, then any improvement5Under review as a conference paper at ICLR 2017inE(t1;P1)also improves E(t;P)by at least , regardless of any change in E(t1;PnP 1).Finally, it will be useful to define the union t=t1[t2of two parse trees t1;t2that have the sameproduction at their root but are over disjoint regions P1\P 2=;, as the parse tree twith regionP=P1[P 2and in which all nodes that co-occur in both t1andt2(i.e., have the same path to themfrom the root and have the same production) are merged to form a single node in t. In general, tmay be an inconsistent parse tree, as the same symbol may be parsed as two separate productions, inwhich case we define the energy of the boundary terms between the pixels parsed as these separateproductions to be infinite.3.2 I NFER SSPNPseudocode for our algorithm, I NFER SSPN, is presented in Algorithm 1. I NFER SSPN is an iterativebottom-up algorithm based on graph cuts (Kolmogorov & Zabih, 2004) that provably converges to alocal minimum of the energy function. In its first iteration, I NFER SSPN constructs a parse tree overthe full image for each production in the grammar. The parse of each terminal production is trivial toconstruct and simply labels each pixel as the terminal symbol. 
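Each of the fusion steps used below is a single binary, submodular energy minimisation (Definition 1 and Proposition 1 above), solvable with one s-t min cut. The following is a minimal sketch of that one step, assuming the third-party PyMaxflow (maxflow) package, a flattened hypothetical encoding of the two candidate subtrees (per-pixel unary costs and per-edge pairwise costs), a symmetric split cost theta^v_pq(Y1, Y2), and the source/sink segment convention noted in the comments.

    import numpy as np
    import maxflow   # third-party PyMaxflow package, assumed available

    def fuse_parses(unary_t1, unary_t2, edges):
        """Fuse two parse trees t1 (head Y1) and t2 (head Y2) over the same region by
        minimising the fusion energy of Definition 1 with a single graph cut.
          unary_t1[p], unary_t2[p] : accumulated unary cost of pixel p inside t1 / t2
          edges : iterable of (p, q, t1_pq, t2_pq, theta_v) with the pairwise costs
                  when both pixels stay in t1, both stay in t2, or are split by v
        Returns an int array: 0 where pixel p is assigned to Y1, 1 where it goes to Y2."""
        n = len(unary_t1)
        u0 = np.asarray(unary_t1, dtype=float).copy()   # cost of choosing Y1 (source side)
        u1 = np.asarray(unary_t2, dtype=float).copy()   # cost of choosing Y2 (sink side)

        g = maxflow.Graph[float]()
        nodes = g.add_nodes(n)

        for p, q, t1_pq, t2_pq, theta_v in edges:
            # 2x2 pairwise table: A = t1_pq (Y1,Y1), D = t2_pq (Y2,Y2), B = C = theta_v.
            # Proposition 1's condition 2*theta_v >= t1_pq + t2_pq makes it submodular.
            cap = 2.0 * theta_v - t1_pq - t2_pq
            assert cap >= -1e-9, "pairwise term violates the submodularity condition"
            u1[p] += theta_v - t1_pq        # standard reparameterisation of the table
            u1[q] += t2_pq - theta_v
            g.add_edge(nodes[p], nodes[q], max(cap, 0.0), 0.0)

        for p in range(n):
            shift = min(u0[p], u1[p])       # keep both terminal capacities non-negative
            g.add_tedge(nodes[p], u1[p] - shift, u0[p] - shift)

        g.maxflow()
        # Assumed PyMaxflow convention: get_segment == 0 for the source side (label Y1)
        # and 1 for the sink side (label Y2).
        return np.array([g.get_segment(nodes[p]) for p in range(n)], dtype=int)

Constant terms dropped by the reparameterisation do not change the minimising labeling, so only the returned labels, not the max-flow value, are used.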
The parse for every other productionv:X!Y1Y2is constructed by choosing productions for Y1andY2and fusing their correspondingparse trees to get a parse of the image as X. Since the grammar is non-recursive, we can constructa directed acyclic graph (DAG) containing a node for each symbol and an edge from each symbolto each constituent of each production of that symbol and then traverse this graph from the leaves(terminals) to the root (start symbol), fusing the children of each production of each symbol whenwe visit that symbol’s node. Of course, to fuse parses of Y1andY2into a parse of X, we need tochoose which production of Y1(andY2) to fuse; this is done by simply choosing the production ofY1(andY2) that has the lowest energy over the current region. The best parse of the image, ^t, nowcorresponds to the lowest-energy parse of all productions of the start symbol.Further iterations of I NFER SSPN improve ^tin a flexible manner that allows any of its productionsor labelings to change, while also ensuring that its energy never increases. I NFER SSPN does this byagain computing parses of the full image for each production in the grammar. This time, however,when parsing a symbol X, INFER SSPN independently parses each region of the image that wasparsed as any production of Xin^t(none of these regions will overlap because the grammar is non-recursive) and then parses the remainder of the image given these parses of subregions of the image,meaning that the pixels in these other subregions are instantiated in the MRF but fixed to the labelsthat the subregion parses specify. The parse of the image as Xis then constructed as the union ofthese subregion parses. This procedure ensures that the energy will never increase (see Theorem 1and Lemma 1), but also that any subtree of ^tcan be replaced with another subtree if it results inlower energy. Figure 2b shows a simple example of updating a parse of a region as X!YZ.Further, this (re)parsing of subregions can again be achieved in a single bottom-up pass through thegrammar DAG, resulting in a very efficient algorithm for SSPN inference. This is because each pixelonly appears in at most one subregion for any symbol, and thus only ever needs to be parsed onceper production. See Algorithm 1 for more details.3.3 A NALYSISAs shown in Theorem 1, I NFER SSPN always converges to a local minimum of the energy func-tion. Similar to other graph-cut-based algorithms, such as -expansion (Boykov et al., 2001), I N-FERSSPN explores an exponentially large set of moves at each step, so the returned local minimumis much better than those returned by more local procedures, such as max-product belief propaga-tion. Further, we observe convergence within a few iterations in all experiments, with the majorityof the energy improvement occurring in the first iteration.Theorem 1. Given a parse (tree) ^tofSover the entire image with energy E(^t), each iteration ofINFER SSPN constructs a parse (tree) tofSover the entire image with energy E(t)E(^t)andsince the minimum energy of an image parse is finite, INFER SSPN will always converge.As shown in Proposition 2, each iteration of I NFER SSPN takes time O(jGjc(n)), wherenis thenumber of pixels in the image and c(n)is the complexity of the underlying graph cut algorithmused, which is low-order polynomial in the worst-case but nearly linear-time in practice (Boykov &Kolmogorov, 2004; Boykov et al., 2001).Proposition 2. 
Letc(n)be the time complexity of computing a graph cut on npixels andjGjbe thesize of the grammar defining the SSPN, then each iteration of INFER SSPN takes timeO(jGjc(n)).6Under review as a conference paper at ICLR 2017Algorithm 1 Compute the (approximate) MAP assignment of the SSPN variables (i.e., the produc-tions and labelings) defined by an image and a grammar. This is equivalent to parsing the image.Input: The image x, a non-recursive grammar G= (N;;R;S; w), and (optional) input parse ^t.Output: A parse of the image, t, with energy E(t;x)E(^t;x).1:function INFER SSPN( x;G;^t)2:T;E empty lists of parse trees and energies, respectively, both of length jRj+jj3: foreach terminal Y2do4:T[Y] the trivial parse with all pixels parsed as Y5:E[Y] Pp2xw>YUp6: while the energy of any production of the start symbol Shas not converged do7: foreach symbol X2N, in reverse topological order do //as defined by the DAG of G8: foreach subtree ^tiof^trooted at a production uiwith headXdo9:Pi;yi the region that ^tiis over and its labeling in ^ti //fPigare all disjoint10: foreach production vj:X!Y1Y2do //iterate over all productions of X11: tij;eij FUSE(Pi;yi;vj;T) //parsePiasvjby fusing parses of Y1andY212:PX all pixels that are not in any region Pi13: foreach production vj:X!Y1Y2do //iterate over all productions of X14: yrand a random labeling of PX//use random for initialization15: tX;eX FUSE(PX;yrand;vj;T;([itij)) //parsePXasvjgiven ([itij)16: update lists: T[vj] ([itij)[tXandE[vj] Pieij+eXfor allvjwith headX17: ^t;^e the production of Swith the lowest energy in Eand its energy18: return ^t;^eInput: A regionP, a labeling yofP, a production v:X!Y1Y2, a list of parses T, and anoptional parse tPof pixels not inP, used to set pairwise terms of edges that are leaving P.Output: A parse tree rooted at vover regionPand the energy of that parse tree.1:function FUSE(P;y;v;T;tP)2: foreachYiwithi21;2do3:ui production of YiinTwith lowest energy over fp:yp=YiggiventP4: create submodular energy function E(v;y;P;x)onPfromT[u1],T[u2], andtP5:yv;ev (arg) min yE(v;y;P;x) //label each pixel in PasY1orY2using graph cuts6:tv combineT[u1]andT[u2]according to yvand appendvas the root7: returntv;evNote that a straightforward application of -expansion to image parsing that uses one label for everypossible parse in the grammar requires an exponential number of labels in general.INFER SSPN can be extended to productions with more than two constituents by simply replac-ing the internal graph cut used to fuse subtrees with a multi-label algorithm such as -expansion.INFER SSPN would still converge because each subtree would still never decrease in energy. Analgorithm such as QPBO (Kolmogorov & Rother, 2007) could also be used, which would allow thesubmodularity restriction to be relaxed. Finally, running I NFER SSPN on the grammar containingk1binary productions that results from converting a grammar with a single production on k>2constituents is equivalent to running -expansion on the kconstituents.4 E XPERIMENTSWe evaluated I NFER SSPN by parsing images from the Stanford background dataset (SBD) usinggrammars with generated structure and weights inferred from the pixel labels of the images weparsed. SBD is a standard semantic segmentation dataset containing images with an average size of320240pixels and a total of 8labels. 
The input features we used were from the Deeplab system (Chen et al., 2015; 2016) trained on the same images used for evaluation (note that we are not evaluating learning and thus use the same features for each algorithm and evaluate on the training data in order to separate inference performance from generalization performance). We compared InferSSPN to α-expansion on a flat pairwise MRF and to max-product belief propagation (BP) on a multi-level (3-D) pairwise grid MRF. Details of these models are provided in the appendix. We note that the flat encoding for α-expansion results in a label for each path in the grammar, where there are an exponential number of such paths in the height of the grammar. However, once α-expansion converges, its energy is within a constant factor of the global minimum energy (Boykov et al., 2001) and thus serves as a good surrogate for the true global minimum, which is intractable to compute.

We compared these algorithms by varying three different parameters: boundary strength (strength of pairwise terms), grammar height, and number of productions per nonterminal. Each grammar used for testing contained a start symbol, multiple layers of nonterminals, and a final layer of nonterminals in one-to-one correspondence with the eight terminal symbols, each of which had a single production that produces a region of pixels. The start symbol had one production for each pair of symbols in the layer below it, and the last nonterminal layer (ignoring the nonterminals for the labels) had productions for each pair of labels, distributed uniformly over this last nonterminal layer.

Boundary strength. Increasing the boundary strength of an MRF makes inference more challenging, as individual pixel labels cannot be easily flipped without large side effects. To test this, we constructed a grammar as above with 2 layers of nonterminals (not including the start symbol), each containing 3 nonterminal symbols with 4 binary productions to the next layer. We vary w^BF_v for all v and plot the mean average pixel accuracy returned by each algorithm (the x-axis is log-scale) in Figure 3a. InferSSPN returns parses with almost identical accuracy (and energy) to α-expansion. BP also returns comparable accuracies, but almost always returns invalid parses with infinite energy (if it converges at all) that contain multiple productions of the same object or a production of some symbol Y even though a pixel is labeled as symbol X.

[Figure 3 panels: (a) accuracy vs. boundary scale factor, (b) time (s) vs. grammar height, (c) time (s) vs. number of productions per nonterminal, each for BP, α-expansion, and SSPN.]
Figure 3: The mean average pixel accuracy of the returned solution and total running time for each of belief propagation, α-expansion, and InferSSPN when varying (a) boundary strength, (b) grammar height, and (c) number of productions. Each data point is the average value over (the same) 10 images. Missing data points indicate out of memory errors. Figures 4, 5, and 6 in the appendix show all results for each experiment.

Grammar height. In general, the number of paths in the grammar is exponential in its height, so the height of the grammar controls the complexity of inference and thus the difficulty of parsing images. For this experiment, we set the boundary scale factor to 10 and constructed a grammar with four nonterminals per layer, each with three binary productions to the next layer.
Figure 3b showsthe effect of grammar height on total inference time (to convergence or a maximum number of iter-ations, whichever first occurred). As expected from Proposition 2, the time taken for I NFER SSPNscales linearly with the height of the grammar, which is within a constant factor of the size of thegrammar when all other parameters are fixed. Similarly, inference time for both -expansion and BPscaled exponentially with the height of the grammar because the number of labels for both increasescombinatorially. Again, the energies and corresponding accuracies achieved by I NFER SSPN werenearly identical to those of -expansion (see Figure 5 in the appendix).Productions per nonterminal. The number of paths in the grammar is also directly affected by thenumber of productions per symbol. For this experiment, we increased each pairwise term by a factorof10and constructed a grammar with 2layers of nonterminals, each with 4nonterminal symbols.Figure 3c shows the effect of increasing the number of productions per nonterminal, which againdemonstrates that I NFER SSPN is far more efficient than either -expansion or BP as the complexityof the grammar increases, while still finding comparable solutions (see Figure 6 in the appendix).5 C ONCLUSIONThis paper proposed submodular sum-product networks (SSPNs), a novel extension of sum-productnetworks that can be understood as an instantiation of an image grammar in which all possibleparses of an image over arbitrary shapes are represented. Despite this complexity, we presented8Under review as a conference paper at ICLR 2017INFER SSPN, a move-making algorithm that exploits submodularity in order to find the (approxi-mate) MAP state of an SSPN, which is equivalent to finding the (approximate) optimal parse of animage. Analytically, we showed that I NFER SSPN is both very efficient – each iteration takes timelinear in the size of the grammar and the complexity of one graph cut – and convergent. Empiri-cally, we showed that I NFER SSPN achieves accuracies and energies comparable to -expansion,which is guaranteed to return optima within a constant factor of the global optimum, while takingexponentially less time to do so.We have begun work on learning the structure and parameters of SSPNs from data. This is a particu-larly promising avenue of research because many recent works have demonstrated that learning boththe structure and parameters of sum-product networks from data is feasible and effective, despite thewell-known difficulty of grammar induction. We also plan to apply SSPNs to additional domains,such as activity recognition, social network modeling, and probabilistic knowledge bases.ACKNOWLEDGMENTSAF would like to thank Robert Gens and Rahul Kidambi for useful discussions and insights, andGena Barnabee for assisting with Figure 1 and for feedback on this document. This research waspartly funded by ONR grant N00014-16-1-2697 and AFRL contract FA8750-13-2-0019. The viewsand conclusions contained in this document are those of the authors and should not be interpretedas necessarily representing the official policies, either expressed or implied, of ONR, AFRL, or theUnited States Government.REFERENCESRavindra K. Ahuja, Thomas L. Magnanti, and James B. Orlin. Network flows: theory, algorithmsand applications. Network , 1:864, 1993.Yuri Boykov and Vladimir Kolmogorov. An experimental comparison of min-cut/max-flow algo-rithms for energy minimization in vision. 
Bksi_o-Eg
ryEGFD9gl
ICLR.cc/2017/conference/-/paper410/official/review
{"title": "Good paper, but not a good fit for ICLR", "rating": "4: Ok but not good enough - rejection", "review": "This paper develops Submodular Sum Product Networks (SSPNs) and\nan efficient inference algorithm for approximately computing the\nmost probable labeling of variables in the model. The main\napplication in the paper is on scene parsing. In this context,\nSSPNs define an energy function with a grammar component for\nrepresenting a hierarchy of labels and an MRF for encoding\nsmoothness of labels over space. To perform inference, the\nauthors develop a move-making algorithm, somewhat in the spirit\nof fusion moves (Lempitsky et al., 2010) that repeatedly improves\na solution by considering a large neighborhood of alternative segmentations\nand solving an optimization problem to choose the best neighbor.\nEmpirical results show that the proposed algorithm achieves better\nenergy that belief propagation of alpha expansion and is much faster.\n\nThis is generally a well-executed paper. The model is interesting\nand clearly defined, the algorithm is well presented with proper\nanalysis of the relevant runtimes and guarantees on the\nbehavior. Overall, the algorithm seems effective at minimizing\nthe energy of SSPN models.\n\nHaving said that, I don't think this paper is a great fit for\nICLR. The model is even somewhat to the antithesis of the idea of\nlearning representations, in that a highly structured form of\nenergy function is asserted by the human modeller, and then\ninference is performed. I don't see the connection to learning\nrepresentations. One additional issue is that while the proposed\nalgorithm is faster than alternatives, the times are still on the\norder of 1-287 seconds per image, which means that the\napplicability of this method (as is) to something like training\nConvNets is limited.\n\nFinally, there is no attempt to argue that the model produces\nbetter segmentations than alternative models. The only\nevaluations in the paper are on energy values achieved and on\ntraining data.\n\nSo overall I think this is a good paper that should be published\nat a good machine learning conference, but I don't think ICLR is\nthe right fit.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Submodular Sum-product Networks for Scene Understanding
["Abram L. Friesen", "Pedro Domingos"]
Sum-product networks (SPNs) are an expressive class of deep probabilistic models in which inference takes time linear in their size, enabling them to be learned effectively. However, for certain challenging problems, such as scene understanding, the corresponding SPN has exponential size and is thus intractable. In this work, we introduce submodular sum-product networks (SSPNs), an extension of SPNs in which sum-node weights are defined by a submodular energy function. SSPNs combine the expressivity and depth of SPNs with the ability to efficiently compute the MAP state of a combinatorial number of labelings afforded by submodular energies. SSPNs for scene understanding can be understood as representing all possible parses of an image over arbitrary region shapes with respect to an image grammar. Despite this complexity, we develop an efficient and convergent algorithm based on graph cuts for computing the (approximate) MAP state of an SSPN, greatly increasing the expressivity of the SPN model class. Empirically, we show exponential improvements in parsing time compared to traditional inference algorithms such as alpha-expansion and belief propagation, while returning comparable minima.
["Computer vision", "Structured prediction"]
https://openreview.net/forum?id=ryEGFD9gl
https://openreview.net/pdf?id=ryEGFD9gl
https://openreview.net/forum?id=ryEGFD9gl&noteId=Bksi_o-Eg
Under review as a conference paper at ICLR 2017SUBMODULAR SUM-PRODUCT NETWORKSFOR SCENE UNDERSTANDINGAbram L. Friesen & Pedro DomingosDepartment of Computer Science and EngineeringUniversity of WashingtonSeattle, WA 98195, USAfafriesen,pedrod g@cs.washington.eduABSTRACTSum-product networks (SPNs) are an expressive class of deep probabilisticmodels in which inference takes time linear in their size, enabling them tobe learned effectively. However, for certain challenging problems, such asscene understanding, the corresponding SPN has exponential size and is thusintractable. In this work, we introduce submodular sum-product networks(SSPNs), an extension of SPNs in which sum-node weights are defined by asubmodular energy function. SSPNs combine the expressivity and depth of SPNswith the ability to efficiently compute the MAP state of a combinatorial numberof labelings afforded by submodular energies. SSPNs for scene understandingcan be understood as representing all possible parses of an image over arbitraryregion shapes with respect to an image grammar. Despite this complexity, wedevelop an efficient and convergent algorithm based on graph cuts for computingthe (approximate) MAP state of an SSPN, greatly increasing the expressivity ofthe SPN model class. Empirically, we show exponential improvements in parsingtime compared to traditional inference algorithms such as -expansion and beliefpropagation, while returning comparable minima.1 I NTRODUCTIONSum-product networks (SPNs) (Poon & Domingos, 2011; Gens & Domingos, 2012) are a class ofdeep probabilistic models that consist of many layers of hidden variables and can have unboundedtreewidth. Despite this depth and corresponding expressivity, exact inference in SPNs is guaranteedto take time linear in their size, allowing their structure and parameters to be learned effectivelyfrom data. However, there are still many models for which the corresponding SPN has size expo-nential in the number of variables and is thus intractable. For example, in scene understanding (orsemantic segmentation), the goal is to label each pixel of an image with its semantic class, whichrequires simultaneously detecting, segmenting, and recognizing each object in the scene. Even thesimplest SPN for scene understanding is intractable, as it must represent the exponentially large setof segmentations of the image into its constituent objects.Scene understanding is commonly formulated as a flat Markov (or conditional) random field (MRF)over the pixels or superpixels of an image (e.g., Shotton et al. (2006); Gould et al. (2009)). Inferencein MRFs is intractable in general; however, there exist restrictions of the MRF that enable tractableinference. For pairwise binary MRFs, if the energy of each pairwise term is submodular (alterna-tively, attractive or regular) (Kolmogorov & Zabih, 2004), meaning that each pair of neighboringpixels prefers to have the same label, then the exact MAP labeling of the MRF can be recovered inlow-order polynomial time through the use of a graph cut algorithm1(Greig et al., 1989; Boykov &Kolmogorov, 2004). This result from the binary case has been used to develop a number of power-ful approximate algorithms for the multi-label case (e.g., Komodakis et al. (2007); Lempitsky et al.(2010)), the most well-known of which is -expansion (Boykov et al., 2001), which efficiently re-turns an approximate labeling that is within a constant factor of the true optimum by solving a seriesof binary graph cut problems. 
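To make the graph-cut connection above concrete, here is a minimal sketch (not the authors' implementation) of exact MAP inference for a binary pairwise MRF with non-negative Potts-style pairwise terms, the submodular special case mentioned above, solved by an s-t minimum cut computed with networkx. The node names, unary costs, and pairwise weights are illustrative placeholders.

```python
import networkx as nx

def binary_mrf_map(unary, pairwise):
    """Exact MAP for a binary MRF with energy
       E(y) = sum_p unary[p][y_p] + sum_{(p,q)} w_pq * [y_p != y_q],
       with all w_pq >= 0 (an attractive / submodular model).
       unary: dict p -> (cost of label 0, cost of label 1)
       pairwise: dict (p, q) -> w_pq
    """
    G = nx.DiGraph()
    s, t = "source", "sink"
    for p, (c0, c1) in unary.items():
        G.add_edge(s, p, capacity=c1)  # this edge is cut iff p gets label 1
        G.add_edge(p, t, capacity=c0)  # this edge is cut iff p gets label 0
    for (p, q), w in pairwise.items():
        G.add_edge(p, q, capacity=w)   # cut iff y_p = 0 and y_q = 1
        G.add_edge(q, p, capacity=w)   # cut iff y_q = 0 and y_p = 1
    cut_value, (source_side, _) = nx.minimum_cut(G, s, t)
    labels = {p: 0 if p in source_side else 1 for p in unary}
    return labels, cut_value

# Toy 1x3 "image": the middle pixel weakly prefers label 1, smoothness pulls it to 0.
unary = {"a": (0.0, 5.0), "b": (2.0, 1.0), "c": (0.0, 5.0)}
pairwise = {("a", "b"): 3.0, ("b", "c"): 3.0}
print(binary_mrf_map(unary, pairwise))  # ({'a': 0, 'b': 0, 'c': 0}, 2.0)
```

Because the cut value equals the labeling energy by construction, the minimum cut is exactly the minimum-energy labeling; multi-label methods such as alpha-expansion reduce their moves to repeated instances of this binary problem.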
Unfortunately, pairwise MRFs are insufficiently expressive for com-1Formally, a min-cut/max-flow algorithm(Ahuja et al., 1993) on a graph constructed from the MRF.1Under review as a conference paper at ICLR 2017plex tasks such as scene understanding, as they are unable to model high-level relationships, such asconstituency (part-subpart) or subcategorization (superclass-subclass), between arbitrary regions ofthe image, unless these can be encoded in the labels of the MRF and enforced between pairs of (su-per)pixels. However, this encoding requires a combinatorial number of labels, which is intractable.Instead, higher-level structure is needed to efficiently represent these relationships.In this paper, we present submodular sum-product networks (SSPNs), a novel model that combinesthe expressive power of sum-product networks with the tractable segmentation properties of sub-modular energies. An SSPN is a sum-product network in which the weight of each child of a sumnode corresponds to the energy of a particular labeling of a submodular energy function. Equiva-lently, an SSPN over an image corresponds to an instantiation of all possible parse trees of that imagewith respect to a given image grammar, where the probability distribution over the segmentations ofa production on a particular region is defined by a submodular random field over the pixels in thatregion. Importantly, SSPNs permit objects and regions to take arbitrary shapes , instead of restrict-ing the set of possible shapes as has previously been necessary for tractable inference. By exploitingsubmodularity, we develop a highly-efficient approximate inference algorithm, I NFER SSPN, forcomputing the MAP state of the SSPN (equivalently, the optimal parse of the image). I NFER SSPNis an iterative move-making-style algorithm that provably converges to a local minimum of the en-ergy, reduces to -expansion in the case of a trivial grammar, and has complexity O(jGjc(n))foreach iteration, where c(n)is the complexity of a single graph cut and jGjis the size of the grammar.As with other move-making algorithms, I NFER SSPN converges to a local minimum with respectto an exponentially-large set of neighbors, overcoming many of the main issues of local minima(Boykov et al., 2001). Empirically, we compare I NFER SSPN to belief propagation (BP) on a multi-level MRF and to -expansion on an equivalent flat MRF. We show that I NFER SSPN parses imagesin exponentially less time than both of these while returning energies comparable to -expansion,which is guaranteed to return energies within a constant factor of the true optimum.The literature on using higher-level information for scene understanding is vast. We briefly dis-cuss the most relevant work on hierarchical random fields over multiple labels, image grammars forsegmentation, and neural parsing methods. Hierarchical random field models (e.g., Russell et al.(2010); Lempitsky et al. (2011)) define MRFs with multiple layers of hidden variables and thenperform inference, often using graph cuts to efficiently extract the MAP solution. However, thesemodels are typically restricted to just a few layers and to pre-computed segmentations of the image,and thus do not allow arbitrary region shapes. In addition, they require a combinatorial number oflabels to encode complex grammar structures. 
Previous grammar-based methods for scene under-standing, such as Zhu & Mumford (2006) and Zhao & Zhu (2011), have used MRFs with AND-ORgraphs (Dechter & Mateescu, 2007), but needed to restrict their grammars to a very limited set ofproductions and region shapes in order to perform inference in reasonable time, and are thus muchless expressive than SSPNs. Finally, neural parsing methods such as those in Socher et al. (2011)and Sharma et al. (2014) use recursive neural network architectures over superpixel-based featuresto segment an image; thus, these methods also do not allow arbitrary region shapes. Further, Socheret al. (2011) greedily combine regions to form parse trees, while (Sharma et al., 2014) use randomlygenerated parse trees, whereas inference in SSPNs finds the (approximately) optimal parse tree.2 S UBMODULAR SUM -PRODUCT NETWORKSIn the following, we define submodular sum-product networks (SSPNs) in terms of an image gram-mar because this simplifies the exposition with respect to the structure of the sum-product network(SPN) and because scene understanding is the domain we use to evaluate SSPNs. However, it is notnecessary to define SSPNs in this way, and our results extend to any SPN with sum-node weightsdefined by a random field with submodular potentials. Due to lack of space we refer readers to Gens& Domingos (2012), Poon & Domingos (2011) and Gens & Domingos (2013) for SPN details.With respect to scene understanding, an SSPN defines a generative model of an image and a hierar-chy of regions within that image where each region is labeled with a production (and implicitly bythe head symbol of that production), can have arbitrary shape, and is a subset of the region of eachof its ancestors. An example of an SSPN for parsing a farm scene is shown in Figure 1. Given astarting symbol and the region containing the entire image, the generative process is to first choose aproduction of that symbol into its constituent symbols and then choose a segmentation of the regioninto a set of mutually exclusive and exhaustive subregions, with one subregion per constituent sym-2Under review as a conference paper at ICLR 2017Figure 1: A partial (submodular) sum-product network for parsing an image with respect to the grammarshown. There is a sum node for each nonterminal symbol with a child sum node for each production of thatsymbol. Each sum node for a production has a child product nodefor each possible segmentation of its region.bol. The process then recurses, choosing a production and a segmentation for each subregion givenits symbol. The recursion terminates when one of the constituents is a terminal symbol, at whichpoint the pixels corresponding to that region of the image are generated. This produces a parse treein which each internal node is a pair containing a region and a production of the region, and theleaves are regions of pixels. For each node in a parse tree, the regions of its children are mutuallyexclusive and exhaustive with respect to the parent node’s region. As in a probabilistic context-freegrammar (PCFG) (Jurafsky & Martin, 2000), productions are chosen from a categorical distributionover the productions of the current symbol. 
Segmentations of a given region, however, are sampledfrom a (submodular) Markov random field (MRF) over the pixels in the region.Formally, let G= (N;;R;S; w)be a non-recursive stochastic grammar, where Nis a finiteset of nonterminal symbols; is a finite set of terminal symbols; Ris a finite set of productionsR=fv:X!Y1Y2:::Y kgwith head symbol X2Nand constituent symbols Yi2N[fori= 1:::k andk >0;S2Nis a distinguished start symbol, meaning that it does not appearon the right-hand side of any production; and ware the weights that parameterize the probabilitydistribution defined by G. For a production v2tin a parse tree t2TG, we denote its regionasPvand its parent and children as pa (v)and ch (v), respectively, where TGis the set of possibleparse trees under the grammar G. The labeling corresponding to the segmentation of the pixelsinPvfor production v:X!Y1:::Y kisyv2YjPvjv, whereYv=fY1;:::;Y kg. The regionof any production v2tis the set of pixels in Ppa(v)whose assigned label is the head of v, i.e.,Pv=fp2P pa(v):ypa(v)p =head(v)g, except for the production of the start symbol, which hasthe entire image as its region. The probability of an image xispw(x) =Pt2TGpw(t;x), wherethe joint probability of parse tree tand the image is the product over all productions in tof theprobability of choosing that production vand then segmenting its region Pvaccording to yv:pw(t;x) =1Zexp(Ew(t;x)) =1Zexp(Xv2tEvw(v;yv;head(v);Pv;x)):Here,Z=Pt2TGexp(Ew(t;x))is the partition function, ware the model parameters, and Eisthe energy function. In the following, we will simplify notation by omitting head (v),Pv,x,w, andsuperscriptvfrom the energy function when they are clear from context. The energy of a productionand its segmentation on the region Pvare given by a pairwise Markov random field (MRF) asE(v;yv) =Pp2Pvvp(yvp;w) +P(p;q)2Evvpq(yvp;yvq;w);wherevpandvpqare the unary andpairwise costs of the segmentation MRF, fyvp:p2Pvgis the labeling defining the segmentation ofthe pixels in the current region, and Evare the edges inPv. Without loss of generality we assumethatEvcontains only one of (p;q)or(q;p), since the two terms can always be combined. Here, vpis the per-pixel data cost and vpqis the boundary term, which penalizes adjacent pixels within thesame region that have different labels. We describe these terms in more detail below. In general,even computing the segmentation for a single production is intractable. In order to permit efficientinference, we require that vpqsatisfies the submodularity condition vpq(Y1;Y1) +vpq(Y2;Y2)vpq(Y1;Y2) +vpq(Y2;Y1)for all productions v:X!Y1Y2once the grammar has been convertedto a grammar in which each production has only two constituents, which is always possible andin the worst case increases the grammar size quadratically (Jurafsky & Martin, 2000; Chomsky,3Under review as a conference paper at ICLR 20171959). We also require for every production v2Rand for every production cthat is a descendantofvin the grammar that vpq(yvp;yvq)cpq(ycp;ycq)for all possible labelings (yvp;yvq;ycp;ycq), whereyvp;yvq2Yvandycp;ycq2Yc. This condition ensures that segmentations for higher-level productionsare submodular, no matter what occurs below them. 
It also encodes the reasonable assumption thathigher-level abstractions are separated by stronger, shorter boundaries (relative to their size), whilelower-level objects are more likely to be composed of smaller, more intricately-shaped regions.The above model defines a sum-product network containing a sum node for each possible region ofeach nonterminal, a product node for each segmentation of each production of each possible regionof each nonterminal, and a leaf function on the pixels of the image for each possible region of theimage for each terminal symbol. The children of the sum node sfor nonterminal Xswith regionPsare all product nodes rwith a production vr:Xs!Y1:::Y kand regionPvr=Ps. Eachproduct node corresponds to a labeling yvrofPvrand the edge to its parent sum node has weightexp(E(v;yvr;Pvr)). The children of product node rare the sum or leaf nodes with matchingregions that correspond to the constituent nonterminals or terminals of vr, respectively. Since theweights of the edges from a sum node to its children correspond to submodular energy functions,we call this a submodular sum-product network (SSPN).A key benefit of SSPNs in comparison to previous grammar-based approaches is that regions canhave arbitrary shapes and are not restricted to a small class of shapes such as rectangles (Poon &Domingos, 2011; Zhao & Zhu, 2011). This flexibility is important when parsing images, as real-world objects and abstractions can take any shape, but it comes with a combinatorial explosion ofpossible parses. However, by exploiting submodularity, we are able to develop an efficient inferencealgorithm for SSPNs, allowing us to efficiently parse images into a hierarchy of arbitrarily-shapedregions and objects, yielding a very expressive model class. This efficiency is despite the size of theunderlying SSPN, which is in general far too large to explicitly instantiate.2.1 MRF SEGMENTATION DETAILSAs discussed above, the energy of each segmentation of a region for a given production is defined bya submodular MRF E(v;yv) =Pp2Pvvp(yvp;w) +P(p;q)2Evvpq(yvp;yvq;w):The unary terms inE(v;yv)differ depending on whether the label yvpcorresponds to a terminal or nonterminal symbol.For a terminal T2, the unary terms are a linear function of the image features vp(yvp=T;w) =wPCv+w>TUp, wherewPCvis an element of wthat specifies the cost of vrelative to other productionsandUpis a feature vector representing the local appearance of pixel p. In our experiments, Upis theoutput of a deep neural network. For labels corresponding to a nonterminal X2N, the unary termsarevp(yvp=X;w) =wPCv+cp(ycp), wherecis the child production of vin the current parse treethat contains p, such thatp2Pc. This dependence makes inference challenging, because the choiceof children in the parse tree itself depends on the region that is being parsed as X, which dependson the segmentation this unary is being used to compute.The pairwise terms in E(v;yv)are a recursive version of the standard contrast-dependent pairwiseboundary potential (e.g., Shotton et al. (2006)) defined for each production vand each pair of adja-cent pixelsp;qasvpq(yvp;yvq;w) =wBFvexp(1jjBpBqjj2)[yvp6=yvq]+cpq(ycp;ycq;w), whereis half the average image contrast between all adjacent pixels in an image, wBFvis the boundaryfactor that controls the relative cost of this term for each production, Bpis the pairwise per-pixelfeature vector, cis the same as in the unary term above, and []is the indicator function, which hasvalue 1when its argument is true and is 0otherwise. 
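As a concrete, hypothetical rendering of the potentials just described, the sketch below computes a terminal's unary data cost from per-pixel features and the contrast-dependent boundary cost for a pair of neighbouring pixels, ignoring the recursive child term for brevity. The exact scaling of the squared feature difference by beta, and every variable name, are assumptions rather than the paper's code.

```python
import numpy as np

def terminal_unary_cost(w_pc, w_T, U):
    """Data cost for labeling every pixel with one terminal T.
    w_pc: scalar production cost, w_T: (D,) weights, U: (H, W, D) per-pixel features."""
    return w_pc + U @ w_T                      # (H, W) array of costs

def boundary_cost(w_bf, B_p, B_q, beta, same_label):
    """Contrast-dependent pairwise cost for neighbouring pixels p, q.
    Paid only when the two pixels receive different labels."""
    if same_label:
        return 0.0
    return w_bf * np.exp(-np.sum((B_p - B_q) ** 2) / beta)

# Illustrative call with random stand-ins for the learned weights and features.
rng = np.random.default_rng(0)
U = rng.normal(size=(4, 4, 8))                 # e.g. deep-network features per pixel
costs = terminal_unary_cost(0.5, rng.normal(size=8), U)
pair = boundary_cost(10.0, U[0, 0], U[0, 1], beta=2.0, same_label=False)
print(costs.shape, float(pair))
```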
For each pair of pixels (p;q), only one suchterm will ever be non-zero, because once two pixels are labeled differently at a node in the parsetree, they are placed in separate subtrees and thus never co-occur in any region below the currentnode. In our experiments, Bpare the intensity values for each pixel.3 I NFERENCEScene understanding (or semantic segmentation) requires labeling each pixel of an image with itssemantic class. By constructing a grammar containing a set of nonterminals in one-to-one corre-spondence with the semantic labels and only allowing these symbols to produce terminals, we canrecover the semantic segmentation of an image from a parse tree for this grammar. In the simplestcase, a grammar need contain only one additional production from the start symbol to all othernonterminals. More generally, however, the grammar encodes rich structure about the relationships4Under review as a conference paper at ICLR 2017To improve parse of 1. (re)parse as Y 2. (re)parse as Y given 3. (re)parse as Z 4. (re)parse as Z given 5. fuse with ××××YZABCDfuseX➞YZ×ABCDYZ××CDAB××CBYZZYYYZZEFHGEFGHABCD- confusing part: not clear that X->Y->AB in subregion of LHS figure is just sub-selecting from existing parse of Y->AB over entire region - need to explain clearly what’s happening...DA(a)To improve parse of 1. (re)parse as Y 2. (re)parse as Y given 3. (re)parse as Z 4. (re)parse as Z given 5. fuse with ××××YZABCDfuseX➞YZ×ABCDYZ××CDAB××CBYZZYYYZZEFHGEFGHABCD- confusing part: not clear that X->Y->AB in subregion of LHS figure is just sub-selecting from existing parse of Y->AB over entire region - need to explain clearly what’s happening...DA (b)Figure 2: The two main components of I NFER SSPN: (a) Parsing a region PasX!YZ by fusing twoparses of PasY!ABand asZ!CD, and (b) Improving the parse of PasX!YZby (re)parsing eachof its subregions, taking the union of the new YandZparses of P, and then fusing these new parses.between image regions at various levels of abstraction, including concepts such as composition andsubcategorization. Identifying the relevant structure and relationships for a particular image entailsfinding the best parse of an image xgiven a grammar G(or, equivalently, performing MAP inferencein the corresponding SSPN), i.e., t= arg maxt2TGp(tjx) = arg mint2TGPv2tE(v;yv;x).In PCFGs over sentences (Jurafsky & Martin, 2000), the optimal parse can be recovered exactly intimeO(n3jGj)with the CYK algorithm (Hopcroft & Ullman, 1979), where nis the length of the sen-tence andjGjis the number of productions in the grammar, by iterating over all possible split pointsof the sentence and using dynamic programming to avoid recomputing sub-parses. Unfortunately,for images and other 2-D data types, there are 2npossible segmentations of the data for each binaryproduction, rendering this approach infeasible in general. With an SSPN, however, it is possible toefficiently compute the approximate optimal parse of an image. 
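For contrast with the image case, the following is a minimal Viterbi-style CYK sketch for a 1-D token sequence and a toy grammar in Chomsky normal form, minimizing a sum of rule energies rather than maximizing probabilities; the grammar and costs are invented for illustration. No analogous exact dynamic program exists over arbitrary 2-D regions, which is why InferSSPN resorts to approximate, graph-cut-based parsing of the image.

```python
def cyk_min_energy(tokens, terminal_cost, binary_rules, start):
    """terminal_cost: dict (A, token) -> energy of A producing that token.
       binary_rules : list of (A, B, C, energy) for rules A -> B C.
       Returns the minimal energy of parsing the whole sequence as `start`."""
    n = len(tokens)
    best = {}  # (i, j, A) -> lowest-energy parse of tokens[i:j] with head A
    for i, tok in enumerate(tokens):
        for (A, t), e in terminal_cost.items():
            if t == tok and e < best.get((i, i + 1, A), float("inf")):
                best[(i, i + 1, A)] = e
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for A, B, C, e in binary_rules:
                for k in range(i + 1, j):
                    left, right = best.get((i, k, B)), best.get((k, j, C))
                    if left is not None and right is not None:
                        cand = e + left + right
                        if cand < best.get((i, j, A), float("inf")):
                            best[(i, j, A)] = cand
    return best.get((0, n, start))

# Toy grammar: S -> X Y, X -> 'a', Y -> 'b'
rules = [("S", "X", "Y", 1.0)]
terms = {("X", "a"): 0.5, ("Y", "b"): 0.25}
print(cyk_min_energy(["a", "b"], terms, rules, "S"))  # 1.75
```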
In our algorithm, I NFER SSPN, thisis done by iteratively constructing parses of different regions in a bottom-up fashion.3.1 P ARSE TREE CONSTRUCTIONGiven a production v:X!Y1Y2and two parse trees t1;t2over the same region Pand with headsymbolsY1;Y2, respectively, then for any labeling yv2fY1;Y2gjPjofPwe can construct a thirdparse treetXover regionPwith root production v, labeling yv, and subtrees t01;t02over regionsP1;P2, respectively, such that Pi=fp2P :yvp=Yigandt0i=ti\Pifor eachi, where theintersection of a parse tree and a region t\P is the new parse tree resulting from intersecting Pwith the region at each node in t. Of course, the quality of the resulting parse tree, tX, dependson the particular labeling (segmentation) yvused. Recall that a parse tree ton regionPhas energyE(t;P) =Pv2tE(v;yv;Pv), which can be written as E(t;P) =Pp2Ptp+P(p;q)2Etpq, wheretp=Pv2tvp(yvp)[p2Pv]andtpq=Pv2tvpq(yvp;yvq)[(p;q)2Ev]. This allows us to definethefusion operation, which is a key subroutine in I NFER SSPN. Note that ijis the Kronecker delta.Definition 1. For a production v:X!Y1;Y2and two parse trees t1;t2over regionPwith headsymbolsY1;Y2thentXis the fusion oft1andt2constructed from the minimum energy labelingyv= arg miny2YjPjvE(v;t1;t2;y), whereE(v;t1;t2;y) =Xp2Pt1pypY1+t2pypY2+X(p;q)2Et1pqypY1yqY1+t2pqypY2yqY2+vpq(Y1;Y2)ypY1yqY2:Figure 2a shows an example of fusing two parse trees to create a new parse tree. Although fusionrequires finding the optimal labeling from an exponentially large set, the energy is submodular andcan be efficiently optimized with a single graph cut. All proofs are presented in the appendix.Proposition 1. The energyE(v;t1;t2;yv)of the fusion of parse trees t1;t2over regionPwith headsymbolsY1;Y2for a production v:X!Y1Y2is submodular.Once a parse tree has been constructed, I NFER SSPN then improves that parse tree on subsequentiterations. The following result shows how I NFER SSPN can improve a parse tree while ensuringthat the energy of that parse tree never gets worse.Lemma 1. Given a labeling yvwhich fuses parse trees t1;t2intotwith root production v, energyE(t;P) =E(v;t1;t2;yv), and subtree regions P1\P 2=;defined by yv, then any improvement5Under review as a conference paper at ICLR 2017inE(t1;P1)also improves E(t;P)by at least , regardless of any change in E(t1;PnP 1).Finally, it will be useful to define the union t=t1[t2of two parse trees t1;t2that have the sameproduction at their root but are over disjoint regions P1\P 2=;, as the parse tree twith regionP=P1[P 2and in which all nodes that co-occur in both t1andt2(i.e., have the same path to themfrom the root and have the same production) are merged to form a single node in t. In general, tmay be an inconsistent parse tree, as the same symbol may be parsed as two separate productions, inwhich case we define the energy of the boundary terms between the pixels parsed as these separateproductions to be infinite.3.2 I NFER SSPNPseudocode for our algorithm, I NFER SSPN, is presented in Algorithm 1. I NFER SSPN is an iterativebottom-up algorithm based on graph cuts (Kolmogorov & Zabih, 2004) that provably converges to alocal minimum of the energy function. In its first iteration, I NFER SSPN constructs a parse tree overthe full image for each production in the grammar. The parse of each terminal production is trivial toconstruct and simply labels each pixel as the terminal symbol. 
The parse for every other productionv:X!Y1Y2is constructed by choosing productions for Y1andY2and fusing their correspondingparse trees to get a parse of the image as X. Since the grammar is non-recursive, we can constructa directed acyclic graph (DAG) containing a node for each symbol and an edge from each symbolto each constituent of each production of that symbol and then traverse this graph from the leaves(terminals) to the root (start symbol), fusing the children of each production of each symbol whenwe visit that symbol’s node. Of course, to fuse parses of Y1andY2into a parse of X, we need tochoose which production of Y1(andY2) to fuse; this is done by simply choosing the production ofY1(andY2) that has the lowest energy over the current region. The best parse of the image, ^t, nowcorresponds to the lowest-energy parse of all productions of the start symbol.Further iterations of I NFER SSPN improve ^tin a flexible manner that allows any of its productionsor labelings to change, while also ensuring that its energy never increases. I NFER SSPN does this byagain computing parses of the full image for each production in the grammar. This time, however,when parsing a symbol X, INFER SSPN independently parses each region of the image that wasparsed as any production of Xin^t(none of these regions will overlap because the grammar is non-recursive) and then parses the remainder of the image given these parses of subregions of the image,meaning that the pixels in these other subregions are instantiated in the MRF but fixed to the labelsthat the subregion parses specify. The parse of the image as Xis then constructed as the union ofthese subregion parses. This procedure ensures that the energy will never increase (see Theorem 1and Lemma 1), but also that any subtree of ^tcan be replaced with another subtree if it results inlower energy. Figure 2b shows a simple example of updating a parse of a region as X!YZ.Further, this (re)parsing of subregions can again be achieved in a single bottom-up pass through thegrammar DAG, resulting in a very efficient algorithm for SSPN inference. This is because each pixelonly appears in at most one subregion for any symbol, and thus only ever needs to be parsed onceper production. See Algorithm 1 for more details.3.3 A NALYSISAs shown in Theorem 1, I NFER SSPN always converges to a local minimum of the energy func-tion. Similar to other graph-cut-based algorithms, such as -expansion (Boykov et al., 2001), I N-FERSSPN explores an exponentially large set of moves at each step, so the returned local minimumis much better than those returned by more local procedures, such as max-product belief propaga-tion. Further, we observe convergence within a few iterations in all experiments, with the majorityof the energy improvement occurring in the first iteration.Theorem 1. Given a parse (tree) ^tofSover the entire image with energy E(^t), each iteration ofINFER SSPN constructs a parse (tree) tofSover the entire image with energy E(t)E(^t)andsince the minimum energy of an image parse is finite, INFER SSPN will always converge.As shown in Proposition 2, each iteration of I NFER SSPN takes time O(jGjc(n)), wherenis thenumber of pixels in the image and c(n)is the complexity of the underlying graph cut algorithmused, which is low-order polynomial in the worst-case but nearly linear-time in practice (Boykov &Kolmogorov, 2004; Boykov et al., 2001).Proposition 2. 
Letc(n)be the time complexity of computing a graph cut on npixels andjGjbe thesize of the grammar defining the SSPN, then each iteration of INFER SSPN takes timeO(jGjc(n)).6Under review as a conference paper at ICLR 2017Algorithm 1 Compute the (approximate) MAP assignment of the SSPN variables (i.e., the produc-tions and labelings) defined by an image and a grammar. This is equivalent to parsing the image.Input: The image x, a non-recursive grammar G= (N;;R;S; w), and (optional) input parse ^t.Output: A parse of the image, t, with energy E(t;x)E(^t;x).1:function INFER SSPN( x;G;^t)2:T;E empty lists of parse trees and energies, respectively, both of length jRj+jj3: foreach terminal Y2do4:T[Y] the trivial parse with all pixels parsed as Y5:E[Y] Pp2xw>YUp6: while the energy of any production of the start symbol Shas not converged do7: foreach symbol X2N, in reverse topological order do //as defined by the DAG of G8: foreach subtree ^tiof^trooted at a production uiwith headXdo9:Pi;yi the region that ^tiis over and its labeling in ^ti //fPigare all disjoint10: foreach production vj:X!Y1Y2do //iterate over all productions of X11: tij;eij FUSE(Pi;yi;vj;T) //parsePiasvjby fusing parses of Y1andY212:PX all pixels that are not in any region Pi13: foreach production vj:X!Y1Y2do //iterate over all productions of X14: yrand a random labeling of PX//use random for initialization15: tX;eX FUSE(PX;yrand;vj;T;([itij)) //parsePXasvjgiven ([itij)16: update lists: T[vj] ([itij)[tXandE[vj] Pieij+eXfor allvjwith headX17: ^t;^e the production of Swith the lowest energy in Eand its energy18: return ^t;^eInput: A regionP, a labeling yofP, a production v:X!Y1Y2, a list of parses T, and anoptional parse tPof pixels not inP, used to set pairwise terms of edges that are leaving P.Output: A parse tree rooted at vover regionPand the energy of that parse tree.1:function FUSE(P;y;v;T;tP)2: foreachYiwithi21;2do3:ui production of YiinTwith lowest energy over fp:yp=YiggiventP4: create submodular energy function E(v;y;P;x)onPfromT[u1],T[u2], andtP5:yv;ev (arg) min yE(v;y;P;x) //label each pixel in PasY1orY2using graph cuts6:tv combineT[u1]andT[u2]according to yvand appendvas the root7: returntv;evNote that a straightforward application of -expansion to image parsing that uses one label for everypossible parse in the grammar requires an exponential number of labels in general.INFER SSPN can be extended to productions with more than two constituents by simply replac-ing the internal graph cut used to fuse subtrees with a multi-label algorithm such as -expansion.INFER SSPN would still converge because each subtree would still never decrease in energy. Analgorithm such as QPBO (Kolmogorov & Rother, 2007) could also be used, which would allow thesubmodularity restriction to be relaxed. Finally, running I NFER SSPN on the grammar containingk1binary productions that results from converting a grammar with a single production on k>2constituents is equivalent to running -expansion on the kconstituents.4 E XPERIMENTSWe evaluated I NFER SSPN by parsing images from the Stanford background dataset (SBD) usinggrammars with generated structure and weights inferred from the pixel labels of the images weparsed. SBD is a standard semantic segmentation dataset containing images with an average size of320240pixels and a total of 8labels. 
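Algorithm 1 above leans on the FUSE subroutine. As a rough sketch of what FUSE optimizes, the code below assembles a fusion energy over a toy region and minimizes it by exhaustive search; a real implementation would use the binary graph cut described in Section 3, the parse-tree energies are simplified per-pixel stand-ins, and charging the production's boundary cost whenever the two sides differ is a simplification of Definition 1.

```python
from itertools import product

def fuse_brute_force(pixels, edges, unary1, unary2, pair1, pair2, theta_v):
    """Fuse two candidate parses of the same region for a production X -> Y1 Y2.
    unary1/unary2: per-pixel energy of keeping a pixel from parse t1 (label Y1)
    or parse t2 (label Y2); pair1/pair2: their pairwise energies;
    theta_v: boundary cost the production pays where the two parses meet."""
    best_labels, best_energy = None, float("inf")
    for assignment in product((1, 2), repeat=len(pixels)):
        y = dict(zip(pixels, assignment))
        energy = sum(unary1[p] if y[p] == 1 else unary2[p] for p in pixels)
        for p, q in edges:
            if y[p] == 1 and y[q] == 1:
                energy += pair1[(p, q)]
            elif y[p] == 2 and y[q] == 2:
                energy += pair2[(p, q)]
            else:
                energy += theta_v
        if energy < best_energy:
            best_labels, best_energy = y, energy
    return best_labels, best_energy

# Toy two-pixel region: "p" is cheaper under parse t1, "q" under parse t2.
pixels, edges = ["p", "q"], [("p", "q")]
labels, e = fuse_brute_force(pixels, edges,
                             unary1={"p": 0.0, "q": 4.0}, unary2={"p": 4.0, "q": 0.0},
                             pair1={("p", "q"): 0.5}, pair2={("p", "q"): 0.5},
                             theta_v=1.0)
print(labels, e)  # {'p': 1, 'q': 2} with energy 1.0
```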
The input features we used were from the Deeplab system (Chen et al., 2015; 2016) trained on the same images used for evaluation (note that we are not evaluating learning and thus use the same features for each algorithm and evaluate on the training data in order to separate inference performance from generalization performance). We compared InferSSPN to α-expansion on a flat pairwise MRF and to max-product belief propagation (BP) on a multi-level (3-D) pairwise grid MRF. Details of these models are provided in the appendix. We note that the flat encoding for α-expansion results in a label for each path in the grammar, where there are an exponential number of such paths in the height of the grammar. However, once α-expansion converges, its energy is within a constant factor of the global minimum energy (Boykov et al., 2001) and thus serves as a good surrogate for the true global minimum, which is intractable to compute. We compared these algorithms by varying three different parameters: boundary strength (strength of pairwise terms), grammar height, and number of productions per nonterminal. Each grammar used for testing contained a start symbol, multiple layers of nonterminals, and a final layer of nonterminals in one-to-one correspondence with the eight terminal symbols, each of which had a single production that produces a region of pixels. The start symbol had one production for each pair of symbols in the layer below it, and the last nonterminal layer (ignoring the nonterminals for the labels) had productions for each pair of labels, distributed uniformly over this last nonterminal layer.

Boundary strength. Increasing the boundary strength of an MRF makes inference more challenging, as individual pixel labels cannot be easily flipped without large side effects. To test this, we constructed a grammar as above with 2 layers of nonterminals (not including the start symbol), each containing 3 nonterminal symbols with 4 binary productions to the next layer. We vary w^BF_v for all v and plot the mean average pixel accuracy returned by each algorithm (the x-axis is log-scale) in Figure 3a. InferSSPN returns parses with almost identical accuracy (and energy) to α-expansion. BP also returns comparable accuracies, but almost always returns invalid parses with infinite energy (if it converges at all) that contain multiple productions of the same object or a production of some symbol Y even though a pixel is labeled as symbol X.

Figure 3: The mean average pixel accuracy of the returned solution and total running time for each of belief propagation, α-expansion, and InferSSPN when varying (a) boundary strength, (b) grammar height, and (c) number of productions. Each data point is the average value over (the same) 10 images. Missing data points indicate out of memory errors. Figures 4, 5, and 6 in the appendix show all results for each experiment.

Grammar height. In general, the number of paths in the grammar is exponential in its height, so the height of the grammar controls the complexity of inference and thus the difficulty of parsing images. For this experiment, we set the boundary scale factor to 10 and constructed a grammar with four nonterminals per layer, each with three binary productions to the next layer.
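A sketch of how one might generate layered test grammars of the kind just described (a start symbol, several layers of nonterminals with binary productions into the layer below, and a final layer mapped one-to-one onto the terminal labels). The representation of productions and the way pairs are assigned to heads are assumptions, not the authors' code.

```python
from itertools import combinations, product

def make_layered_grammar(num_layers, symbols_per_layer, prods_per_symbol, terminals):
    """Return (productions, start) where productions is a list of
    (head, (child1, child2)) pairs over string symbol names."""
    layers = [[f"N{d}_{i}" for i in range(symbols_per_layer)] for d in range(num_layers)]
    label_layer = [f"L_{t}" for t in terminals]          # one nonterminal per label
    productions, start = [], "S"
    # Start symbol: one production for each pair of symbols in the first layer.
    for a, b in combinations(layers[0], 2):
        productions.append((start, (a, b)))
    # Each nonterminal gets binary productions into the next layer down.
    for d in range(num_layers):
        below = layers[d + 1] if d + 1 < num_layers else label_layer
        pairs = list(combinations(below, 2)) or list(product(below, repeat=2))
        for i, head in enumerate(layers[d]):
            for k in range(prods_per_symbol):
                productions.append((head, pairs[(i + k) % len(pairs)]))
    # Label nonterminals produce their pixel regions (terminal productions).
    for t, head in zip(terminals, label_layer):
        productions.append((head, (t,)))
    return productions, start

prods, start = make_layered_grammar(2, 3, 4, [f"label{i}" for i in range(8)])
print(len(prods), prods[:3])
```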
Figure 3b showsthe effect of grammar height on total inference time (to convergence or a maximum number of iter-ations, whichever first occurred). As expected from Proposition 2, the time taken for I NFER SSPNscales linearly with the height of the grammar, which is within a constant factor of the size of thegrammar when all other parameters are fixed. Similarly, inference time for both -expansion and BPscaled exponentially with the height of the grammar because the number of labels for both increasescombinatorially. Again, the energies and corresponding accuracies achieved by I NFER SSPN werenearly identical to those of -expansion (see Figure 5 in the appendix).Productions per nonterminal. The number of paths in the grammar is also directly affected by thenumber of productions per symbol. For this experiment, we increased each pairwise term by a factorof10and constructed a grammar with 2layers of nonterminals, each with 4nonterminal symbols.Figure 3c shows the effect of increasing the number of productions per nonterminal, which againdemonstrates that I NFER SSPN is far more efficient than either -expansion or BP as the complexityof the grammar increases, while still finding comparable solutions (see Figure 6 in the appendix).5 C ONCLUSIONThis paper proposed submodular sum-product networks (SSPNs), a novel extension of sum-productnetworks that can be understood as an instantiation of an image grammar in which all possibleparses of an image over arbitrary shapes are represented. Despite this complexity, we presented8Under review as a conference paper at ICLR 2017INFER SSPN, a move-making algorithm that exploits submodularity in order to find the (approxi-mate) MAP state of an SSPN, which is equivalent to finding the (approximate) optimal parse of animage. Analytically, we showed that I NFER SSPN is both very efficient – each iteration takes timelinear in the size of the grammar and the complexity of one graph cut – and convergent. Empiri-cally, we showed that I NFER SSPN achieves accuracies and energies comparable to -expansion,which is guaranteed to return optima within a constant factor of the global optimum, while takingexponentially less time to do so.We have begun work on learning the structure and parameters of SSPNs from data. This is a particu-larly promising avenue of research because many recent works have demonstrated that learning boththe structure and parameters of sum-product networks from data is feasible and effective, despite thewell-known difficulty of grammar induction. We also plan to apply SSPNs to additional domains,such as activity recognition, social network modeling, and probabilistic knowledge bases.ACKNOWLEDGMENTSAF would like to thank Robert Gens and Rahul Kidambi for useful discussions and insights, andGena Barnabee for assisting with Figure 1 and for feedback on this document. This research waspartly funded by ONR grant N00014-16-1-2697 and AFRL contract FA8750-13-2-0019. The viewsand conclusions contained in this document are those of the authors and should not be interpretedas necessarily representing the official policies, either expressed or implied, of ONR, AFRL, or theUnited States Government.REFERENCESRavindra K. Ahuja, Thomas L. Magnanti, and James B. Orlin. Network flows: theory, algorithmsand applications. Network , 1:864, 1993.Yuri Boykov and Vladimir Kolmogorov. An experimental comparison of min-cut/max-flow algo-rithms for energy minimization in vision. 
IEEE Transactions on Pattern Analysis and MachineIntelligence , 26(9):1124–1137, 2004.Yuri Boykov, Olga Veksler, and Ramin Zabih. Fast approximate energy minimization via graph cuts.IEEE Transactions on Pattern Analysis and Machine Intelligence , 23(11):1222–1239, 2001.Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. Se-mantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs. Pro-ceedings of the International Conference on Learning Representations , 2015. URL http://arxiv.org/abs/1412.7062 .Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille.DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution,and Fully Connected CRFs. In ArXiv e-prints , 2016. ISBN 9783901608353. URL http://arxiv.org/abs/1412.7062 .Noam Chomsky. On Certain Formal Properties of Grammars. Information and Control , 2:137–167,1959. ISSN 07745141.Rina Dechter and Robert Mateescu. AND/OR search spaces for graphical models. Artificial intelli-gence , 171:73–106, 2007.Robert Gens and Pedro Domingos. Discriminative learning of sum-product networks. In Advancesin Neural Information Processing Systems , pp. 3239–3247, 2012. ISBN 9781627480031.Robert Gens and Pedro Domingos. Learning the structure of sum-product networks. In Proceedingsof the 30th International Conference on Machine Learning , pp. 873–880, 2013.Stephen Gould, Richard Fulton, and Daphne Koller. Decomposing a scene into geometric and se-mantically consistent regions. In Proceedings of the IEEE International Conference on ComputerVision , pp. 1–8, 2009.9Under review as a conference paper at ICLR 2017D. M. Greig, B.T. Porteous, and A. H. Seheult. Exact maximum a posteriori estimation for binaryimages. Journal of the Royal Statistical Society. Series B (Methodological) , 51(2):271–279, 1989.John Hopcroft and Jeffrey Ullman. Introduction to Automata Theory, Languages, and Computation .Addison-Wesley, Reading MA, 1979.Daniel S. Jurafsky and James H. Martin. Speech and Language Processing: An Introduction to Nat-ural Language Processing, Computational Linguistics, and Speech Recognition . Prentice Hall,2000. ISBN 9780135041963. doi: 10.1162/089120100750105975.Vladimir Kolmogorov and Carsten Rother. Minimizing nonsubmodular functions with graph cuts -a review. IEEE transactions on pattern analysis and machine intelligence , 29(7):1274–9, 2007.ISSN 0162-8828. doi: 10.1109/TPAMI.2007.1031.Vladimir Kolmogorov and Ramin Zabih. What Energy Functions Can Be Minimized via GraphCuts? IEEE Transactions on Pattern Analysis and Machine Intelligence , 26(2):147–159, 2004.ISSN 01628828. doi: 10.1109/TPAMI.2004.1262177.Nikos Komodakis, Georgios Tziritas, and Nikos Paragios. Fast, approximately optimal solutionsfor single and dynamic MRFs. In Proceedings of the IEEE Computer Society Conference onComputer Vision and Pattern Recognition , 2007. ISBN 1424411807. doi: 10.1109/CVPR.2007.383095.Victor Lempitsky, Carsten Rother, Stefan Roth, and Andrew Blake. Fusion Moves for MarkovRandom Field Optimization. IEEE Transactions on Pattern Analysis and Machine Intelligence ,32(8):1392–1405, 2010.Victor Lempitsky, Andrea Vedaldi, and Andrew Zisserman. A Pylon Model for Semantic Segmen-tation. In Neural Information Processing Systems , number 228180, pp. 1–9, 2011.Hoifung Poon and Pedro Domingos. Sum-product networks: A new deep architecture. In Proceed-ings of the 27th Conference on Uncertainty in Artificial Intelligence , pp. 337–346. 
AUAI Press,2011.Chris Russell, Lubor Ladick ́y, Pushmeet Kohli, and Philip H.S. Torr. Exact and Approximate In-ference in Associative Hierarchical Networks using Graph Cuts. The 26th Conference on Uncer-tainty in Artificial Intelligence , pp. 1–8, 2010.Abhishek Sharma, Oncel Tuzel, and Ming-Yu Liu. Recursive Context Propagation Network forSemantic Scene Labeling. In Advances in Neural Information Processing Systems , pp. 2447–2455, 2014.Jamie Shotton, John Winn, Carsten Rother, and Antonio Criminisi. TextonBoost: Joint Appear-ance, Shape and Conext Modeling for Muli-class object Recognition and Segmentation. Pro-ceedings European Conference on Computer Vision (ECCV) , 3951(Chapter 1):1–15, 2006. ISSN09205691.Richard Socher, Cliff C. Lin, Chris Manning, and Andrew Y . Ng. Parsing natural scenes and nat-ural language with recursive neural networks. In Proceedings of the 28th International Con-ference on Machine Learning , pp. 129–136, 2011. ISBN 9781450306195. doi: 10.1007/978-3-540-87479-9.Yibiao Zhao and Song-Chun Zhu. Image Parsing via Stochastic Scene Grammar. In Advances inNeural Information Processing Systems , pp. 1–9, 2011.Song-Chun Zhu and David Mumford. A Stochastic Grammar of Images. Foundations and Trendsin Computer Graphics and Vision , 2(4):259–362, 2006. ISSN 1572-2740. doi: 10.1561/0600000018.10Under review as a conference paper at ICLR 2017A P ROOFSProposition 1. The energyE(v;t1;t2;yv)of the fusion of parse trees t1;t2over regionPwith headsymbolsY1;Y2for a production v:X!Y1Y2is submodular.Proof.E(v;t1;t2)is submodular as long as 2vpq(Y1;Y2)t1pq+t2pq, which is true by construc-tion, sincevpq(yvp;yvq)cpq(ycp;ycq)forcany possible descendant of vand for all labelings.Lemma 2. Given a labeling yvwhich fuses parse trees t1;t2intotwith root production v, energyE(t;P) =E(v;t1;t2;yv), and subtree regions P1\P 2=;defined by yv, then any improvementinE(t1;P1)also improves E(t;P)by at least , regardless of any change in E(t1;PnP 1).Proof. Since the optimal fusion can be found exactly, and the energy of the current labeling yvhasimproved by , the optimal fusion will have improved by at least .Proposition 2. Letc(n)be the time complexity of computing a graph cut on npixels andjGjbe thesize of the grammar defining the SSPN, then each iteration of INFER SSPN takes timeO(jGjc(n)).Proof. Letkbe the number of productions per nonterminal symbol and Nbe the nonterminals. Foreach nonterminal, F USE is calledktimes for each region and once for the remainder of the pixels.FUSE itself has complexity O(jPj+c(jPj) =O(c(jPj))when called with region P. However, inINFER SSPN each pixel is processed only once for each symbol because no regions overlap, so theworst-case complexity occurs when each symbol has only one region, and thus the total complexityof each iteration of I NFER SSPN isO(jNjkc(n)) =O(jGjc(n)).Theorem 2. Given a parse (tree) ^tofSover the entire image with energy E(^t), each iteration ofINFER SSPN constructs a parse (tree) tofSover the entire image with energy E(t)E(^t), andsince the minimum energy of an image parse is finite, INFER SSPN will always converge.Proof. We will prove by induction that for all nodes ni2^twith corresponding subtree ^ti, regionPi, production vi:X!Y1Y2and child subtrees ^t1;^t2, thatE(ti)E(^ti)after one iteration forallti=T[vi]\Pi. Since this holds for every production of Sover the image, this proves the claim.Base case. 
When ^tiis the subtree with region Piand production vi:X!Ycontaining only asingle terminal child, then by definition ti=T[vi]\Pi=^tibecause terminal parses do not changegiven the same region. Thus, E(ti) =E(^ti)and the claim holds.Induction step. Letvi:X!Y1Y2be the production for a node in ^tiwith subtrees ^t1;^t2overregionsP1;P2, respectively, such that P1[P 2=PiandP1\P 2=;, and suppose that for allproductions u1jwith headY1and all productions u2kwith headY2and corresponding parse treest1j=T[u1j]\P 1andt2k=T[u2k]\P 2, respectively, that E(t1j)E(^t1j)andE(t2k)E(^t2k).Now, when F USE is called on region P1it will choose the subtrees t1j:j= arg minjE(t1j;P1),andt2k:k= arg minkE(t2k;P2)and fuse these into t0ioverP. However, from Lemma 1, weknow thatticould at the very least simply reuse the labeling yvthat partitionsPintoP1;P2andin doing so return a tree t0iwith energy E(t0i)E(^ti), because each of its subtrees over their sameregions has lower (or equal) energy to those in ^t. Finally, since t0iis computed independently of anyother trees for region Pand then placed into T[vi]as a union of other trees, then ti=T[vi]\P=t0i,and the claim follows.B A DDITIONAL EXPERIMENTAL RESULTS AND DETAILSWe compared I NFER SSPN to running -expansion on a flat pairwise MRF and to max-product be-lief propagation over a multi-level (3-D) pairwise grid MRF. Each label of the flat MRF correspondsto a possible path in the grammar from the start symbol to a production to one of its constituentsymbols, etc, until reaching a terminal. In general, the number of such paths is exponential in theheight of the grammar. The unary terms are the sum of unary terms along the path and the pairwiseterm for a pair of labels is the pairwise term of the first production at which their constituents differ.For any two labels with paths that choose a different production of the same symbol (and have thesame path from the start symbol) we assign infinite cost to enforce the restriction that an object canonly have a single production of it into constituents. Note that after convergence -expansion is11Under review as a conference paper at ICLR 2017guaranteed to be within a constant factor of the global minimum energy (Boykov et al., 2001) andthus serves as a good surrogate for the true global minimum, which is intractable to compute. Themulti-layer MRF is constructed similarly. The number of levels in the MRF is equal to the heightof the DAG corresponding to the grammar used. The labels at a particular level of the MRF areall (production, constituent) pairs that can occur at this height in the grammar. The pairwise termbetween the same pixel in two levels is 0when the parent label’s constituent equals the child label’sproduction head, and 1otherwise. Pairwise terms within a layer are defined as in the flat MRF withinfinite cost for incompatible labels (i.e., two neighboring productions of the same symbol), unlesstwo copies of that nonterminal could be produced at that level by the grammar.All experiments were run on the same computer running an Intel Core i7-5960X with 8 cores and128MB of RAM. 
Each algorithm was limited to a single thread.

Figure 4: The (a) best energy, (b) total running time, and (c) resulting semantic segmentation accuracy (mean average pixel accuracy) for belief propagation, α-expansion, and InferSSPN when varying boundary strength. Each data point is the average value over (the same) 10 images. Missing data points indicate that an algorithm ran out of memory (middle and right) or returned infinite energy (left).

Figure 5: The (a) best energy, (b) total running time, and (c) resulting semantic segmentation accuracy (mean average pixel accuracy) for belief propagation, α-expansion, and InferSSPN when varying grammar height. Each data point is the average value over (the same) 10 images. Missing data points indicate that an algorithm ran out of memory (middle and right) or returned infinite energy (left). Low accuracies for grammar height 0 are a result of the grammar being insufficiently expressive.

Figure 6: The (a) best energy, (b) total running time, and (c) resulting semantic segmentation accuracy (mean average pixel accuracy) for belief propagation, α-expansion, and InferSSPN when varying the number of productions per nonterminal. Each data point is the average value over (the same) 10 images. Missing data points indicate that an algorithm ran out of memory (middle and right) or returned infinite energy (left).
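To illustrate why the flat α-expansion baseline described above blows up, the sketch below enumerates the grammar paths that become labels in the flat MRF, recording only the symbols along each path, which is enough to see the combinatorial growth; the grammar encoding is an assumption and mirrors the simple (head, children) representation used in the earlier sketches.

```python
def flat_labels(productions, start):
    """Enumerate every path start -> constituent -> ... -> terminal.
    productions: list of (head, children) with children a tuple of symbol names;
    a symbol with no productions of its own is treated as a terminal."""
    by_head = {}
    for head, children in productions:
        by_head.setdefault(head, []).append(children)

    def expand(symbol, path):
        if symbol not in by_head:            # terminal: the path is complete
            yield tuple(path + [symbol])
            return
        for children in by_head[symbol]:
            for child in children:
                yield from expand(child, path + [symbol])

    return list(expand(start, []))

# Tiny example: S -> A B, A -> a, B -> b | c gives one label per root-to-terminal path.
prods = [("S", ("A", "B")), ("A", ("a",)), ("B", ("b",)), ("B", ("c",))]
for lab in flat_labels(prods, "S"):
    print(lab)   # ('S', 'A', 'a'), ('S', 'B', 'b'), ('S', 'B', 'c')
```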
HJ6BKc6Qg
ryF7rTqgl
ICLR.cc/2017/conference/-/paper546/official/review
{"title": "Interesting problem, but technically and experimentally not solid enough", "rating": "4: Ok but not good enough - rejection", "review": "This paper proposes to use a linear classifier as the probe for the informativeness of the hidden activations from different neural network layers. The training of the linear classifier does not affect the training of the neural network. \n\nThe paper is well motivated for investigating how much useful information (or how good the representations are) for each layer. The observations in this paper agrees with existing insights, such as, 1) (Fig 5a) too many random layers are harmful. 2) (Fig 5b) training is helpful. 3) (Fig 7) lower layers converge faster than higher layer. 4) (Fig 8) too deep network is hard to train, and skip link can remedy this problem.\n\nHowever, this paper has following problems:\n\n1. It is not sufficiently justified why the linear classifier is a good probe. It is not crystal clear why good intermediate features need to show high linear classification accuracy. More theoretical analysis and/or intuition will be helpful. \n2. This paper does not provide much insight on how to design better networks based on the observations. Designing a better network is also the best way to justify the usefulness of the analysis.\n\nOverall, this paper is tackling an interesting problem, but the technique (the linear classifier as the probe) is not novel and more importantly need to be better justified. Moreover, it is important to show how to design better neural networks using the observations in this paper.\n \n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Understanding intermediate layers using linear classifier probes
["Guillaume Alain", "Yoshua Bengio"]
Neural network models have a reputation for being black boxes. We propose a new method to better understand the roles and dynamics of the intermediate layers. This has direct consequences on the design of such models and it enables the expert to be able to justify certain heuristics (such as adding auxiliary losses in middle layers). Our method uses linear classifiers, referred to as ``probes'', where a probe can only use the hidden units of a given intermediate layer as discriminating features. Moreover, these probes cannot affect the training phase of a model, and they are generally added after training. They allow the user to visualize the state of the model at multiple steps of training. We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems.
["intermediate layers", "models", "probes", "model", "linear classifier probes", "reputation", "black boxes", "new", "better"]
https://openreview.net/forum?id=ryF7rTqgl
https://openreview.net/pdf?id=ryF7rTqgl
https://openreview.net/forum?id=ryF7rTqgl&noteId=HJ6BKc6Qg
Under review as a conference paper at ICLR 2017UNDERSTANDING INTERMEDIATE LAYERSUSING LINEAR CLASSIFIER PROBESGuillaume Alain & Yoshua BengioDepartment of Computer Science and Operations ResearchUniversit ́e de Montr ́ealMontreal, QC. H3C 3J7guillaume.alain.umontreal@gmail.comABSTRACTNeural network models have a reputation for being black boxes. We proposea new method to better understand the roles and dynamics of the intermediatelayers. This has direct consequences on the design of such models and it enablesthe expert to be able to justify certain heuristics (such as adding auxiliary losses inmiddle layers). Our method uses linear classifiers, referred to as “probes”, where aprobe can only use the hidden units of a given intermediate layer as discriminatingfeatures. Moreover, these probes cannot affect the training phase of a model, andthey are generally added after training. They allow the user to visualize the stateof the model at multiple steps of training. We demonstrate how this can be usedto develop a better intuition about models and to diagnose potential problems.1 I NTRODUCTIONThe recent history of deep neural networks features an impressive number of new methods andtechnological improvements to allow the training of deeper and more powerful networks.Despite this, models still have a reputation for being black boxes. Neural networks are criticized fortheir lack of interpretability, which is a tradeoff that we accept because of their amazing performanceon many tasks. Efforts have been made to identify the role played by each layer, but it can be hardto find a meaning to individual layers.There are good arguments to support the claim that the first layers of a convolution network forimage recognition contain filters that are relatively “general”, in the sense that they would workgreat even if we switched to an entirely different dataset of images. The last layers are specific tothe dataset being used, and have to be retrained when using a different dataset. In Yosinski et al.(2014) the authors try to pinpoint the layer at which this transition occurs, but they show that theexact transition is spread across multiple layers.In this paper, we introduce the concept of linear classifier probe , referred to as a “probe” for shortwhen the context is clear. We start from the concept of Shannon entropy , which is the classic way todescribe the information contents of a random variable. We then seek to apply that concept to un-derstand the roles of the intermediate layers of a neural network, to measure how much informationis gained at every layer (answer : technically, none). We argue that it fails to apply, and so wepropose an alternative framework to ask the same question again. This time around, we ask whatwould be the performance of an optimal linear classifier if it was trained on the inputs of a givenlayer from our model. We demonstrate how this powerful concept can be very useful to understandthe dynamics involved in a deep neural network during training and after.2 I NFORMATION THEORYIt was a great discovery when Claude Shannon repurposed the notion of entropy to represent infor-mation contents in a formal way. It laid the foundations for the discipline of information theory. 
We would refer the reader to the first chapters of MacKay (2003) for a good exposition on the matter.

Naturally, we would like to ask some questions about the information contents of the many layers of convolutional neural networks.
What happens when we add more layers?
Where does information flow in a neural network with multiple branches?
Does having multiple auxiliary losses help? (e.g. Inception model)

Intuitively, for a training sample x_i with its associated label y_i, a deep model is getting closer to the correct answer in the higher layers. It starts with the difficult job of classifying x_i, which becomes easier as the higher layers distill x_i into a representation that is easier to classify. One might be tempted to say that this means that the higher layers have more information about the ground truth, but this would be incorrect.

Here there is a mismatch between two different concepts of information. The notion of entropy fails to capture the essence of those questions. This is illustrated in a formal way by the Data Processing Inequality. It states that, for a set of three random variables satisfying the dependency X → Y → Z, we have that I(X; Z) ≤ I(X; Y), where I(X; Y) is the mutual information.

Intuitively, this means that the deterministic transformations performed by the many layers of a deep neural network are not adding more information. In the best case, they preserve information and affect only the representation. But in almost all situations, they lose some information in the process.

If we distill this further, we can think of the serious mismatch between the two following ideas:
Part of the genius of the notion of entropy is that it distills the essence of information to a quantity that does not depend on the particular representation.
A deep neural network is a series of simple deterministic transformations that affect the representation so that the final layer can be fed to a linear classifier.

The former ignores the representation of data, while the latter is an expert in finding good representations. A deaf painter is working on a visual masterpiece to offer to a blind musician who plays music for him.

We need a conceptual tool to analyze neural networks in a way that corresponds better to our intuitive notion of information. The role of data representation is important, but we would also argue that we have to think about this issue as it relates to computational complexity. A linear classifier is basically the simplest form of classifier that is neither trivial nor degenerate.

We define a new notion of information that depends on our ability to classify features of a given layer with an optimal linear classifier. Then we have a conceptual tool to ask new questions and to get potentially interesting answers.

We end this section with a conceptual example in Figure 1. If X contains an image of the savannah, and Y ∈ {0, 1} refers to whether it contains a lion or not, then none of the subsequent layers are truly more informative than X itself. The raw bits from the picture file contain everything.

3 LINEAR CLASSIFIER PROBES

In section 3.1 we present the main concept of this paper. We illustrate the concept in section 3.3. We then present a basic experiment in section 3.4.
In section 3.6 we modify a very deep networkin two different ways and we show how probes allow us to visualize the consequences (sometimesdisastrous) of our design choices.2Under review as a conference paper at ICLR 2017(a) hex dump of picture of a lion(b) same lion in human-readable formatFigure 1: The hex dump represented on the left has more information contents than the imageon the right. Only one of them can be processed by the human brain in time to save their lives.Computational convenience matters. Not just entropy.3.1 P ROBESAs we discussed the previous section, there is indeed a good reason to use many deterministic layers,and it is because they perform useful transformations to the data with the goal of ultimately fitting alinear classifier at the very end . That is the purpose of the many layers. They are a tool to transformdata into a form to be fed to a boring linear classifier.With this in mind, it is natural to ask if that transformation is sudden or progressive, and whether theintermediate layers already have a representation that is immediately useful to a linear classifier. Werefer the reader to Figure 2 for a diagram of probes being inserted in the usual deep neural network.X H0 H1 HK ŶŶ-1 Ŷ0 Ŷ1 ŶKFigure 2: Probes being added to every layer of a model. These additional probes are not supposedto change the training of the model, so we add a little diode symbol through the arrows to indicatethat the gradients will not backpropagate through those connections.The conceptual framework that we propose is one where the intuitive notion of information is equiv-alent with immediate suitability for a linear classifier (instead of being related to entropy).Just to be absolutely clear about what we call a linear classifier , we mean a functionf:H![0;1]Dh7!softmax (Wh+b):whereh2Hare the features of some hidden layer, [0;1]Dis the space of one-hot encodings of theDtarget classes, and (W;b)are the probe weights and biases to be learned so as to minimize theusual cross-entropy loss.Over the course of training a model, the parameters of the model change. However, probes onlymake sense when we refer to a given training step. We can talk about the probes at iteration noftraining, when the model parameters are n.These parameters are not affected by the probes.We prevent backpropagation through the model either by stopping the gradient flow (done withtf.stop gradient in tensorflow), or simply by specifying that the only variables to be updatedare the probe parameters, while we keep nfrozen.3Under review as a conference paper at ICLR 20173.1.1 T RAINING THE PROBESFor the purposes of this paper, we train the probes up to convergence with fixed model parameters,and we report the prediction error on the training set.It is absolutely possible to train the probes simulatenously while training the model itself. This is agood approach if we consider about how long it can take to train the model. However, this createsa potential problem if we optimize the loss of the model more quickly than the loss of the probes.This can present a skewed view of the actual situation that we would have if we trained the probesuntil convergence before updating the model parameters. If we accept this trade off, then we cantrain the probes at the same time as the model.In some situations, the probes might overfit the training set, so we may want to do early stopping onthe validation set and report the performance for the probes on the test set. 
This is what we do insection 3.4 with the simple MNIST convnet.We are still unsure if one of those variations should be preferred in general, and right now they allseem acceptable so long as we interpret the probe measurements properly.Note that training those probes represents a convex optimization problem. In practice, this doesmean guarantee that they are easy to train. However, it is reassuring because it means that probestaken at time ncan be used as initialization for probes at time n+1.We use cross-entropy as probe loss because all models studied here used cross-entropy. Other alter-native losses could be justified in other settings.3.2 P ROBES ON BIFURCATING TOY MODELHere we show a hypothetical example in which a model contains a bifurcation with two paths thatlater recombine. We are interested in knowing whether those two branches are useful, or whetherone is potentially redundant or useless.Xconcatconcatprobe prediction error0.750.600.45ŶFor example, the two different branches might contain convolutional layers with different dimen-sions. They may have a different number of sublayers, or one might represent a skip connection.We assume that the branches are combined through concatenation of their features, so that nothingis lost.For this hypothetical situation, we indicate the probe prediction errors on the graphical model. Theupper path has a prediction error of 0:75, the lower path has 0:60, and their combination has 0:45.Small errors are preferred. Although the upper path has “less information” than the lower path, wecan see here that it is not redundant information, because when we concatenate the features of thetwo branches we get a prediction error of 0:45<0:60.If the concatenated layer had a prediction error of 0:60instead of 0:45, then we could declare thatthe above branch did nothing useful. It may have nonzero weights, but it’s still useless.Naturally, this kind of conclusion might be entirely wrong. It might be the case that the branchabove contains very meaningful features, and they simply happen to be useless to a linear classifierapplied right there. The idea of using linear classification probes to understand the roles of differentbranches is suggested as a heuristic instead of a hard rule. Moreover, if the probes are not optimizedperfectly, the conclusions drawn can be misleading.Note that we are reporting here the prediction errors, and it might be the case that the loss is indeedlower when we concatenate the two branches, but for some reason it could fail to apply to theprediction error.4Under review as a conference paper at ICLR 20170 5 10 15 20 25 30 35linear probe at layer k0.00.10.20.30.40.5optimal prediction errorFigure 3: Toy experiment described in section 3.3,with linearly separable data (two labels), an un-trained MLP with 32 layers, and probes at ev-ery layer. We report the prediction error for ev-ery probe, where 0:50would be the performenceof a coin flip and 0:00would be ideal. Note thatthe layer 0here corresponds to the raw data, andthe probes are indeed able to classify it perfectly.As expected, performance degrades when apply-ing random transformations. If many more layerswere present, it would be hard to imagine how thefinal layer (with the model loss) can get any usefulsignal to backpropagate.3.3 P ROBES ON UNTRAINED MODELWe start with a toy example to illustrate what kind of plots we expect from probes. We use a 32-layer MLP with 128 hidden units. 
All the layers are fully-connected and we use LeakyReLU( 0:5)as activation function.We will run the same experiment 100times, with a different toy dataset each time. The goal is to usea data distribution (X;Y )whereX2R128is drawnN(0;I)and whereY2f 1;1gin linearlyseparable (i.e. super easy to classify with a one-layer neural network). To do this, we just pick aw2R128for each experiment, and let the label ynbe the sign of xTnw.We initialize this 32-layer MLP using glorot normal initialization, we do not perform any trainingon the model, and we add one probe at every layer. We optimize the probes with RMSProp and asufficiently small learning rate.In Figure 3, we show the prediction error rate for every probe, averaged over the 100experiments.The graph includes a probe applied directly on the inputs X, where we naturally have an error ratethat is essentially zero (to be expected by the way we constructed our data), and which serves as akind of sanity check. Given that we have only two possible labels, we also show a dotted horizontalline at 0:50, which is essentially the prediction error that we would get by flipping a coin. We cansee that the prediction error rate climbs up towards 0:50as we go deeper in the MLP (with untrainedparameters).This illustrates the idea that the input signal is getting mangled by the successive layers, so muchthat it becomes rather useless by the time we reach the final layer. We checked the mean activationnorm of the hidden units at layer 32 to be sure that numerical underflow was not the cause for thedegradation. Note that this situation could be avoided by using orthogonal weights.One of the popular explanation for training difficulties in very deep models is that of the explod-ing/vanishing (Hochreiter, 1991; Bengio et al., 1993). Here we would like to offer another comple-mentary explanation, based on the observations from Figure 3. That is, at the beginning of training,the usefulness of layers decays as we go deeper, reaching the point where the deeper layers areutterly useless. The values contained in the last layer are then used in the final softmax classifier,and the loss backpropagates the values of the derivatives. Since that derivative is based on garbageactivations, the backpropagated quantities are also garbage, which means that the weights are allgoing to be updated based on garbage. The weights stay bad, and we fail to train the model. Theauthors like to refer to that phenomenon as garbage forwardprop, garbage backprop , in reference tothe popular concept of garbage in, garbage out in computer science.3.4 P ROBES ON MNIST CONVNETIn this section we run the MNIST convolutional model provided by the tensorflow github repo(tensorflow/models/image/mnist/convolutional.py ) We selected that model forreproducibility and to demonstrate how to easily peek into popular models by using probes.5Under review as a conference paper at ICLR 2017We start by sketching the model in Figure 4. We report the results at the beginning and the end oftraining on Figure 5. One of the interesting dynamics to be observed there is how useful the firstlayers are, despite the fact that the model is completely untrained. 
Random projections can be usefulto classify data, and this has been studied by others (Jarrett et al., 2009).inputimagesconv 5x532 filtersReLUmaxpool2x2conv 5x564 filtersReLUmaxpool2x2matmul ReLU matmuloutputlogitsconvolution layer convolution layer fully-connected layer fully-connected layerFigure 4: This graphical model represents the neural network that we are going to use for MNIST.The model could be written in a more compact form, but we represent it this way to expose all thelocations where we are going to insert probes. The model itself is simply two convolutional layersfollowed by two fully-connected layer (one being the final classifier). However, we insert probes oneach side of each convolution, activation function, and pooling function. This is a bit overzealous,but the small size of the model makes this relatively easy to do.inputconv1_preact conv1_postact conv1_postpoolconv2_preact conv2_postact conv2_postpoolfc1_preact fc1_postactlogits0.000.020.040.060.080.10test prediction error(a) After initialization, no training.inputconv1_preact conv1_postact conv1_postpoolconv2_preact conv2_postact conv2_postpoolfc1_preact fc1_postactlogits0.000.020.040.060.080.10test prediction error (b) After training for 10 epochs.Figure 5: We represent here the test prediction error for each probe, at the beginning and at theend of training. This measurement was obtained through early stopping based on a validation set of104elements. The probes are prevented from overfitting the training data. We can see that, at thebeginning of training (on the left), the randomly-initialized layers were still providing useful trans-formations. The test prediction error goes from 8% to 2% simply using those random features. Thebiggest impact comes from the first ReLU. At the end of training (on the right), the test predictionerror is improving at every layer (with the exception of a minor kink on fc1preact ).3.5 P ROBES ON INCEPTION V3We have performed an experiment using the Inception v3 model on the ImageNet dataset (Szegedyet al., 2015; Russakovsky et al., 2015). This is very similar to what is presented in section 3.4, buton a much larger scale. Due to the challenge presented by this experiment, we were not able to doeverything that we had hoped. We have chosen to put those results in the appendix section A.2.Certain layers of the Inception v3 model have approximately one million features. With 1000classes, this means that some probes can take even more storage space than the whole model it-self. In these cases, one of the creative solutions was to try to use only a random subset of thefeatures. This is discussed in the appendix section A.1.3.6 A UXILIARY LOSS BRANCHES AND SKIP CONNECTIONSHere we investigate two ways to modify a deep model in order to facilitate training. Our goal is notto convince the reader that they should implement these suggestions in their own models. Rather,we want to demonstrate the usefulness of the linear classifier probes as a way to better understandwhat is happening in their deep networks.6Under review as a conference paper at ICLR 2017In both cases we use a toy model with 128 fully-connected layers with 128 hidden units in eachlayer. We train on MNIST, and we use Glorot initialization along with leaky ReLUs.We choose this model because we wanted a pathologically deep model without getting involvedin architecture details. 
The model is pathological in the sense that smaller models can easily bedesigned to achieve better performance, but also in the sense that the model is so deep that it is veryhard to train it with gradient descent methods. From our experiments, the maximal depth wherethings start to break down was depth 64, hence the choice here of using depth 128.In the first scenario, we add one linear classifier at every 16 layers. These classifiers contribute to theloss minimization. They are not probes. This is very similar to what happens in the famous Inceptionmodel where “auxiliary heads” are used (Szegedy et al., 2015). This is illustrated in Figure 6a, andit works nicely. The untrainable model is now made trainable through a judicious use of auxiliaryclassifier losses. The results are shown in Figure 7.In the second scenario, we look at adding a bridge (a skip connection) between layer 0 and layer 64.This means that the input features to layer 64 are obtained by concatenating the output of layer 63with the features of layer 0. The idea here is that we might observe that the model would effectivelytrain a submodel of depth 64, using the skip connection, and shift gears later to use the whole depthof 128 layers. This is illustrated in Figure 6b, and the results are shown in Figure 8. It does not workas expected, but the failure of this approach is visualized very nicely with probes and serves as agreat example of their usefulness in diagnosing problems with models.In both cases, there are two interesting observations that can be made with probes. We refer readerstohttps://youtu.be/x8j4ZHCR2FI for the full videos associated to Figures 5, 7 and 8.Firstly, at the beginning of training, we can see how the raw data is directly useful to perform linearclassification, and how this degrades as more layers are added. In the case of the skip connection inFigure 8, this has the effect of creating two bumps. This is because the layer 64 also has the inputdata as direct parent, so it can fit a probe to that signal.Secondly, the prediction error goes down in all probes during training, but it does so in a way thatstarts with the parents before it spreads to their descendants. This is even more apparent on the fullvideo (instead of the 3 frames provided here). This is a ripple effect, where the prediction error inFigure 6b is visually spreading like a wave from the left of the plot to the right.X H0 H1 ŶŶ-1 Ŷ0 Ŷ1H2 H3Ŷ2 Ŷ3L3H4 H5Ŷ4 Ŷ5H6 H7Ŷ6 Ŷ7L7H8 H9Ŷ8 Ŷ9H10 H11Ŷ10 Ŷ11L11H12 H13Ŷ12 Ŷ13H14 H15Ŷ14 Ŷ15L15(a) Model with 16 layers, one guide at every 4 layers.X H0 H1 H127 ŶŶ-1 Ŷ0 Ŷ1 Ŷ127H64Ŷ64(b) Model with 128 layers. A skip connec-tion goes from the beginning straight to themiddle of the graph.Figure 6: Examples of deep neural network with one probe at every layer (drawn above the graph).We show here the addition of extra components to help training (under the graph, in orange).4 D ISCUSSION AND FUTURE WORKWe have presented more toy models or simple models instead of larger models such as Inceptionv3. In the appendix section A.2 we show an experiment on Inception v3, which proved to be morechallenging than expected. Future work in this domain would involve performing better experimentson a larger scale than small MNIST convnets, but still within a manageable size so we can properlytrain all the probes. This would allow us to produce nice videos showing many training steps insequence.We have received many comments from people who thought about using multi-layer probes. 
Thiscan be seen as a natural extension of the linear classifier probes. One downside to this idea is that welose the convexity property of the probes. It might be worth pursuing in a particular setting, but as of7Under review as a conference paper at ICLR 2017(a) probes after 0 minibatches (b) probes after 500 minibatches (c) probes after 5000 minibatchesFigure 7: A pathologically deep model with 128 layers gets an auxiliary loss added at every 16layers (refer to simplified sketch in Figure 6a if needed). This loss is added to the usual modelloss at the last layer. We fit a probe at every layer to see how well each layer would perform if itsvalues were used as a linear classifier. We plot the train prediction error associated to all the probes,at three different steps. Before adding those auxiliary losses, the model could not successfully betrained through usual gradient descent methods, but with the addition of those intermediate losses,the model is “guided” to achieve certain partial objectives. This leads to a successful training ofthe complete model. The final prediction error is not impressive, but the model was not designed toachieve state-of-the-art performance.(a) probes after 0 minibatches (b) probes after 500 minibatches (c) probes after 2000 minibatchesFigure 8: A pathologically deep model with 128 layers gets a skip connection from layer 0 to layer64 (refer to sketch in Figure 6b if needed). We fit a probe at every layer to see how well eachlayer would perform if its values were used as a linear classifier. We plot the train prediction errorassociated to all the probes, at three different steps. We can see how the model completely ignoreslayers 1-63, even when we train it for a long time. The use of probes allows us to diagnose thatproblem through visual inspection.now we feel that it is premature to start using multi-layer probes. This also leads to the convolutedidea of having a regular probe inside a multi-layer probe.5 C ONCLUSIONIn this paper we introduced the concept of the linear classifier probe as a conceptual tool to betterunderstand the dynamics inside a neural network and the role played by the individual intermediatelayers. We are now able to ask new questions and explore new areas. We have demonstrated howthese probes can be used to identify certain problematic behaviors in models that might not beapparent when we traditionally have access to only the prediction loss and error.We hope that the notions presented in this paper can contribute to the understanding of deep neuralnetworks and guide the intuition of researchers that design them.ACKNOWLEDGMENTSYoshua Bengio is a senior CIFAR Fellow. The authors would like to acknowledge the support of thefollowing agencies for research funding and computing support: NSERC, FQRNT, Calcul Qu ́ebec,Compute Canada, the Canada Research Chairs and CIFAR.8Under review as a conference paper at ICLR 2017REFERENCESY Bengio, Paolo Frasconi, and P Simard. The problem of learning long-term dependencies inrecurrent networks. In Neural Networks, 1993., IEEE International Conference on , pp. 1183–1188. IEEE, 1993.Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen netzen. Diploma, Technische Uni-versit ̈at M ̈unchen , pp. 91, 1991.Kevin Jarrett, Koray Kavukcuoglu, Yann Lecun, et al. What is the best multi-stage architecturefor object recognition? In 2009 IEEE 12th International Conference on Computer Vision , pp.2146–2153. IEEE, 2009.David MacKay. Information Theory, Inference and Learning Algorithms . 
Cambridge UniversityPress, 2003.Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, ZhihengHuang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei.ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision(IJCV) , 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y.Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Du-mitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. InProceedings of the IEEE Conference on Computer Vision and Pattern Recognition , pp. 1–9, 2015.Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deepneural networks? In Advances in neural information processing systems , pp. 3320–3328, 2014.A A PPENDIXA.1 P ROPOSAL : TRAIN PROBES USING ONLY SUBSETS OF FEATURESOne of the challenges to train on the Inception v3 model is that many of the layers have more than200;000features. This is even worse in the first convolution layers before the pooling operations,where we have around a million features. With 1000 output classes, a probe using 200;000featureshas a weight matrix taking almost 1GB of storage.When using stochastic gradient descent, we require space to store the gradients, and if we use mo-mentum this ends up taking three times the memory on the GPU. This is even worse for RMSProp.Normally this might be acceptable for a model of reasonable size, but this turns into almost 4GBoverhead per probe .We do not have to put a probe at every layer. We can also train probes independently. We can putprobe parameters on the CPU instead of the GPU, if necessary. But when the act of training probesincreases the complexity of the experiment beyond a certain point, the researcher might decide thatthey are not worth the trouble.We propose the following solution : for a given probe, use a fixed random subset of features insteadof the whole set of features.With certain assumptions about the independence of the features and their shared role in predictingthe correct class, we can make certain claims about how few features are actually required to assessthe prediction error of a probe. We thank Yaroslav Bulatov for suggesting this approach.We ran an experiment in which we used data XN(0;ID)whereD= 100;000is the number offeatures. We used K= 1000 classes and we generated the ground truth using a matrix Wof shape(D;K ). To obtain the class of a given x, we simply multiply xTWand take the argmax over the Kcomponents of the result.xN(0;ID)y= arg maxk=1::KxTW[:;k]We selected a matrix Wby drawing all its individual coefficients from a univariate gaussian.9Under review as a conference paper at ICLR 2017Instead of using D= 100;000features, we used instead only 1000 features picked at random. Wetrained a linear classifier on those features and, experimentally, it was relatively easy to achieve a4% error rate on our first try. With all the features, we could achieve a 0% error rate, so 4 % mightnot look great. We have to keep in mind that we have K= 1000 classes so random guesses yield anerror rate of 99.9%.This can reduce the storage cost for a probe from 1GB down to 10MB. The former is hard to justify,and the latter is almost negligible.A.2 P ROBES ON INCEPTION V3We are interested in putting linear classifier probes in the popular Inception v3 model,training on the ImageNet dataset. 
We used the tensorflow implementation available online(tensorflow/models/inception/inception ) and ran it on one GPU for 2 weeks.As described in section A.1, one of the challenges is that the number of features can be prohibitivelylarge, and we have to consider taking only a subset of the features. In this particular experiment, wehave had the most success by taking 1000 random features for each probe. This gives certain layersan unfair advantage if they start with 4000 features and we kept 1000 , whereas in other cases theprobe insertion point has 426;320features and we keep 1000 . There was no simple “fair” solution.That being said, 13 out of the 17 probes have more than 100;000features, and 11 of those probeshave more than 200;000features, so things were relatively comparable.We put linear classifier probes at certain strategic layers. We represent this using boxes in thefollowing Figure 9. The prediction error of the probe given by the last layer of each box is illustratedby coloring the box. Red is bad (high prediction error) and green/blue is good (low prediction error).We would have liked to have a video to show the evolution of this during training, but this experimenthad to be scaled back due to the large computational demands. We show here the prediction errorsat three moments of training. These correspond roughly to the beginning of training, then after afew days, and finally after a week.10Under review as a conference paper at ICLR 2017Inception v3auxiliary headmain headminibatches0015150.0 1.0probe training prediction errorauxiliary headmain headminibatches050389auxiliary headmain headminibatches100876auxiliary headmain headminibatches308230Figure 9: Inserting a probe at multiple moments during training the Inception v3 model on theImageNet dataset. We represent here the prediction error evaluated at a random subset of 1000features. As expected, at first all the probes have a 100% prediction error, but as training progresseswe see that the model is getting better. Note that there are 1000 classes, so a prediction error of 50%is much better than a random guess. The auxiliary head, shown under the model, was observed tohave a prediction error that was slightly better than the main head. This is not necessarily a conditionthat will hold at the end of training, but merely an observation.11
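The probe described throughout the OCR text above is just a softmax regression fit on frozen hidden activations, optionally on a random subset of the features as in Appendix A.1. Below is a minimal NumPy sketch, not the authors' code: the activation matrix H, the labels, and all hyperparameters are placeholders, and training on pre-extracted activations stands in for the tf.stop_gradient trick mentioned in the paper, since either way no gradient reaches the probed model.

```python
# Minimal sketch of a linear classifier probe fit on frozen hidden activations.
# Not the authors' code: H, y, and all hyperparameters are placeholders.

import numpy as np

rng = np.random.default_rng(0)

def fit_probe(H, y, n_classes, n_features=None, lr=0.1, steps=2000):
    """Softmax regression on (a random subset of) one layer's features.

    H: (n_samples, dim) hidden activations of the probed layer (frozen).
    n_features: if set, keep only that many randomly chosen columns of H,
                as suggested for very wide layers (Appendix A.1).
    Returns (W, b, feature_idx, train_error).
    """
    if n_features is not None:
        idx = rng.choice(H.shape[1], size=n_features, replace=False)
        H = H[:, idx]
    else:
        idx = np.arange(H.shape[1])

    n, d = H.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                     # one-hot targets

    for _ in range(steps):                       # convex problem: plain GD is fine
        logits = H @ W + b
        logits -= logits.max(axis=1, keepdims=True)
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)
        grad = (P - Y) / n                       # gradient of mean cross-entropy
        W -= lr * (H.T @ grad)
        b -= lr * grad.sum(axis=0)

    err = np.mean((H @ W + b).argmax(axis=1) != y)
    return W, b, idx, err


# Toy usage: fake "layer" activations whose labels come from a linear rule.
H = rng.normal(size=(1000, 256))
w_true = rng.normal(size=(256, 3))
y = (H @ w_true).argmax(axis=1)
_, _, _, err = fit_probe(H, y, n_classes=3, n_features=64)
print("probe training error:", err)
```

On this toy data the same probe fit on all 256 features should reach near-zero training error, mirroring the appendix observation that a modest random subset of features already gives a usable estimate of a layer's linear separability at a fraction of the memory cost.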
HJo4rU7Ng
ryF7rTqgl
ICLR.cc/2017/conference/-/paper546/official/review
{"title": "Final review", "rating": "5: Marginally below acceptance threshold", "review": "This paper proposes a method that attempts to \"understand\" what is happening within a neural network by using linear classifier probes which are inserted at various levels of the network.\n\nI think the idea is nice overall because it allows network designers to better understand the representational power of each layer in the network, but at the same time, this works feels a bit rushed. In particular, the fact that the authors did not provide any results in \"real\" networks, which are used to win competitions makes the results less strong, since researchers who want to created competitive network architectures don't have enough evidence from this work to decides whether they should use it or not.\n\nIdeally, I would encourage the authors to consider continuing this line of research and show how to use the information given by these linear classifiers to construct better network architectures. \n\nUnfortunately, as is, I don't think we have enough novelty to justify accepting this work in the conference. ", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Understanding intermediate layers using linear classifier probes
["Guillaume Alain", "Yoshua Bengio"]
Neural network models have a reputation for being black boxes. We propose a new method to better understand the roles and dynamics of the intermediate layers. This has direct consequences on the design of such models and it enables the expert to be able to justify certain heuristics (such as adding auxiliary losses in middle layers). Our method uses linear classifiers, referred to as ``probes'', where a probe can only use the hidden units of a given intermediate layer as discriminating features. Moreover, these probes cannot affect the training phase of a model, and they are generally added after training. They allow the user to visualize the state of the model at multiple steps of training. We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems.
["intermediate layers", "models", "probes", "model", "linear classifier probes", "reputation", "black boxes", "new", "better"]
https://openreview.net/forum?id=ryF7rTqgl
https://openreview.net/pdf?id=ryF7rTqgl
https://openreview.net/forum?id=ryF7rTqgl&noteId=HJo4rU7Ng
Under review as a conference paper at ICLR 2017UNDERSTANDING INTERMEDIATE LAYERSUSING LINEAR CLASSIFIER PROBESGuillaume Alain & Yoshua BengioDepartment of Computer Science and Operations ResearchUniversit ́e de Montr ́ealMontreal, QC. H3C 3J7guillaume.alain.umontreal@gmail.comABSTRACTNeural network models have a reputation for being black boxes. We proposea new method to better understand the roles and dynamics of the intermediatelayers. This has direct consequences on the design of such models and it enablesthe expert to be able to justify certain heuristics (such as adding auxiliary losses inmiddle layers). Our method uses linear classifiers, referred to as “probes”, where aprobe can only use the hidden units of a given intermediate layer as discriminatingfeatures. Moreover, these probes cannot affect the training phase of a model, andthey are generally added after training. They allow the user to visualize the stateof the model at multiple steps of training. We demonstrate how this can be usedto develop a better intuition about models and to diagnose potential problems.1 I NTRODUCTIONThe recent history of deep neural networks features an impressive number of new methods andtechnological improvements to allow the training of deeper and more powerful networks.Despite this, models still have a reputation for being black boxes. Neural networks are criticized fortheir lack of interpretability, which is a tradeoff that we accept because of their amazing performanceon many tasks. Efforts have been made to identify the role played by each layer, but it can be hardto find a meaning to individual layers.There are good arguments to support the claim that the first layers of a convolution network forimage recognition contain filters that are relatively “general”, in the sense that they would workgreat even if we switched to an entirely different dataset of images. The last layers are specific tothe dataset being used, and have to be retrained when using a different dataset. In Yosinski et al.(2014) the authors try to pinpoint the layer at which this transition occurs, but they show that theexact transition is spread across multiple layers.In this paper, we introduce the concept of linear classifier probe , referred to as a “probe” for shortwhen the context is clear. We start from the concept of Shannon entropy , which is the classic way todescribe the information contents of a random variable. We then seek to apply that concept to un-derstand the roles of the intermediate layers of a neural network, to measure how much informationis gained at every layer (answer : technically, none). We argue that it fails to apply, and so wepropose an alternative framework to ask the same question again. This time around, we ask whatwould be the performance of an optimal linear classifier if it was trained on the inputs of a givenlayer from our model. We demonstrate how this powerful concept can be very useful to understandthe dynamics involved in a deep neural network during training and after.2 I NFORMATION THEORYIt was a great discovery when Claude Shannon repurposed the notion of entropy to represent infor-mation contents in a formal way. It laid the foundations for the discipline of information theory. 
Wewould refer the reader to first chapters of MacKay (2003) for a good exposition on the matter.1Under review as a conference paper at ICLR 2017Naturally, we would like to ask some questions about the information contents of the many layersof convolutional neural networks.What happens when we add more layers?Where does information flow in a neural network with multiple branches?Does having multiple auxiliary losses help? (e.g. Inception model)Intuitively, for a training sample xiwith its associated label yi, a deep model is getting closer to thecorrect answer in the higher layers. It starts with the difficult job of classifying xi, which becomeseasier as the higher layers distill xiinto a representation that is easier to classify. One might betempted to say that this means that the higher layers have more information about the ground truth,but this would be incorrect.Here there is a mismatch between two different concepts of information. The notion of entropy failsto capture the essence of those questions. This is illustrated in a formal way by the Data ProcessingInequality . It states that, for a set of three random variables satisfying the dependencyX!Y!Zthen we have thatI(X;Z)I(X;Y)whereI(X;Y )is the mutual information.Intuitively, this means that the deterministic transformations performed by the many layers of adeep neural network are not adding more information. In the best case, they preserve informationand affect only the representation. But in almost all situations, they lose some information in theprocess.If we distill this further, we can think of the serious mismatch between the two following ideas :Part of the genius of the notion of entropy is that is distills the essence of information to aquantity that does not depend on the particular representation.A deep neural network is a series of simple deterministic transformations that affect therepresentation so that the final layer can be fed to a linear classifier.The former ignores the representation of data, while the latter is an expert in finding good represen-tations. A deaf painter is working on a visual masterpiece to offer to a blind musician who playsmusic for him.We need a conceptual tool to analyze neural networks in a way that corresponds better to our intuitivenotion of information. The role of data representation is important, but we would also argue that wehave to think about this issue as it relates to computational complexity. A linear classifier is basicallythe simplest form of classifier that is neither trivial nor degenerate.We define a new notion of information that depends on our ability to classify features of a givenlayer with an optimal linear classifier. Then we have a conceptual tool to ask new questions and toget potentially interesting answers.We end this section with a conceptual example in Figure 1. If Xcontains an image of the savannah,andY2f0;1grefers to whether it contains a lion or not, then none of the subsequent layers aretruly more informative than Xitself. The raw bits from the picture file contain everything.3 L INEAR CLASSIFIER PROBESIn section 3.1 we present the main concept of this paper. We illustrate the concept in section 3.3.We then present a basic experiment in section 3.4. 
In section 3.6 we modify a very deep networkin two different ways and we show how probes allow us to visualize the consequences (sometimesdisastrous) of our design choices.2Under review as a conference paper at ICLR 2017(a) hex dump of picture of a lion(b) same lion in human-readable formatFigure 1: The hex dump represented on the left has more information contents than the imageon the right. Only one of them can be processed by the human brain in time to save their lives.Computational convenience matters. Not just entropy.3.1 P ROBESAs we discussed the previous section, there is indeed a good reason to use many deterministic layers,and it is because they perform useful transformations to the data with the goal of ultimately fitting alinear classifier at the very end . That is the purpose of the many layers. They are a tool to transformdata into a form to be fed to a boring linear classifier.With this in mind, it is natural to ask if that transformation is sudden or progressive, and whether theintermediate layers already have a representation that is immediately useful to a linear classifier. Werefer the reader to Figure 2 for a diagram of probes being inserted in the usual deep neural network.X H0 H1 HK ŶŶ-1 Ŷ0 Ŷ1 ŶKFigure 2: Probes being added to every layer of a model. These additional probes are not supposedto change the training of the model, so we add a little diode symbol through the arrows to indicatethat the gradients will not backpropagate through those connections.The conceptual framework that we propose is one where the intuitive notion of information is equiv-alent with immediate suitability for a linear classifier (instead of being related to entropy).Just to be absolutely clear about what we call a linear classifier , we mean a functionf:H![0;1]Dh7!softmax (Wh+b):whereh2Hare the features of some hidden layer, [0;1]Dis the space of one-hot encodings of theDtarget classes, and (W;b)are the probe weights and biases to be learned so as to minimize theusual cross-entropy loss.Over the course of training a model, the parameters of the model change. However, probes onlymake sense when we refer to a given training step. We can talk about the probes at iteration noftraining, when the model parameters are n.These parameters are not affected by the probes.We prevent backpropagation through the model either by stopping the gradient flow (done withtf.stop gradient in tensorflow), or simply by specifying that the only variables to be updatedare the probe parameters, while we keep nfrozen.3Under review as a conference paper at ICLR 20173.1.1 T RAINING THE PROBESFor the purposes of this paper, we train the probes up to convergence with fixed model parameters,and we report the prediction error on the training set.It is absolutely possible to train the probes simulatenously while training the model itself. This is agood approach if we consider about how long it can take to train the model. However, this createsa potential problem if we optimize the loss of the model more quickly than the loss of the probes.This can present a skewed view of the actual situation that we would have if we trained the probesuntil convergence before updating the model parameters. If we accept this trade off, then we cantrain the probes at the same time as the model.In some situations, the probes might overfit the training set, so we may want to do early stopping onthe validation set and report the performance for the probes on the test set. 
This is what we do insection 3.4 with the simple MNIST convnet.We are still unsure if one of those variations should be preferred in general, and right now they allseem acceptable so long as we interpret the probe measurements properly.Note that training those probes represents a convex optimization problem. In practice, this doesmean guarantee that they are easy to train. However, it is reassuring because it means that probestaken at time ncan be used as initialization for probes at time n+1.We use cross-entropy as probe loss because all models studied here used cross-entropy. Other alter-native losses could be justified in other settings.3.2 P ROBES ON BIFURCATING TOY MODELHere we show a hypothetical example in which a model contains a bifurcation with two paths thatlater recombine. We are interested in knowing whether those two branches are useful, or whetherone is potentially redundant or useless.Xconcatconcatprobe prediction error0.750.600.45ŶFor example, the two different branches might contain convolutional layers with different dimen-sions. They may have a different number of sublayers, or one might represent a skip connection.We assume that the branches are combined through concatenation of their features, so that nothingis lost.For this hypothetical situation, we indicate the probe prediction errors on the graphical model. Theupper path has a prediction error of 0:75, the lower path has 0:60, and their combination has 0:45.Small errors are preferred. Although the upper path has “less information” than the lower path, wecan see here that it is not redundant information, because when we concatenate the features of thetwo branches we get a prediction error of 0:45<0:60.If the concatenated layer had a prediction error of 0:60instead of 0:45, then we could declare thatthe above branch did nothing useful. It may have nonzero weights, but it’s still useless.Naturally, this kind of conclusion might be entirely wrong. It might be the case that the branchabove contains very meaningful features, and they simply happen to be useless to a linear classifierapplied right there. The idea of using linear classification probes to understand the roles of differentbranches is suggested as a heuristic instead of a hard rule. Moreover, if the probes are not optimizedperfectly, the conclusions drawn can be misleading.Note that we are reporting here the prediction errors, and it might be the case that the loss is indeedlower when we concatenate the two branches, but for some reason it could fail to apply to theprediction error.4Under review as a conference paper at ICLR 20170 5 10 15 20 25 30 35linear probe at layer k0.00.10.20.30.40.5optimal prediction errorFigure 3: Toy experiment described in section 3.3,with linearly separable data (two labels), an un-trained MLP with 32 layers, and probes at ev-ery layer. We report the prediction error for ev-ery probe, where 0:50would be the performenceof a coin flip and 0:00would be ideal. Note thatthe layer 0here corresponds to the raw data, andthe probes are indeed able to classify it perfectly.As expected, performance degrades when apply-ing random transformations. If many more layerswere present, it would be hard to imagine how thefinal layer (with the model loss) can get any usefulsignal to backpropagate.3.3 P ROBES ON UNTRAINED MODELWe start with a toy example to illustrate what kind of plots we expect from probes. We use a 32-layer MLP with 128 hidden units. 
SyuIBy6me
ryF7rTqgl
ICLR.cc/2017/conference/-/paper546/official/review
{"title": "Linear predictiveness of intermediate layer activations.", "rating": "4: Ok but not good enough - rejection", "review": "The authors propose a method to investigate the predictiveness of intermediate layer activations. To do so, they propose training linear classifiers and evaluate the error on the test set.\n\nThe paper is well motivated and aims to shed some light onto the progress of model training and hopes to provide insights into deep learning architecture design.\n\nThe two main reasons for why the authors decided to use linear probes seem to be:\n- convexity\n- The last layer in the network is (usually) linear\n\nIn the second to last paragraph of page 4 the authors point out that it could happen that the intermediate features are useless for a linear classifier. This is correct and what I consider the main flaw of the paper. I am missing any motivation as to the usefulness of the suggested analysis to architecture design. In fact, the example with the skip connection (Figure 8) seems to suggest that skip connections shouldn't be used. Doesn't that contradict the recent successes of ResNet?\n\nWhile the results are interesting, they aren't particularly surprising and I am failing to see direct applicability to understanding deep models as the authors suggest.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Understanding intermediate layers using linear classifier probes
["Guillaume Alain", "Yoshua Bengio"]
Neural network models have a reputation for being black boxes. We propose a new method to better understand the roles and dynamics of the intermediate layers. This has direct consequences on the design of such models and it enables the expert to be able to justify certain heuristics (such as adding auxiliary losses in middle layers). Our method uses linear classifiers, referred to as ``probes'', where a probe can only use the hidden units of a given intermediate layer as discriminating features. Moreover, these probes cannot affect the training phase of a model, and they are generally added after training. They allow the user to visualize the state of the model at multiple steps of training. We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems.
["intermediate layers", "models", "probes", "model", "linear classifier probes", "reputation", "black boxes", "new", "better"]
https://openreview.net/forum?id=ryF7rTqgl
https://openreview.net/pdf?id=ryF7rTqgl
https://openreview.net/forum?id=ryF7rTqgl&noteId=SyuIBy6me
UNDERSTANDING INTERMEDIATE LAYERS USING LINEAR CLASSIFIER PROBES
Guillaume Alain & Yoshua Bengio, Department of Computer Science and Operations Research, Université de Montréal, Montreal, QC. H3C 3J7. guillaume.alain.umontreal@gmail.com
ABSTRACT
Neural network models have a reputation for being black boxes. We propose a new method to better understand the roles and dynamics of the intermediate layers. This has direct consequences on the design of such models and it enables the expert to be able to justify certain heuristics (such as adding auxiliary losses in middle layers). Our method uses linear classifiers, referred to as “probes”, where a probe can only use the hidden units of a given intermediate layer as discriminating features. Moreover, these probes cannot affect the training phase of a model, and they are generally added after training. They allow the user to visualize the state of the model at multiple steps of training. We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems.
1 INTRODUCTION
The recent history of deep neural networks features an impressive number of new methods and technological improvements to allow the training of deeper and more powerful networks.
Despite this, models still have a reputation for being black boxes. Neural networks are criticized for their lack of interpretability, which is a tradeoff that we accept because of their amazing performance on many tasks. Efforts have been made to identify the role played by each layer, but it can be hard to find a meaning to individual layers.
There are good arguments to support the claim that the first layers of a convolution network for image recognition contain filters that are relatively “general”, in the sense that they would work great even if we switched to an entirely different dataset of images. The last layers are specific to the dataset being used, and have to be retrained when using a different dataset. In Yosinski et al. (2014) the authors try to pinpoint the layer at which this transition occurs, but they show that the exact transition is spread across multiple layers.
In this paper, we introduce the concept of the linear classifier probe, referred to as a “probe” for short when the context is clear. We start from the concept of Shannon entropy, which is the classic way to describe the information contents of a random variable. We then seek to apply that concept to understand the roles of the intermediate layers of a neural network, to measure how much information is gained at every layer (answer: technically, none). We argue that it fails to apply, and so we propose an alternative framework to ask the same question again. This time around, we ask what would be the performance of an optimal linear classifier if it was trained on the inputs of a given layer from our model. We demonstrate how this powerful concept can be very useful to understand the dynamics involved in a deep neural network during training and after.
2 INFORMATION THEORY
It was a great discovery when Claude Shannon repurposed the notion of entropy to represent information contents in a formal way. It laid the foundations for the discipline of information theory.
We would refer the reader to the first chapters of MacKay (2003) for a good exposition on the matter.
Naturally, we would like to ask some questions about the information contents of the many layers of convolutional neural networks.
- What happens when we add more layers?
- Where does information flow in a neural network with multiple branches?
- Does having multiple auxiliary losses help? (e.g. Inception model)
Intuitively, for a training sample x_i with its associated label y_i, a deep model is getting closer to the correct answer in the higher layers. It starts with the difficult job of classifying x_i, which becomes easier as the higher layers distill x_i into a representation that is easier to classify. One might be tempted to say that this means that the higher layers have more information about the ground truth, but this would be incorrect.
Here there is a mismatch between two different concepts of information. The notion of entropy fails to capture the essence of those questions. This is illustrated in a formal way by the Data Processing Inequality. It states that, for a set of three random variables satisfying the dependency X → Y → Z, then we have that I(X; Z) ≤ I(X; Y), where I(X; Y) is the mutual information.
Intuitively, this means that the deterministic transformations performed by the many layers of a deep neural network are not adding more information. In the best case, they preserve information and affect only the representation. But in almost all situations, they lose some information in the process.
If we distill this further, we can think of the serious mismatch between the two following ideas:
- Part of the genius of the notion of entropy is that it distills the essence of information to a quantity that does not depend on the particular representation.
- A deep neural network is a series of simple deterministic transformations that affect the representation so that the final layer can be fed to a linear classifier.
The former ignores the representation of data, while the latter is an expert in finding good representations. A deaf painter is working on a visual masterpiece to offer to a blind musician who plays music for him.
We need a conceptual tool to analyze neural networks in a way that corresponds better to our intuitive notion of information. The role of data representation is important, but we would also argue that we have to think about this issue as it relates to computational complexity. A linear classifier is basically the simplest form of classifier that is neither trivial nor degenerate.
We define a new notion of information that depends on our ability to classify features of a given layer with an optimal linear classifier. Then we have a conceptual tool to ask new questions and to get potentially interesting answers.
We end this section with a conceptual example in Figure 1. If X contains an image of the savannah, and Y ∈ {0, 1} refers to whether it contains a lion or not, then none of the subsequent layers are truly more informative than X itself. The raw bits from the picture file contain everything.
3 LINEAR CLASSIFIER PROBES
In section 3.1 we present the main concept of this paper. We illustrate the concept in section 3.3. We then present a basic experiment in section 3.4.
In section 3.6 we modify a very deep network in two different ways and we show how probes allow us to visualize the consequences (sometimes disastrous) of our design choices.
(a) hex dump of picture of a lion. (b) same lion in human-readable format. Figure 1: The hex dump represented on the left has more information contents than the image on the right. Only one of them can be processed by the human brain in time to save their lives. Computational convenience matters. Not just entropy.
3.1 PROBES
As we discussed in the previous section, there is indeed a good reason to use many deterministic layers, and it is because they perform useful transformations to the data with the goal of ultimately fitting a linear classifier at the very end. That is the purpose of the many layers. They are a tool to transform data into a form to be fed to a boring linear classifier.
With this in mind, it is natural to ask if that transformation is sudden or progressive, and whether the intermediate layers already have a representation that is immediately useful to a linear classifier. We refer the reader to Figure 2 for a diagram of probes being inserted in the usual deep neural network.
[Figure 2 diagram: probes Ŷ_{-1}, Ŷ_0, ..., Ŷ_K attached to X, H_0, ..., H_K.] Figure 2: Probes being added to every layer of a model. These additional probes are not supposed to change the training of the model, so we add a little diode symbol through the arrows to indicate that the gradients will not backpropagate through those connections.
The conceptual framework that we propose is one where the intuitive notion of information is equivalent with immediate suitability for a linear classifier (instead of being related to entropy).
Just to be absolutely clear about what we call a linear classifier, we mean a function f : H → [0, 1]^D, h ↦ softmax(Wh + b), where h ∈ H are the features of some hidden layer, [0, 1]^D is the space of one-hot encodings of the D target classes, and (W, b) are the probe weights and biases to be learned so as to minimize the usual cross-entropy loss.
Over the course of training a model, the parameters of the model change. However, probes only make sense when we refer to a given training step. We can talk about the probes at iteration n of training, when the model parameters are θ_n. These parameters are not affected by the probes. We prevent backpropagation through the model either by stopping the gradient flow (done with tf.stop_gradient in tensorflow), or simply by specifying that the only variables to be updated are the probe parameters, while we keep θ_n frozen.
3.1.1 TRAINING THE PROBES
For the purposes of this paper, we train the probes up to convergence with fixed model parameters, and we report the prediction error on the training set.
It is absolutely possible to train the probes simultaneously while training the model itself. This is a good approach if we consider how long it can take to train the model. However, this creates a potential problem if we optimize the loss of the model more quickly than the loss of the probes. This can present a skewed view of the actual situation that we would have if we trained the probes until convergence before updating the model parameters. If we accept this trade-off, then we can train the probes at the same time as the model.
In some situations, the probes might overfit the training set, so we may want to do early stopping on the validation set and report the performance for the probes on the test set. This is what we do in section 3.4 with the simple MNIST convnet.
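As a rough illustration of this training setup (our own sketch, not the authors' code), a probe can be attached to a frozen intermediate activation as follows; TensorFlow 2.x is assumed, and the number of classes, learning rate, and variable names are illustrative.
```python
import tensorflow as tf

num_classes = 10                                   # D target classes (illustrative)
probe = tf.keras.layers.Dense(num_classes)         # purely linear probe: W h + b
opt = tf.keras.optimizers.RMSprop(1e-3)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

def probe_step(hidden, labels):
    """One training step for a probe fit on a frozen intermediate activation."""
    with tf.GradientTape() as tape:
        h = tf.stop_gradient(hidden)               # the "diode" of Figure 2: no gradient
                                                   # flows back into the model from here
        logits = probe(h)                          # softmax is folded into the loss below
        loss = loss_fn(labels, logits)
    grads = tape.gradient(loss, probe.trainable_variables)
    opt.apply_gradients(zip(grads, probe.trainable_variables))   # only (W, b) move
    return loss
```
Blocking the gradient keeps the model parameters θ_n untouched, so fitting the probe is a convex problem in (W, b) even when it is trained alongside the model.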
We are still unsure if one of those variations should be preferred in general, and right now they all seem acceptable so long as we interpret the probe measurements properly.
Note that training those probes represents a convex optimization problem. In practice, this does not guarantee that they are easy to train. However, it is reassuring because it means that probes taken at time θ_n can be used as initialization for probes at time θ_{n+1}.
We use cross-entropy as the probe loss because all models studied here used cross-entropy. Other alternative losses could be justified in other settings.
3.2 PROBES ON BIFURCATING TOY MODEL
Here we show a hypothetical example in which a model contains a bifurcation with two paths that later recombine. We are interested in knowing whether those two branches are useful, or whether one is potentially redundant or useless.
[Diagram: input X splits into two branches that are concatenated before the output Ŷ; probe prediction errors are 0.75 on the upper branch, 0.60 on the lower branch, and 0.45 on their concatenation.]
For example, the two different branches might contain convolutional layers with different dimensions. They may have a different number of sublayers, or one might represent a skip connection. We assume that the branches are combined through concatenation of their features, so that nothing is lost.
For this hypothetical situation, we indicate the probe prediction errors on the graphical model. The upper path has a prediction error of 0.75, the lower path has 0.60, and their combination has 0.45. Small errors are preferred. Although the upper path has “less information” than the lower path, we can see here that it is not redundant information, because when we concatenate the features of the two branches we get a prediction error of 0.45 < 0.60.
If the concatenated layer had a prediction error of 0.60 instead of 0.45, then we could declare that the above branch did nothing useful. It may have nonzero weights, but it's still useless.
Naturally, this kind of conclusion might be entirely wrong. It might be the case that the branch above contains very meaningful features, and they simply happen to be useless to a linear classifier applied right there. The idea of using linear classification probes to understand the roles of different branches is suggested as a heuristic instead of a hard rule. Moreover, if the probes are not optimized perfectly, the conclusions drawn can be misleading.
Note that we are reporting here the prediction errors, and it might be the case that the loss is indeed lower when we concatenate the two branches, but for some reason it could fail to apply to the prediction error.
[Figure 3 plot: optimal prediction error of a linear probe at each layer k = 0, ..., 32.] Figure 3: Toy experiment described in section 3.3, with linearly separable data (two labels), an untrained MLP with 32 layers, and probes at every layer. We report the prediction error for every probe, where 0.50 would be the performance of a coin flip and 0.00 would be ideal. Note that layer 0 here corresponds to the raw data, and the probes are indeed able to classify it perfectly. As expected, performance degrades when applying random transformations. If many more layers were present, it would be hard to imagine how the final layer (with the model loss) can get any useful signal to backpropagate.
3.3 PROBES ON UNTRAINED MODEL
We start with a toy example to illustrate what kind of plots we expect from probes. We use a 32-layer MLP with 128 hidden units. All the layers are fully-connected and we use LeakyReLU(0.5) as the activation function.
We will run the same experiment 100 times, with a different toy dataset each time. The goal is to use a data distribution (X, Y) where X ∈ R^128 is drawn from N(0, I) and where Y ∈ {−1, 1} is linearly separable (i.e. super easy to classify with a one-layer neural network). To do this, we just pick a w ∈ R^128 for each experiment, and let the label y_n be the sign of x_n^T w.
We initialize this 32-layer MLP using Glorot normal initialization, we do not perform any training on the model, and we add one probe at every layer. We optimize the probes with RMSProp and a sufficiently small learning rate.
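A small sketch of this toy setup is given below (our own reconstruction, not the authors' code); for simplicity it fits each probe with scikit-learn's logistic regression on a held-out split instead of with RMSProp, and the sample size and seed are illustrative assumptions.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
D, n, depth = 128, 4000, 32

# Linearly separable toy data: the label is the sign of x^T w for a random w.
w = rng.normal(size=D)
X = rng.normal(size=(n, D))
y = (X @ w > 0).astype(int)
train, test = slice(0, n // 2), slice(n // 2, n)

def glorot(fan_in, fan_out):
    # Glorot (Xavier) normal initialization; the weights are never trained.
    return rng.normal(0.0, np.sqrt(2.0 / (fan_in + fan_out)), size=(fan_in, fan_out))

def leaky_relu(z, alpha=0.5):
    return np.where(z > 0.0, z, alpha * z)

h = X
for k in range(1, depth + 1):
    h = leaky_relu(h @ glorot(D, D))              # one untrained fully-connected layer
    probe = LogisticRegression(max_iter=2000).fit(h[train], y[train])
    err = 1.0 - probe.score(h[test], y[test])     # prediction error of the linear probe
    print(f"layer {k:2d}: probe error {err:.3f}")
```
Averaged over many random draws of w and of the untrained weights, this per-layer error is the quantity plotted in Figure 3.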
In Figure 3, we show the prediction error rate for every probe, averaged over the 100 experiments. The graph includes a probe applied directly on the inputs X, where we naturally have an error rate that is essentially zero (to be expected by the way we constructed our data), and which serves as a kind of sanity check. Given that we have only two possible labels, we also show a dotted horizontal line at 0.50, which is essentially the prediction error that we would get by flipping a coin. We can see that the prediction error rate climbs up towards 0.50 as we go deeper in the MLP (with untrained parameters).
This illustrates the idea that the input signal is getting mangled by the successive layers, so much that it becomes rather useless by the time we reach the final layer. We checked the mean activation norm of the hidden units at layer 32 to be sure that numerical underflow was not the cause for the degradation. Note that this situation could be avoided by using orthogonal weights.
One of the popular explanations for training difficulties in very deep models is that of the exploding/vanishing gradient (Hochreiter, 1991; Bengio et al., 1993). Here we would like to offer another complementary explanation, based on the observations from Figure 3. That is, at the beginning of training, the usefulness of layers decays as we go deeper, reaching the point where the deeper layers are utterly useless. The values contained in the last layer are then used in the final softmax classifier, and the loss backpropagates the values of the derivatives. Since that derivative is based on garbage activations, the backpropagated quantities are also garbage, which means that the weights are all going to be updated based on garbage. The weights stay bad, and we fail to train the model. The authors like to refer to that phenomenon as garbage forwardprop, garbage backprop, in reference to the popular concept of garbage in, garbage out in computer science.
3.4 PROBES ON MNIST CONVNET
In this section we run the MNIST convolutional model provided by the tensorflow github repo (tensorflow/models/image/mnist/convolutional.py). We selected that model for reproducibility and to demonstrate how to easily peek into popular models by using probes.
We start by sketching the model in Figure 4. We report the results at the beginning and the end of training in Figure 5. One of the interesting dynamics to be observed there is how useful the first layers are, despite the fact that the model is completely untrained. Random projections can be useful to classify data, and this has been studied by others (Jarrett et al., 2009).
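To make the probe insertion points of Figures 4 and 5 concrete, the sketch below rebuilds a comparable convnet with tf.keras and exposes every pre-/post-activation tensor through a second "tap" model. This is a hypothetical reconstruction: the layer sizes and tap names are our assumptions, not the exact tutorial code.
```python
import tensorflow as tf
from tensorflow.keras import layers

inp = layers.Input(shape=(28, 28, 1))
taps = {}                                            # probe insertion points
x = layers.Conv2D(32, 5, padding="same")(inp);  taps["conv1_preact"] = x
x = layers.ReLU()(x);                           taps["conv1_postact"] = x
x = layers.MaxPool2D(2)(x);                     taps["conv1_postpool"] = x
x = layers.Conv2D(64, 5, padding="same")(x);    taps["conv2_preact"] = x
x = layers.ReLU()(x);                           taps["conv2_postact"] = x
x = layers.MaxPool2D(2)(x);                     taps["conv2_postpool"] = x
x = layers.Flatten()(x)
x = layers.Dense(512)(x);                       taps["fc1_preact"] = x
x = layers.ReLU()(x);                           taps["fc1_postact"] = x
taps["logits"] = layers.Dense(10)(x)

model = tf.keras.Model(inp, taps["logits"])          # the model that is trained as usual
tap_model = tf.keras.Model(inp, taps)                # returns every probed activation at once
```
A separate linear probe (as in the earlier sketch) can then be fit on each entry returned by tap_model, without affecting the training of model itself.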
[Figure 4 diagram: input images → conv 5×5, 32 filters → ReLU → maxpool 2×2 → conv 5×5, 64 filters → ReLU → maxpool 2×2 → matmul → ReLU → matmul → output logits; two convolution layers followed by two fully-connected layers.] Figure 4: This graphical model represents the neural network that we are going to use for MNIST. The model could be written in a more compact form, but we represent it this way to expose all the locations where we are going to insert probes. The model itself is simply two convolutional layers followed by two fully-connected layers (one being the final classifier). However, we insert probes on each side of each convolution, activation function, and pooling function. This is a bit overzealous, but the small size of the model makes this relatively easy to do.
[Figure 5 plots: test prediction error for the probes at input, conv1_preact, conv1_postact, conv1_postpool, conv2_preact, conv2_postact, conv2_postpool, fc1_preact, fc1_postact, and logits; (a) after initialization, no training; (b) after training for 10 epochs.] Figure 5: We represent here the test prediction error for each probe, at the beginning and at the end of training. This measurement was obtained through early stopping based on a validation set of 10^4 elements. The probes are prevented from overfitting the training data. We can see that, at the beginning of training (on the left), the randomly-initialized layers were still providing useful transformations. The test prediction error goes from 8% to 2% simply using those random features. The biggest impact comes from the first ReLU. At the end of training (on the right), the test prediction error is improving at every layer (with the exception of a minor kink on fc1_preact).
3.5 PROBES ON INCEPTION V3
We have performed an experiment using the Inception v3 model on the ImageNet dataset (Szegedy et al., 2015; Russakovsky et al., 2015). This is very similar to what is presented in section 3.4, but on a much larger scale. Due to the challenge presented by this experiment, we were not able to do everything that we had hoped. We have chosen to put those results in the appendix section A.2.
Certain layers of the Inception v3 model have approximately one million features. With 1000 classes, this means that some probes can take even more storage space than the whole model itself. In these cases, one of the creative solutions was to try to use only a random subset of the features. This is discussed in the appendix section A.1.
3.6 AUXILIARY LOSS BRANCHES AND SKIP CONNECTIONS
Here we investigate two ways to modify a deep model in order to facilitate training. Our goal is not to convince the reader that they should implement these suggestions in their own models. Rather, we want to demonstrate the usefulness of the linear classifier probes as a way to better understand what is happening in their deep networks.
In both cases we use a toy model with 128 fully-connected layers with 128 hidden units in each layer. We train on MNIST, and we use Glorot initialization along with leaky ReLUs. We choose this model because we wanted a pathologically deep model without getting involved in architecture details.
The model is pathological in the sense that smaller models can easily be designed to achieve better performance, but also in the sense that the model is so deep that it is very hard to train it with gradient descent methods. From our experiments, the maximal depth where things start to break down was depth 64, hence the choice here of using depth 128.
In the first scenario, we add one linear classifier at every 16 layers. These classifiers contribute to the loss minimization. They are not probes. This is very similar to what happens in the famous Inception model where “auxiliary heads” are used (Szegedy et al., 2015). This is illustrated in Figure 6a, and it works nicely. The untrainable model is now made trainable through a judicious use of auxiliary classifier losses. The results are shown in Figure 7.
In the second scenario, we look at adding a bridge (a skip connection) between layer 0 and layer 64. This means that the input features to layer 64 are obtained by concatenating the output of layer 63 with the features of layer 0. The idea here is that we might observe that the model would effectively train a submodel of depth 64, using the skip connection, and shift gears later to use the whole depth of 128 layers. This is illustrated in Figure 6b, and the results are shown in Figure 8. It does not work as expected, but the failure of this approach is visualized very nicely with probes and serves as a great example of their usefulness in diagnosing problems with models.
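The two modifications can be sketched as follows (a hypothetical tf.keras reconstruction; the flattened 784-dimensional input, the choice of depths and widths, and summing the auxiliary losses with equal weight are our assumptions, not details taken from the paper).
```python
import tensorflow as tf
from tensorflow.keras import layers

def deep_mlp(depth=128, width=128, aux_every=None, skip_to=None):
    inp = layers.Input(shape=(784,))                 # flattened MNIST digits
    h, outputs, layer0 = inp, [], None
    for k in range(depth):
        if skip_to is not None and k == skip_to:
            h = layers.Concatenate()([h, layer0])    # bridge from layer 0 to layer k
        h = layers.Dense(width, kernel_initializer="glorot_normal")(h)
        h = layers.LeakyReLU(0.5)(h)
        if k == 0:
            layer0 = h
        if aux_every and (k + 1) % aux_every == 0 and k + 1 < depth:
            outputs.append(layers.Dense(10)(h))      # auxiliary head: part of the loss
    outputs.append(layers.Dense(10)(h))              # main classifier at the last layer
    return tf.keras.Model(inp, outputs)

model_aux = deep_mlp(aux_every=16)                   # scenario 1: auxiliary losses
model_skip = deep_mlp(skip_to=64)                    # scenario 2: skip connection
# With a single loss, Keras applies it to every output; at fit time the same label
# array is supplied once per output head.
model_aux.compile(optimizer="rmsprop",
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```
Probes (which do not contribute to the loss) would then be fit separately on the hidden activations of either model, exactly as in the earlier sketches.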
In both cases, there are two interesting observations that can be made with probes. We refer readers to https://youtu.be/x8j4ZHCR2FI for the full videos associated to Figures 5, 7 and 8.
Firstly, at the beginning of training, we can see how the raw data is directly useful to perform linear classification, and how this degrades as more layers are added. In the case of the skip connection in Figure 8, this has the effect of creating two bumps. This is because layer 64 also has the input data as direct parent, so it can fit a probe to that signal.
Secondly, the prediction error goes down in all probes during training, but it does so in a way that starts with the parents before it spreads to their descendants. This is even more apparent on the full video (instead of the 3 frames provided here). This is a ripple effect, where the prediction error in Figure 6b is visually spreading like a wave from the left of the plot to the right.
[Figure 6 diagrams] (a) Model with 16 layers, one guide at every 4 layers. (b) Model with 128 layers. A skip connection goes from the beginning straight to the middle of the graph. Figure 6: Examples of deep neural network with one probe at every layer (drawn above the graph). We show here the addition of extra components to help training (under the graph, in orange).
4 DISCUSSION AND FUTURE WORK
We have presented more toy models or simple models instead of larger models such as Inception v3. In the appendix section A.2 we show an experiment on Inception v3, which proved to be more challenging than expected. Future work in this domain would involve performing better experiments on a larger scale than small MNIST convnets, but still within a manageable size so we can properly train all the probes. This would allow us to produce nice videos showing many training steps in sequence.
We have received many comments from people who thought about using multi-layer probes. This can be seen as a natural extension of the linear classifier probes. One downside to this idea is that we lose the convexity property of the probes. It might be worth pursuing in a particular setting, but as of now we feel that it is premature to start using multi-layer probes. This also leads to the convoluted idea of having a regular probe inside a multi-layer probe.
(a) probes after 0 minibatches (b) probes after 500 minibatches (c) probes after 5000 minibatches. Figure 7: A pathologically deep model with 128 layers gets an auxiliary loss added at every 16 layers (refer to simplified sketch in Figure 6a if needed). This loss is added to the usual model loss at the last layer. We fit a probe at every layer to see how well each layer would perform if its values were used as a linear classifier. We plot the train prediction error associated to all the probes, at three different steps. Before adding those auxiliary losses, the model could not successfully be trained through usual gradient descent methods, but with the addition of those intermediate losses, the model is “guided” to achieve certain partial objectives. This leads to a successful training of the complete model. The final prediction error is not impressive, but the model was not designed to achieve state-of-the-art performance.
(a) probes after 0 minibatches (b) probes after 500 minibatches (c) probes after 2000 minibatches. Figure 8: A pathologically deep model with 128 layers gets a skip connection from layer 0 to layer 64 (refer to sketch in Figure 6b if needed). We fit a probe at every layer to see how well each layer would perform if its values were used as a linear classifier. We plot the train prediction error associated to all the probes, at three different steps. We can see how the model completely ignores layers 1-63, even when we train it for a long time. The use of probes allows us to diagnose that problem through visual inspection.
5 CONCLUSION
In this paper we introduced the concept of the linear classifier probe as a conceptual tool to better understand the dynamics inside a neural network and the role played by the individual intermediate layers. We are now able to ask new questions and explore new areas. We have demonstrated how these probes can be used to identify certain problematic behaviors in models that might not be apparent when we traditionally have access to only the prediction loss and error.
We hope that the notions presented in this paper can contribute to the understanding of deep neural networks and guide the intuition of researchers that design them.
ACKNOWLEDGMENTS
Yoshua Bengio is a senior CIFAR Fellow. The authors would like to acknowledge the support of the following agencies for research funding and computing support: NSERC, FQRNT, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR.
REFERENCES
Y. Bengio, Paolo Frasconi, and P. Simard. The problem of learning long-term dependencies in recurrent networks. In Neural Networks, 1993., IEEE International Conference on, pp. 1183–1188. IEEE, 1993.
Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma, Technische Universität München, pp. 91, 1991.
Kevin Jarrett, Koray Kavukcuoglu, Yann LeCun, et al. What is the best multi-stage architecture for object recognition? In 2009 IEEE 12th International Conference on Computer Vision, pp. 2146–2153. IEEE, 2009.
David MacKay. Information Theory, Inference and Learning Algorithms. Cambridge University Press, 2003.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015.
Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pp. 3320–3328, 2014.
A APPENDIX
A.1 PROPOSAL: TRAIN PROBES USING ONLY SUBSETS OF FEATURES
One of the challenges to train on the Inception v3 model is that many of the layers have more than 200,000 features. This is even worse in the first convolution layers before the pooling operations, where we have around a million features. With 1000 output classes, a probe using 200,000 features has a weight matrix taking almost 1GB of storage.
When using stochastic gradient descent, we require space to store the gradients, and if we use momentum this ends up taking three times the memory on the GPU. This is even worse for RMSProp. Normally this might be acceptable for a model of reasonable size, but this turns into almost 4GB overhead per probe.
We do not have to put a probe at every layer. We can also train probes independently. We can put probe parameters on the CPU instead of the GPU, if necessary. But when the act of training probes increases the complexity of the experiment beyond a certain point, the researcher might decide that they are not worth the trouble.
We propose the following solution: for a given probe, use a fixed random subset of features instead of the whole set of features.
With certain assumptions about the independence of the features and their shared role in predicting the correct class, we can make certain claims about how few features are actually required to assess the prediction error of a probe. We thank Yaroslav Bulatov for suggesting this approach.
We ran an experiment in which we used data X ∼ N(0, I_D) where D = 100,000 is the number of features. We used K = 1000 classes and we generated the ground truth using a matrix W of shape (D, K). To obtain the class of a given x, we simply multiply x^T W and take the argmax over the K components of the result: x ∼ N(0, I_D), y = argmax_{k=1..K} x^T W[:, k]. We selected a matrix W by drawing all its individual coefficients from a univariate Gaussian.
Instead of using D = 100,000 features, we used instead only 1000 features picked at random. We trained a linear classifier on those features and, experimentally, it was relatively easy to achieve a 4% error rate on our first try. With all the features, we could achieve a 0% error rate, so 4% might not look great. We have to keep in mind that we have K = 1000 classes, so random guesses yield an error rate of 99.9%.
This can reduce the storage cost for a probe from 1GB down to 10MB. The former is hard to justify, and the latter is almost negligible.
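This appendix experiment is easy to approximate in a few lines. The sketch below is our own scaled-down version (smaller D, K, and sample size so it runs quickly, and scikit-learn's logistic regression as the probe), so the exact error rates will differ from the 4% reported above.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
D, K, n, subset = 2000, 100, 6000, 500        # scaled down from D = 100,000 and K = 1000

W = rng.normal(size=(D, K))                   # ground-truth linear map
X = rng.normal(size=(n, D))
y = np.argmax(X @ W, axis=1)                  # class = argmax_k of x^T W[:, k]

idx = rng.choice(D, size=subset, replace=False)   # fixed random subset of the features
Xs = X[:, idx]

half = n // 2
probe = LogisticRegression(max_iter=500).fit(Xs[:half], y[:half])
err = 1.0 - probe.score(Xs[half:], y[half:])
print(f"probe error with {subset}/{D} features: {err:.3f} (chance: {1 - 1/K:.3f})")
```
The point of the sketch is only that a probe trained on a small random slice of the features can still be far better than chance, which is what makes the memory savings described above acceptable.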
A.2 PROBES ON INCEPTION V3
We are interested in putting linear classifier probes in the popular Inception v3 model, training on the ImageNet dataset. We used the tensorflow implementation available online (tensorflow/models/inception/inception) and ran it on one GPU for 2 weeks.
As described in section A.1, one of the challenges is that the number of features can be prohibitively large, and we have to consider taking only a subset of the features. In this particular experiment, we have had the most success by taking 1000 random features for each probe. This gives certain layers an unfair advantage if they start with 4000 features and we kept 1000, whereas in other cases the probe insertion point has 426,320 features and we keep 1000. There was no simple “fair” solution. That being said, 13 out of the 17 probes have more than 100,000 features, and 11 of those probes have more than 200,000 features, so things were relatively comparable.
We put linear classifier probes at certain strategic layers. We represent this using boxes in the following Figure 9. The prediction error of the probe given by the last layer of each box is illustrated by coloring the box. Red is bad (high prediction error) and green/blue is good (low prediction error).
We would have liked to have a video to show the evolution of this during training, but this experiment had to be scaled back due to the large computational demands. We show here the prediction errors at three moments of training. These correspond roughly to the beginning of training, then after a few days, and finally after a week.
[Figure 9 diagrams: probe training prediction error (color scale 0.0 to 1.0) over the Inception v3 main head and auxiliary head, shown at minibatches 001515, 050389, 100876 and 308230.] Figure 9: Inserting a probe at multiple moments during training the Inception v3 model on the ImageNet dataset. We represent here the prediction error evaluated at a random subset of 1000 features. As expected, at first all the probes have a 100% prediction error, but as training progresses we see that the model is getting better. Note that there are 1000 classes, so a prediction error of 50% is much better than a random guess. The auxiliary head, shown under the model, was observed to have a prediction error that was slightly better than the main head. This is not necessarily a condition that will hold at the end of training, but merely an observation.
rkDtPG7Ee
SkBsEQYll
ICLR.cc/2017/conference/-/paper85/official/review
{"title": "marginal novelty", "rating": "2: Strong rejection", "review": "this paper proposes to use feed-forward neural networks to learn similarity preserving embeddings. They also use the proposed idea to represent out-of-vocabulary words using the words in given context. \n\nFirst, considering the related work [1,2] the proposed approach brings marginal novelty. Especially\nContext Encoders is just a small improvement over word2vec. \n\nExperimental setup should provide more convincing results other than visualizations and non-standard benchmark for NER evaluation with word vectors [3].\n\n[1] http://papers.nips.cc/paper/5477-scalable-non-linear-learning-with-adaptive-polynomial-expansions.pdf\n[2] http://deeplearning.cs.cmu.edu/pdfs/OJA.pca.pdf\n[3] http://www.anthology.aclweb.org/P/P10/P10-1040.pdf", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Learning similarity preserving representations with neural similarity and context encoders
["Franziska Horn", "Klaus-Robert M\u00fcller"]
We introduce similarity encoders (SimEc), which learn similarity preserving representations by using a feed-forward neural network to map data into an embedding space where the original similarities can be approximated linearly. The model can easily compute representations for novel (out-of-sample) data points, even if the original pairwise similarities of the training set were generated by an unknown process such as human ratings. This is demonstrated by creating embeddings of both image and text data. Furthermore, the idea behind similarity encoders gives an intuitive explanation of the optimization strategy used by the continuous bag-of-words (CBOW) word2vec model trained with negative sampling. Based on this insight, we define context encoders (ConEc), which can improve the word embeddings created with word2vec by using the local context of words to create out-of-vocabulary embeddings and representations for words with multiple meanings. The benefit of this is illustrated by using these word embeddings as features in the CoNLL 2003 named entity recognition task.
["Natural language processing", "Unsupervised Learning", "Supervised Learning"]
https://openreview.net/forum?id=SkBsEQYll
https://openreview.net/pdf?id=SkBsEQYll
https://openreview.net/forum?id=SkBsEQYll&noteId=rkDtPG7Ee
LEARNING SIMILARITY PRESERVING REPRESENTATIONS WITH NEURAL SIMILARITY AND CONTEXT ENCODERS
Franziska Horn & Klaus-Robert Müller, Machine Learning Group, Technische Universität Berlin, Berlin, Germany. franziska.horn@campus.tu-berlin.de, klaus-robert.mueller@tu-berlin.de
ABSTRACT
We introduce similarity encoders (SimEc), which learn similarity preserving representations by using a feed-forward neural network to map data into an embedding space where the original similarities can be approximated linearly. The model can easily compute representations for novel (out-of-sample) data points, even if the original pairwise similarities of the training set were generated by an unknown process such as human ratings. This is demonstrated by creating embeddings of both image and text data. Furthermore, the idea behind similarity encoders gives an intuitive explanation of the optimization strategy used by the continuous bag-of-words (CBOW) word2vec model trained with negative sampling. Based on this insight, we define context encoders (ConEc), which can improve the word embeddings created with word2vec by using the local context of words to create out-of-vocabulary embeddings and representations for words with multiple meanings. The benefit of this is illustrated by using these word embeddings as features in the CoNLL 2003 named entity recognition task.
1 INTRODUCTION
Many dimensionality reduction or manifold learning algorithms optimize for retaining the pairwise similarities, distances, or local neighborhoods of data points. Classical scaling (Cox & Cox, 2000), kernel PCA (Schölkopf et al., 1998), isomap (Tenenbaum et al., 2000), and LLE (Roweis & Saul, 2000) achieve this by performing an eigendecomposition of some similarity matrix to obtain a low dimensional representation of the original data. However, this is computationally expensive if a lot of training examples are available. Additionally, out-of-sample representations can only be created when the similarities to the original training examples can be computed (Bengio et al., 2004).
For some methods such as t-SNE (van der Maaten & Hinton, 2008), great effort was put into extending the algorithm to work with large datasets (van der Maaten, 2013) or to provide an explicit mapping function which can be applied to new data points (van der Maaten, 2009). Current attempts at finding a more general solution to these issues are complex and require the development of specific cost functions and constraints when used in place of existing algorithms (Bunte et al., 2012), which limits their applicability to new objectives.
In this paper we introduce a new neural network architecture, that we will denote as similarity encoder (SimEc), which is able to learn representations that can retain arbitrary pairwise relations present in the input space, even those obtained from unknown similarity functions such as human ratings. A SimEc can learn a linear or non-linear mapping function to project new data points into a lower dimensional embedding space. Furthermore, it can take advantage of large datasets since the objective function is optimized iteratively using stochastic mini-batch gradient descent.
We show on both image and text datasets that SimEcs can, on the one hand, recreate solutions found by traditional methods such as kPCA or isomap, and, on the other hand, obtain meaningful embeddings from similarities based on human labels.
Additionally, we propose the new context encoder (ConEc) model, a variation of similarity encoders for learning word embeddings, which extends word2vec (Mikolov et al., 2013b) by using the local context of words as input to the neural network to create representations for out-of-vocabulary words and to distinguish between multiple meanings of words. This is shown to be advantageous, for example, if the word embeddings are used as features in a named entity recognition task as demonstrated on the CoNLL 2003 challenge.
2 SIMILARITY ENCODERS
We propose a novel dimensionality reduction framework termed similarity encoder (SimEc), which can be used to learn a linear or non-linear mapping function for computing low dimensional representations of data points such that the original pairwise similarities between the data points in the input space are preserved in the embedding space. For this, we borrow the “bottleneck” neural network (NN) architecture idea from autoencoders (Tishby et al., 2000; Hinton & Salakhutdinov, 2006). Autoencoders aim to transform the high dimensional data points into low dimensional embeddings such that most of the data's variance is retained. Their network architecture has two parts: The first part of the network maps the data points from the original feature space to the low dimensional embedding (at the bottleneck). The second part of the NN mirrors the first part and projects the embedding back to a high dimensional output. This output is then compared to the original input to compute the reconstruction error of the training samples, which is used in the backpropagation procedure to tune the network's parameters. After the training is complete, i.e. the low dimensional embeddings encode enough information about the original input samples to allow for their reconstruction, the second part of the network is discarded and only the first part is used to project data points into the low dimensional embedding space. Similarity encoders have a similar twofold architecture, where in the first part of the network, the data is mapped to a low dimensional embedding, and then in the second part (which is again only used during training), the embedding is transformed such that the error of the representation can be computed. However, since here the objective is to retain the (non-linear) pairwise similarities instead of the data's variance, the second part of the NN does not mirror the first like it does in the autoencoder architecture.
[Figure 1 diagram: input x_i ∈ R^D → feed-forward NN → embedding y_i ∈ R^d (bottleneck) → W_{-1} ∈ R^{d×N} → output s' ∈ R^N, compared to the target s_i ∈ R^N.] Figure 1: Similarity encoder (SimEc) architecture.
The similarity encoder architecture (Figure 1) uses as the first part of the network a flexible non-linear feed-forward neural network to map the high dimensional input data points x_i ∈ R^D to a low dimensional embedding y_i ∈ R^d (at the bottleneck). As we make no assumptions on the range of values the embedding can take, the last layer of the first part of the NN (i.e. the one resulting in the embedding) is always linear. For example, with two additional non-linear hidden layers, the embedding would be computed as y_i = σ_1(σ_0(x_i W_0) W_1) W_2, where σ_0 and σ_1 denote your choice of non-linear activation functions (e.g. tanh, sigmoid, or relu), but there is no non-linearity applied after multiplying with W_2.
The second part of the network then consists of a single additional layer with the weight matrix $W_1 \in \mathbb{R}^{d \times N}$ to project the embedding to the output, the approximated similarities $s' \in \mathbb{R}^N$:
$$s' = \sigma_1(y_i W_1).$$
These approximated similarities are then compared to the target similarities (for one data point this is the corresponding row $s_i \in \mathbb{R}^N$ of the similarity matrix $S \in \mathbb{R}^{N \times N}$ of the $N$ training samples) and the computed error is used to tune the network's parameters with backpropagation.

For the model to learn most efficiently, the exact form of the cost function to optimize as well as the type of non-linearity $\sigma_1$ applied when computing the network's output should be chosen with respect to the type of target similarities that the model is supposed to preserve. In the experimental section of the paper we are considering two application scenarios of SimEcs: a) to obtain the same low dimensional embedding as found by spectral methods such as kPCA, and b) to embed data points such that binary similarity relations obtained from human labels are preserved.

In the first case (further discussed in the next section), we omit the non-linearity when computing the output of the network, i.e. $s' = y_i W_1$, since the target similarities, computed by some kernel function, are not necessarily constrained to lie in a specific interval. As the cost function to minimize we choose the mean squared error between the output (approximated similarities) and the original (target) similarities. A regularization term is added to encourage the weights of the last layer ($W_1$) to be orthogonal.¹ The model's objective function optimized during training is therefore:
$$\min \frac{1}{N}\sum_{i=1}^{N} \|s_i - s'\|_2^2 + \lambda \frac{1}{d^2 - d}\left\|W_1 W_1^\top - \mathrm{diag}(W_1 W_1^\top)\right\|_1,$$
where $\|\cdot\|_p$ denotes the respective p-norms for vectors and matrices and $\lambda$ is a hyperparameter to control the strength of the regularization.

In the second case, the target similarities are binary and it therefore makes sense to use a non-linear activation function in the final layer when computing the output of the network to ensure the approximated similarities are between 0 and 1 as well:²
$$s' = \sigma_1(y_i W_1) \quad \text{with} \quad \sigma_1(z) = \frac{1}{1 + e^{-10(z - 0.5)}}.$$
While the mean squared error between the target and approximated similarities would still be a natural choice of cost function to optimize, with the additional non-linearity in the output layer, learning might be slow due to small gradients and we therefore instead optimize the cross-entropy:
$$\min\; -\frac{1}{N}\sum \left[s_i \ln(s') + (1 - s_i)\ln(1 - s')\right].$$
For a different application scenario, yet another setup might lead to the best results. When using SimEcs in practice, we recommend to first try the first setup, i.e. keeping the output layer linear and minimizing the mean squared error, as this often already gives quite good results.

After the training is completed, only the first part of the neural network, which maps the input to the embedding, is used to create the representations of new data points.
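To make the two training setups concrete, the following is a minimal PyTorch sketch of a SimEc. It is not the authors' released implementation (linked in Section 2.3); the hidden layer sizes, the regularization weight `lam`, and all variable names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SimEc(nn.Module):
    """Similarity encoder: feed-forward net -> linear embedding -> last layer W1 (training only)."""

    def __init__(self, in_dim, embed_dim, n_targets, hidden=(), binary_targets=False):
        super().__init__()
        layers, d = [], in_dim
        for h in hidden:                                   # optional non-linear hidden layers
            layers += [nn.Linear(d, h), nn.Tanh()]
            d = h
        layers.append(nn.Linear(d, embed_dim))             # the embedding layer is always linear
        self.encoder = nn.Sequential(*layers)
        self.W1 = nn.Linear(embed_dim, n_targets, bias=False)
        self.binary_targets = binary_targets

    def forward(self, x):
        y = self.encoder(x)                                # low dimensional embedding
        s = self.W1(y)                                     # approximated similarities s'
        if self.binary_targets:                            # scaled/shifted sigmoid (second setup)
            s = torch.sigmoid(10.0 * (s - 0.5))
        return y, s


def simec_loss(model, s_pred, s_target, lam=1e-3):
    if model.binary_targets:                               # cross-entropy for binary targets
        eps = 1e-7
        loss = -(s_target * torch.log(s_pred + eps)
                 + (1 - s_target) * torch.log(1 - s_pred + eps)).mean()
    else:                                                  # mean squared error (first setup)
        loss = ((s_target - s_pred) ** 2).sum(dim=1).mean()
    # penalty on the off-diagonal entries of W1 W1^T (in the paper's d x N orientation)
    G = model.W1.weight.t() @ model.W1.weight              # (d, d)
    d = G.shape[0]
    ortho = (G - torch.diag(torch.diag(G))).abs().sum() / max(d * d - d, 1)
    return loss + lam * ortho


# usage sketch: X (n_samples, in_dim), S (n_samples, n_targets) as float tensors
# model = SimEc(X.shape[1], 2, S.shape[1])                 # linear SimEc; add hidden=(100,) etc.
# opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# for epoch in range(25):
#     for idx in torch.randperm(len(X)).split(128):        # stochastic mini-batch training
#         _, s_pred = model(X[idx])
#         opt.zero_grad()
#         simec_loss(model, s_pred, S[idx]).backward()
#         opt.step()
# embedding = model.encoder(X_new)                         # only the first part is kept
```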
Depending on the complexity ofthe feed-forward NN, the mapping function learned by similarity encoders can be linear or non-linear,and because of the iterative optimization using stochastic mini-batch gradient descent, large amountsof data can be utilized to learn optimal representations.32.1 R ELATION TO KERNEL PCAKernel PCA (kPCA) is a popular non-linear dimensionality reduction algorithm, which performs theeigendecomposition of a kernel matrix to obtain low dimensional representations of the data points1To get embeddings similar to those obtained by kPCA, orthogonal weights in the last layer of the NN helpas they correspond to the orthogonal eigenvectors of the kernel matrix found by kPCA.2This scaled and shifted sigmoid function maps values between 0 and 1 almost linearly while thresholdingvalues outside this interval.3To speed up the training procedure and limit memory requirements for large datasets, the columns of thesimilarity matrix can also be subsampled (yielding S2RNn), i.e. the number of target similarities (and thedimensionality of the output layer) is n < N , however all Ntraining examples can still be used as input to trainthe network.3Under review as a conference paper at ICLR 2017(Schölkopf et al., 1998). However, if the kernel matrix is very large this becomes computationallyvery expensive. Additionally, there are constraints on possible kernel functions (should be positivesemi-definite) and new data points can only be embedded in the lower dimensional space if theirkernel map (i.e. the similarities to the original training points) can be computed. As we show below,SimEc can optimize the same objective as kPCA but addresses these shortcomings.The general idea is that both kPCA and SimEc embed the Ndata points in a feature space where thegiven target similarities can be approximated linearly (i.e. with the scalar product of the embeddingvectors). When the error between the approximated ( S0) and the target similarities ( S) is computed asthe mean squared error, kPCA finds the optimal approximation by performing the eigendecompositionof the (centered) target similarity matrix, i.e.S0=YY>;whereY2RNdis the low dimensional embedding of the data based on the eigenvectors belongingto thedlargest eigenvalues of S.In addition to the embedding itself, it is often desired to have a parametrized mapping function,which can be used to project new (out-of-sample) data points into the embedding space. If the targetsimilarity matrix is the linear kernel, i.e. S=XX>whereX2RNDis the given input data,this can easily be accomplished with traditional PCA. Here, the covariance matrix of the centeredinput data, i.e. C=X>Xis decomposed to obtain a matrix with parameters, ~W2RDd, based onthe eigenvectors belonging to the dlargest eigenvalues of the covariance matrix. Then the optimalembedding (i.e. the same solution obtained by linear kPCA) can be computed asY=X~W:This serves as a mapping function, with which new data points can be easily projected into the lowerdimensional embedding space.When using a similarity encoder to embed data in a low dimensional space where the linear similaritiesare preserved, the SimEc’s architecture would consist of a neural network with a single linear layer,i.e. 
the parameter matrix W0, to project the input data Xto the embedding Y=XW 0, and anothermatrixW12RdNused to approximate the similarities asS0=YW1:From these formulas one can immediately see the link between linear similarity encoders and PCA /linear kPCA: once the parameters of the neural network are tuned correctly, W0would correspondto the mapping matrix ~Wfound by PCA and W1could be interpreted as Y>, i.e.Ywould be thesame eigenvector based embedding as found with linear kPCA.Finding the corresponding function to map new data points into the embedding space is trivial forlinear kPCA, but this is not the case for other kernel functions. While it is still possible to findthe optimal embedding with kPCA for non-linear kernel functions, the mapping function remainsunknown and new data points can only be projected into the embedding space if we can computetheir kernel map, i.e. the similarities to the original training examples (Bengio et al., 2004). Someattempts were made to manually define an explicit mapping function to represent data points inthe kernel feature space, however this only works for specific kernels and there exists no generalsolution (Rahimi & Recht, 2007). As neural networks are universal function approximators, with theright architecture similarity encoders could instead learn arbitrary mapping functions for unknownsimilarities to arrive at data driven kernel learning solutions.2.2 M ODEL OVERVIEWThe properties of similarity encoders are summarized in the following. The objective of this dimen-sionality reduction approach is to retain pairwise similarities between data points in the embeddingspace. This is achieved by tuning the parameters of a neural network to obtain a linear or non-linearmapping (depending on the network’s architecture) from the high dimensional input to the lowdimensional embedding. Since the cost function is optimized using stochastic mini-batch gradientdescent, we can take advantage of large datasets for training. The embedding for new test points canbe easily computed with the explicit mapping function in the form of the tuned neural network. Andsince there is no need to compute the similarity of new test examples to the original training data forout-of-sample solutions (like with kPCA), the target similarities can be generated by an unknownprocess such as human similarity judgments.4Under review as a conference paper at ICLR 20172.3 E XPERIMENTSIn the following experiments we demonstrate that similarity encoders can, on the one hand, reach thesame solution as kPCA, and, on the other hand, generate meaningful embeddings from human labels.To illustrate that this is independent of the type of data, we present results obtained both on the wellknown MNIST handwritten digits dataset as well as the 20 newsgroups text corpus. Further details aswell as the code to replicate these experiments and more is available online.4We compare the embedding found with linear kPCA to that created with a linear similarity encoder(consisting of one linear layer mapping the input to the embedding and a second linear layer to projectthe embedding to the output, i.e. computing the approximated similarities). Additionally, we showthat a non-linear SimEc can approximate the solution found with isomap (i.e. the eigendecompositionof the geodesic distance matrix). 
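The comparison between linear kPCA and a linear SimEc described here can be sketched end-to-end on synthetic data; the data dimensions, learning rate, and number of optimization steps below are illustrative assumptions and may need tuning.

```python
import numpy as np
import torch

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 50)).astype(np.float32)   # stand-in for real input features
X -= X.mean(axis=0)                                      # centering X also centers the linear kernel
S = X @ X.T                                              # target similarities: (centered) linear kernel

# kPCA / classical scaling: eigendecomposition, keep the d largest eigenvalues
d = 2
vals, vecs = np.linalg.eigh(S)                           # eigenvalues in ascending order
Y_kpca = vecs[:, -d:] * np.sqrt(vals[-d:])

# linear SimEc on the same targets: W0 maps inputs to the embedding, W1 to the output
Xt, St = torch.from_numpy(X), torch.from_numpy(S)
W0 = torch.empty(X.shape[1], d, requires_grad=True)
W1 = torch.empty(d, X.shape[0], requires_grad=True)
torch.nn.init.normal_(W0, std=0.01)
torch.nn.init.normal_(W1, std=0.01)
opt = torch.optim.Adam([W0, W1], lr=0.05)
for step in range(3000):                                 # full-batch for simplicity
    S_hat = (Xt @ W0) @ W1
    loss = ((St - S_hat) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# both are rank-d approximations of S, so the embeddings should agree up to rotation/scaling;
# the reconstructed similarity matrices should be close after sufficient training
Y_simec = (Xt @ W0).detach().numpy()
print(np.abs(Y_kpca @ Y_kpca.T - Y_simec @ W1.detach().numpy()).mean())
```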
We found that for optimal results the kernel matrix used as the targetsimilarity matrix for the SimEc should first be centered (as it is being done for kPCA as well (Mülleret al., 2001)).In a second step, we show that SimEcs can learn the mapping to a low dimensional embedding forarbitrary similarity functions and reliably create representations for new test samples without the needto compute their similarities to the original training examples, thereby going beyond the capabilitiesof kPCA. For both datasets we illustrate this by using the class labels assigned to the samples byhuman annotators to create the target similarity matrix for the training fold of the data, i.e. Sis1fordata points belonging to the same class and 0everywhere else. We compare the solutions found bySimEc architectures with a varying number of additional non-linear hidden layers in the first partof the network (while keeping the embedding layer linear as before) to show how a more complexnetwork improves the ability to map the data into an embedding space in which the class-basedsimilarities are retained.MNIST The MNIST dataset contains 2828pixel images depicting handwritten digits. For ourexperiments we randomly subsampled 10k images from all classes, of which 80% are assigned to thetraining fold and the remaining 20% to the test fold (in the following plots, data points belonging tothe training set are displayed transparently while the test points are opaque). As shown in Figure 2,the embeddings of the MNIST dataset created with linear kPCA and a linear similarity encoder,which uses as target similarities the linear kernel matrix, are almost identical (up to a rotation). Thesame holds true for the isomap embedding, which is well approximated by a non-linear SimEc withtwo hidden layers using the geodesic distances between the data points as targets (Figure 8 in theAppendix). When optimizing SimEcs to retain the class-based similarities (Figure 3), additionalFigure 2: MNIST digits visualized in two dimensions by linear kPCA and a linear SimEc.non-linear hidden layers in the feed-forward NN can improve the embedding by further separatingdata points belonging to different classes in tight clusters. As it can be seen, the test points (opaque)are nicely mapped into the same locations as the corresponding training points (transparent), i.e.the model learns to associate the input pixels with the class clusters only based on the imposedsimilarities between the training data points.4https://github.com/cod3licious/simec/examples_simec.ipynb5Under review as a conference paper at ICLR 2017Figure 3: MNIST digits visualized in two dimensions by SimEcs with an increasing number ofnon-linear hidden layers and the objective to retain similarities based on class membership.20 newsgroups The 20 newsgroups dataset consists of around 18k newsgroup posts assigned to20 different topics. We take a subset of seven categories and use the original train/test split ( 4.1kand2.7k samples respectively) and remove metadata such as headers to avoid overfitting.5All textdocuments are transformed into 46k dimensional tf-idf feature vectors, which are used as input tothe SimEc and to compute the linear kernel matrix of the training fold. The embedding created withlinear kPCA is again well approximated by the solution found with a corresponding linear SimEc(Figure 9 in the Appendix). 
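As an illustration of this data preparation, here is a scikit-learn sketch for the 20 newsgroups setup; the category list and the exact metadata-removal flags are assumptions, not necessarily the paper's choices.

```python
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer

# seven categories (illustrative; the paper does not list the exact subset here)
cats = ["sci.med", "sci.space", "rec.autos", "rec.sport.baseball",
        "comp.graphics", "talk.politics.guns", "misc.forsale"]
train = fetch_20newsgroups(subset="train", categories=cats,
                           remove=("headers", "footers", "quotes"))
test = fetch_20newsgroups(subset="test", categories=cats,
                          remove=("headers", "footers", "quotes"))

vect = TfidfVectorizer()
X_train = vect.fit_transform(train.data)        # sparse (n_train, ~46k) tf-idf features
X_test = vect.transform(test.data)

# target similarities are computed on the training fold only:
# a) linear kernel for the kPCA comparison (additionally centered, as noted above) ...
S_kernel = (X_train @ X_train.T).toarray()
# b) ... or binary class-based similarities: 1 for the same newsgroup, 0 otherwise
y = train.target
S_class = (y[:, None] == y[None, :]).astype(np.float32)

# the SimEc is trained on (X_train, S_*); test documents are embedded afterwards with the
# learned mapping alone - no similarities to the training fold need to be computed
```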
Additionally, this serves as an example where traditional PCA is not anoption to obtain the corresponding mapping matrix for the linear kPCA solution, as due to the highdimensionality of the input data and comparatively low number of samples, the empirical covariancematrix would be poorly estimated and too large to decompose into eigenvalues and -vectors. Withthe objective to retain the class-based similarities, a SimEc with a non-linear hidden layer clustersdocuments by their topics (Figure 4).3 C ONTEXT ENCODERSRepresentation learning is very prominent in the field of natural language processing (NLP). Forexample, word embeddings learned by neural network language models were shown to improve theperformance when used as features for supervised learning tasks such as named entity recognition(NER) (Collobert et al., 2011; Turian et al., 2010). The popular word2vec model (Figure 5) learnsmeaningful word embeddings by considering only the words’ local contexts and thanks to its shallowarchitecture it can be trained very efficiently on large corpora. However, an important limiting factorof current word embedding models is that they only learn the representations for words from a fixedvocabulary. This means, if in a task we encounter a new word which was not present in the texts usedfor training, we can not create an embedding for this word without repeating the time consuming5http://scikit-learn.org/stable/datasets/twenty_newsgroups.html6Under review as a conference paper at ICLR 2017Figure 4: 20 newsgroups texts visualized in two dimensions by a non-linear SimEc with one hiddenlayer and the objective to preserve the similarities based on class membership in the embedding.training procedure of the model.6Additionally, word2vec, like many other approaches, only learnsa single representation for every word. However, it is often the case that a single word can havemultiple meanings, e.g. “Washington” is both the name of a US state as well as a former president. Itis only the local context in which these words appear that lets humans resolve this ambiguity andidentify the proper sense of the word in question. While attempts were made to improve this, theylack flexibility as they require a clustering of word contexts beforehand (Huang et al., 2012), whichstill does not guarantee that all possible meanings of a word have been identified prior in the trainingdocuments. Other approaches require additional labels such part-of-speech tags (Trask et al., 2015)or other lexical resources like WordNet (Rothe & Schütze, 2015) to create word embeddings whichdistinguish between the different senses of a word.As a further contribution of this paper we provide a link between the successful word2vec naturallanguage model and similarity encoders and thereby propose a new model we call context encoder(ConEc), which can efficiently learn word embeddings from huge amounts of training data andadditionally make use of local contexts to create representations for out-of-vocabulary words andhelp distinguish between multiple meanings of words.target wordThe black cat slept on the bed. 
[Figure 5: Continuous BOW word2vec model trained using negative sampling (Mikolov et al., 2013a;b; Goldberg & Levy, 2014). The figure marks the target word and the context words in the example sentence above ("The black cat slept on the bed") and sketches the training phase with the parameter matrices $W_0, W_1 \in \mathbb{R}^{N \times d}$: 1) take the sum $l_0 \in \mathbb{R}^{1 \times d}$ of the context embeddings (rows of $W_0$); 2) select the target and $k$ noise weights $l_1 \in \mathbb{R}^{(k+1) \times d}$ from $W_1$ (negative sampling); 3) compute the error $\mathrm{err} = t - \sigma(l_0 \cdot l_1^\top)$ with $\sigma(z) = \frac{1}{1+e^{-z}}$ and binary label vector $t$, and backpropagate. After training, the target embedding $\in \mathbb{R}^{1 \times d}$ is the corresponding row of $W_0$.]

⁶ In practice these models are trained on such a large vocabulary that it is rare to encounter a word which does not have an embedding. However, there are still scenarios where this is the case, for example, it is unlikely that the term "W10281545" is encountered in a regular training corpus, but we might still want its embedding to represent a search query like "whirlpool W10281545 ice maker part".

Formally, word embeddings are d-dimensional vector representations learned for all $N$ words in the vocabulary. Word2vec is a shallow model with parameter matrices $W_0, W_1 \in \mathbb{R}^{N \times d}$, which are tuned iteratively by scanning huge amounts of texts sentence by sentence (see Figure 5). Based on some context words the algorithm tries to predict the target word between them. Mathematically this is realized by first computing the sum of the embeddings of the context words by selecting the appropriate rows from $W_0$. This vector is then multiplied by several rows selected from $W_1$: one of these rows corresponds to the target word, while the others correspond to $k$ 'noise' words, selected at random (negative sampling). After applying a non-linear activation function, the backpropagation error is computed by comparing this output to a label vector $t \in \mathbb{R}^{k+1}$, which is 1 at the position of the target word and 0 for all $k$ noise words. After the training of the model is complete, the word embedding for a target word is the corresponding row of $W_0$.

The main principle utilized when learning word embeddings is that similar words appear in similar contexts (Harris, 1954; Melamud et al., 2015). Therefore, in theory one could compute the similarities between all words by checking how many context words any two words generally have in common (possibly weighted somehow to reduce the influence of frequent words such as 'the' and 'and'). However, such a word similarity matrix would be very large, as typically the vocabulary for which word embeddings are learned comprises several 10,000 words, making it computationally too expensive to be used with similarity encoders. But this matrix would also be quite sparse, because many words in fact do not occur in similar contexts and most words only have a handful of synonyms which could be used in their place. Therefore, we can view the negative sampling approach used for word2vec (Mikolov et al., 2013b) as an approximation of the words' context based similarities: while the similarity of a word to itself is 1, if for one word we select $k$ random words out of the huge vocabulary, it is very unlikely that they are similar to the target word, i.e. we can approximate their similarities with 0. This is the main insight necessary for adapting similarity encoders to be used for learning (context sensitive) word embeddings.

[Figure 6: Context encoder (ConEc) architecture. The input $x_i \in \mathbb{R}^N$ is a context vector (here indicating "the", "black", "slept", "on" around the target word "cat"), mapped to the embedding $y_i \in \mathbb{R}^d$ and then to the output $s' \in \mathbb{R}^{k+1}$, which is compared to the target $s_i \in \mathbb{R}^{k+1}$. The input consists of a context vector, but instead of comparing the output to a full similarity vector, only the target word and $k$ noise words are considered.]

Figure 6 shows the architecture of the context encoder.
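Before turning to the ConEc training procedure, which reuses this scheme almost unchanged, here is a minimal NumPy sketch of the word2vec negative-sampling update just described. Uniform noise sampling and the learning rate are simplifying assumptions; word2vec draws noise words from a smoothed unigram distribution.

```python
import numpy as np

def cbow_negative_sampling_step(W0, W1, context_ids, target_id, k=13, lr=0.025, rng=None):
    """One CBOW word2vec update with negative sampling, as sketched in Figure 5.

    W0, W1: (N, d) parameter matrices; context_ids: indices of the context words;
    target_id: index of the word to predict between them.
    """
    rng = rng or np.random.default_rng()
    l0 = W0[context_ids].sum(axis=0)                  # 1) sum of the context embeddings, (d,)
    noise_ids = rng.integers(0, W0.shape[0], size=k)  # 2) k random 'noise' words
    rows = np.concatenate(([target_id], noise_ids))
    l1 = W1[rows]                                     #    (k+1, d) selected output rows
    t = np.zeros(k + 1); t[0] = 1.0                   #    binary label vector
    err = t - 1.0 / (1.0 + np.exp(-l1 @ l0))          # 3) err = t - sigma(l0 . l1^T)
    W1[rows] += lr * np.outer(err, l0)                #    gradient step on the selected rows
    W0[context_ids] += lr * (err @ l1)                #    and on the context embeddings
    return err
```

After training, word2vec simply reads off a word's row of W0 as its embedding, whereas the context encoder described next instead multiplies W0 with the word's average context vector.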
For the training procedure we stick veryclosely to the optimization strategy used by word2vec: while parsing a document, we again selecta target word and its context words. As input to the context encoder network, we use a vectorxiof lengthN(i.e. the size of the vocabulary), which indicates the context words by non-zerovalues (either binary or e.g. giving lower weight to context words further away from the target word).This vector is then multiplied by a first matrix of weights W02RNdyielding a low dimensionalembeddingyi, comparable to the summed context embedding created as a first step when trainingthe word2vec model. This embedding is then multiplied by a second matrix W12RdNto yieldthe output. Instead of comparing this output vector to a whole row from a word similarity matrix(as we would with similarity encoders), only k+ 1entries are selected, namely those belonging to8Under review as a conference paper at ICLR 2017the target word as well as krandom and unrelated noise words. After applying a non-linearity wecompare these entries s02Rk+1to the binary target vector exactly as in the word2vec model anduse error backpropagation to tune the parameters.Up to now, there are no real differences between the word2vec model and our context encoders, wehave merely provided an intuitive interpretation of the training procedure and objective. The maindeviation from the word2vec model lies in the computation of the word embedding for a target wordafter the training is complete. In the case of word2vec, the word embedding is simply the row ofthe tunedW0matrix. However, when considering the idea behind the optimization procedure, weinstead propose to compute a target word’s representation by multiplying W0with the word’s averagecontext vector. This is closer to what is being done in the training procedure and additionally itenables us to compute the embeddings for out-of-vocabulary words (assuming at least most of sucha new word’s context words are in the vocabulary) as well as to place more emphasis on a word’slocal context (which helps to identify the proper meaning of the word (Melamud et al., 2015)) bycreating a weighted sum between the word’s average global and local context vectors used as input tothe ConEc.With this new perspective on the model and optimization procedure, another advancement is feasible.Since the context words are merely a sparse feature vector used as input to a neural network, thereis no reason why this input vector should not contain other features about the target word as well.For example, the feature vector could be extended to contain information about the word’s case,part-of-speech (POS) tag, or other relevant details. While this would increase the dimensionalityof the first weight matrix W0to include the additional features when mapping the input to theword’s embedding, the training objective and therefore also W1would remain unchanged. Theseadditional features could be especially helpful if details about the words would otherwise get lost inpreprocessing (e.g. by lowercasing) or to retain information about a word’s position in the sentence,which is ignored in a BOW approach. These extended ConEcs are expected to create embeddingswhich distinguish even better between the words’ different senses by taking into account, for example,if the word is used as a noun or verb in the current context, similar to the sense2vec algorithm (Trasket al., 2015). 
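A minimal NumPy sketch of this embedding computation, assuming a trained word2vec matrix W0 and precomputed global/local context count vectors; the normalization follows footnote 10 only loosely and all names are illustrative. The comparison with sense2vec continues after the sketch.

```python
import numpy as np

def conec_embedding(W0, cv_global, cv_local, w_l=0.4):
    """ConEc word embedding: multiply word2vec's W0 with the word's combined context vector.

    W0:        (N, d) trained word2vec embedding matrix;
    cv_global: (N,) context word counts of the target word over the whole training corpus,
               or None for an out-of-vocabulary word;
    cv_local:  (N,) context word counts in the current document only.
    """
    W = W0 / np.linalg.norm(W0, axis=1, keepdims=True)       # length-normalized embeddings
    cv_local = cv_local / max(cv_local.max(), 1e-9)          # scale by the maximum value
    if cv_global is None:                                    # OOV: only the local context exists
        cv = cv_local
    else:
        cv_global = cv_global / max(cv_global.max(), 1e-9)
        cv = w_l * cv_local + (1.0 - w_l) * cv_global        # weighted average of context vectors
    emb = cv @ W                                             # (d,) weighted sum of embeddings
    return emb / max(np.linalg.norm(emb), 1e-9)              # renormalize to unit length

# cv_local for a word is obtained by counting, over all occurrences of the word in the
# current document, which vocabulary words appear inside the context window around it.
```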
However, unlike sense2vec, not multiple embeddings per term are learned, instead thedimensionality of the input vector is increased to include the POS tag of the current word as a feature.3.1 E XPERIMENTSThe word embeddings learned with word2vec and context encoders are evaluated on a word analogytask (Mikolov et al., 2013a) as well as the CoNLL 2003 NER benchmark task (Tjong et al., 2003).The word2vec model used is a continuous BOW model trained with negative sampling as describedabove where k= 13 , the embedding dimensionality dis200and we use a context window of 5. Theword embeddings created by the context encoders are build directly on top of the word2vec model bymultiplying the original embeddings ( W0) with the respective context vectors. Code to replicate theexperiments can be found online.7The results of the analogy task can be found in the Appendix.8Named Entity Recognition The main advantage of context encoders is that they can use localcontext to create out-of-vocabulary (OOV) embeddings and distinguish between the different sensesof words. The effects of this are most prominent in a task such as named entity recognition (NER)where the local context of a word can make all the difference, e.g. to distinguish between the“Chicago Bears” (an organization) and the city of Chicago (a location). To test this, we used theword embeddings as features in the CoNLL 2003 NER benchmark task (Tjong et al., 2003). Theword2vec embeddings were trained on the documents used in the training part of the task.9For thecontext encoders we experimented with different combinations of local and global context vectors.The global context vectors were computed on only the training documents as well, i.e. just as with7https://github.com/cod3licious/conec8As it was recently demonstrated that a good performance on intrinsic evaluation tasks such as word similarityor analogy tasks does not necessarily transfer to extrinsic evaluation measures when using the word embeddingsas features (Chiu et al., 2016; Linzen, 2016), we consider the performance on the NER challenge as morerelevant.9Since this is a very small corpus, we trained word2vec for 25 iterations on these documents (afterwards theperformance on the development split stopped improving significantly) while usually the model is trained in asingle pass through a much larger corpus.9Under review as a conference paper at ICLR 2017the word2vec model, when applied to the test documents there are some words which don’t have aword embedding available as they did not occur in the training texts. The local context vectors on theother hand can be computed for all words occurring in the current document for which the modelshould identify the named entities. When combining these local context vectors with the global oneswe always use the local context vector as is in case there is no global vector available and otherwisecompute a weighted average between the two context vectors as wlCV local+ (1wl)CV global.10The different word embeddings were used as features with a logistic regression classifier trained onthe labels obtained from the training part of the task and the reported F1-scores were computed usingthe official evaluation script. 
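A sketch of this classification pipeline with scikit-learn: the helper `build_features`, the tag strings, and the hyperparameters are illustrative assumptions, and the official conlleval script is still needed to obtain the reported F1-scores.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ner_classifier(X_train, tags_train, X_eval):
    """Train a logistic regression on per-token embedding features and predict NER tags.

    X_train / X_eval: (n_tokens, d) arrays, one (word2vec or ConEc) embedding per token;
    tags_train:       the CoNLL tag of each training token (e.g. 'I-PER', 'I-ORG', 'O').
    """
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, tags_train)
    return clf.predict(X_eval)       # predictions are then scored with the official script

# sweeping the local-context weight as in Figure 7; build_features is assumed to assemble
# one embedding per token from w_l * CV_local + (1 - w_l) * CV_global as described above:
# for w_l in np.arange(0.0, 1.01, 0.1):
#     pred = ner_classifier(build_features(train_tokens, w_l), train_tags,
#                           build_features(test_tokens, w_l))
```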
Please note that we are using this task to show the potential of ConEcword embeddings as features in a real world task and to illustrate their advantages over the regularword2vec embeddings and did not optimize for competitive performance on this NER challenge.global wl=0.wl=0.1wl=0.2wl=0.3wl=0.4wl=0.5wl=0.6wl=0.7wl=0.8wl=0.9 wl=1.202530354045F1-Score [%]NER performance with different word embedding featurestraindevtestABCFigure 7: Results of the CoNLL 2003 NER task based on three random initializations of the word2vecmodel. The overall results are shown on the left, where the mean performance using word2vecembeddings is considered as our baseline indicated by the dashed lines, all other embeddings arecomputed with context encoders using various combinations of the words’ global and local contextvectors. On the right, the increased performance (mean and std) on the test fold achieved by usingConEc is highlighted: Enhancing the word2vec embeddings with global context information yields aperformance gain of 2:5percentage points (A). By additionally using local context vectors to createOOV word embeddings ( wl= 0) we gain another 1:7points (B). When using a combination ofglobal and local context vectors ( wl= 0:4) to distinguish between the different meanings of words,the F1-score increases by another 5:1points (C), yielding a F1-score of 39:92%, which marks asignificant improvement compared to the 30:59% reached with word2vec features.Figure 7 shows the results achieved with various word embeddings on the training, developmentand test part of the CoNLL task. As it can be seen there, taking into account the local context canyield large improvements, especially on the dev and test data. Context encoders using only the globalcontext vectors already perform better than word2vec. When using the local context vectors onlywhere the global ones are not available ( wl= 0) we can see a jump in the development and testperformance, while of course the training performance stays the same as here we have global contextvectors for all words. The best performances on all folds are achieved when averaging the global andlocal context vectors with around wl= 0:4before multiplying them with the word2vec embeddings.This clearly shows that using ConEcs with local context vectors can be very beneficial as they let uscompute word embeddings for out-of-vocabulary words as well as help distinguish between multiplemeanings of words.10The global context matrix is computed without taking the word itself into account (i.e. zero on the diagonal)to make the context vectors comparable to the local context vectors of OOV words where we can’t count thetarget word either. Both global and local context vectors are normalized by their respective maximum values,then multiplied with the length normalized word2vec embeddings and again renormalized to have unit length.10Under review as a conference paper at ICLR 20174 C ONCLUSIONRepresenting intrinsically complex data is an ubiquitous challenge in data analysis. While kernelmethods and manifold learning have made very successful contributions, their ability to scale issomewhat limited. Neural autoencoders offer scalable nonlinear embeddings, but their objective is tominimize the reconstruction error of the input data which does not necessarily preserve importantpairwise relations between data points. 
In this paper we have proposed SimEcs as a neural networkframework which bridges this gap by optimizing the same objective as spectral methods, such askPCA, for creating similarity preserving embeddings while retaining the favorable properties ofautoencoders.Similarity encoders are a novel method to learn similarity preserving embeddings and can be especiallyuseful when it is computationally infeasible to perform the eigendecomposition of a kernel matrix,when the target similarities are obtained through an unknown process such as human similarityjudgments, or when an explicit mapping function is required. To accomplish this, a feed-forwardneural network is constructed to map the data into an embedding space where the original similaritiescan be approximated linearly.As a second contribution we have defined context encoders, a practical extension of SimEcs, that canbe readily used to enhance the word2vec model with further local context information and global wordstatistics. Most importantly, ConEcs allow to easily create word embeddings for out-of-vocabularywords on the spot and distinguish between different meanings of a word based its local context.Finally, we have demonstrated the usefulness of SimEcs and ConEcs for practical tasks such as thevisualization of data from different domains and to create meaningful word embedding features for aNER task, going beyond the capabilities of traditional methods.Future work will aim to further the theoretical understanding of SimEcs and ConEcs and exploreother application scenarios where using this novel neural network architecture can be beneficial. Asit is often the case with neural network models, determining the optimal architecture as well as otherhyperparameter choices best suited for the task at hand can be difficult. While so far we mainlystudied SimEcs based on fairly simple feed-forward networks, it appears promising to consideralso deeper neural networks and possibly even more elaborate architectures, such as convolutionalnetworks, for the initial mapping step to the embedding space, as in this manner hierarchical structuresin complex data could be reflected. Note furthermore that prior knowledge as well as more generalerror functions could be employed to tailor the embedding to the desired application target(s).ACKNOWLEDGMENTSWe would like to thank Antje Relitz, Christoph Hartmann, Ivana Balaževi ́c, and other anonymousreviewers for their helpful comments on earlier versions of this manuscript. Additionally, FranziskaHorn acknowledges funding from the Elsa-Neumann scholarship from the TU Berlin.REFERENCESYoshua Bengio, Jean-François Paiement, Pascal Vincent, Olivier Delalleau, Nicolas Le Roux, and Marie Ouimet.Out-of-sample extensions for LLE, Isomap, MDS, Eigenmaps, and Spectral Clustering. Advances in neuralinformation processing systems , 16:177–184, 2004.Kerstin Bunte, Michael Biehl, and Barbara Hammer. A general framework for dimensionality-reducing datavisualization mapping. Neural Computation , 24(3):771–804, 2012.Billy Chiu, Anna Korhonen, and Sampo Pyysalo. Intrinsic evaluation of word vectors fails to predict extrinsicperformance. ACL 2016 , pp. 1, 2016.Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Naturallanguage processing (almost) from scratch. The Journal of Machine Learning Research , 12:2493–2537, 2011.Trevor F Cox and Michael AA Cox. Multidimensional scaling . CRC Press, 2000.Yoav Goldberg and Omer Levy. 
word2vec explained: Deriving Mikolov et al.’s negative-sampling word-embedding method. arXiv preprint arXiv:1402.3722 , 2014.Zellig S Harris. Distributional structure. Word , 10(2-3):146–162, 1954.11Under review as a conference paper at ICLR 2017Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks.Science , 313(5786):504–507, 2006.Eric H Huang, Richard Socher, Christopher D Manning, and Andrew Y Ng. Improving word representations viaglobal context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Associationfor Computational Linguistics: Long Papers-Volume 1 , pp. 873–882. ACL, 2012.Omer Levy, Yoav Goldberg, and Ido Dagan. Improving distributional similarity with lessons learned from wordembeddings. Transactions of the Association for Computational Linguistics , 3:211–225, 2015.Tal Linzen. Issues in evaluating semantic spaces using word analogies. arXiv preprint arXiv:1606.07736 , 2016.Oren Melamud, Ido Dagan, and Jacob Goldberger. Modeling word meaning in context with substitute vectors.InHuman Language Technologies: The 2015 Annual Conference of the North American Chapter of the ACL ,2015.Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations invector space. arXiv preprint arXiv:1301.3781 , 2013a.Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of wordsand phrases and their compositionality. In Advances in neural information processing systems , pp. 3111–3119,2013b.Klaus-Robert Müller, Sebastian Mika, Gunnar Rätsch, Koji Tsuda, and Bernhard Schölkopf. An introduction tokernel-based learning algorithms. Neural Networks, IEEE Transactions on , 12(2):181–201, 2001.Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global Vectors for Word Representa-tion. In EMNLP , volume 14, pp. 1532–1543, 2014.Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in neuralinformation processing systems , pp. 1177–1184, 2007.Sascha Rothe and Hinrich Schütze. Autoextend: Extending word embeddings to embeddings for synsets andlexemes. arXiv preprint arXiv:1507.01127 , 2015.Sam T Roweis and Lawrence K Saul. Nonlinear dimensionality reduction by locally linear embedding. Science ,290(5500):2323–2326, 2000.Bernhard Schölkopf, Alexander Smola, and Klaus-Robert Müller. Nonlinear component analysis as a kerneleigenvalue problem. Neural computation , 10(5):1299–1319, 1998.Joshua B Tenenbaum, Vin De Silva, and John C Langford. A global geometric framework for nonlineardimensionality reduction. Science , 290(5500):2319–2323, 2000.Naftali Tishby, Fernando C Pereira, and William Bialek. The information bottleneck method. arXiv preprintphysics/0004057 , 2000.EF Tjong, Kim Sang, and F De Meulder. Introduction to the CoNLL-2003 Shared Task: Language-IndependentNamed Entity Recognition. In Walter Daelemans and Miles Osborne (eds.), Proceedings of CoNLL-2003 , pp.142–147. Edmonton, Canada, 2003.Andrew Trask, Phil Michalak, and John Liu. sense2vec-a fast and accurate method for word sense disambiguationin neural word embeddings. arXiv preprint arXiv:1511.06388 , 2015.Joseph Turian, Lev Ratinov, and Yoshua Bengio. Word representations: a simple and general method forsemi-supervised learning. In Proceedings of the 48th annual meeting of the association for computationallinguistics , pp. 384–394. Association for Computational Linguistics, 2010.Laurens van der Maaten. 
Learning a parametric embedding by preserving local structure. In InternationalConference on Artificial Intelligence and Statistics , pp. 384–391, 2009.Laurens van der Maaten. Barnes-Hut-SNE. In Proceedings of the International Conference on LearningRepresentations , 2013.Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine LearningResearch , 9(2579-2605):85, 2008.12Under review as a conference paper at ICLR 2017APPENDIXFigure 8: MNIST digits visualized in two dimensions by isomap and a non-linear SimEc.Figure 9: 20 newsgroups dataset embedded with linear kernel PCA and a corresponding linear SimEc.Analogy task To show that the word embeddings created with context encoders capture meaningfulsemantic and syntactic relationships between words, we evaluated them on the original analogy taskpublished together with the word2vec model (Mikolov et al., 2013a).11This task consists of manyquestions in the form of “ man is to king aswoman is to XXX” where the model is supposed to findthe correct answer queen . This is accomplished by taking the word embedding for king, subtractingfrom it the embedding for man and then adding the embedding for woman . This new word vectorshould then be most similar (with respect to the cosine similarity) to the embedding for queen .12The word2vec and corresponding context encoder model are trained for ten iterations on the text8corpus,13which contains around 17 million words and a vocabulary of about 70k unique words, andthe training part of the 1-billion benchmark dataset,14which contains over 768 million wordswith a vocabulary of 486k unique words.15The results of the analogy task are shown in Table 1. To capture some of the semantic relationsbetween words (e.g. the first four task categories) it can be advantageous to use context encoders, i.e.to weight the word2vec embeddings with the words’ average context vectors - however to achieve thebest results we also had to include the target word itself in these context vectors. One reason for theConEcs’ superior performance on some of the task categories but not others might be that the cityand country names compared in the first four task categories only have a single sense (referring to the11See also https://code.google.com/archive/p/word2vec/ .12Readers familiar with Levy et al. (2015) will recognize this as the 3CosAdd method. We have tried 3CosMulas well, but found that the results did not improve significantly and therefore omitted them here.13http://mattmahoney.net/dc/text8.zip14http://code.google.com/p/1-billion-word-language-modeling-benchmark/15In this experiment we ignore all words which occur less than 5 times in the training corpus.13Under review as a conference paper at ICLR 2017Table 1: Accuracy on the analogy task with mean and standard deviation computed using threerandom seeds when initializing the word2vec model. 
The best results for each category and corpus are in bold.

                              text8 (10 iter)                     1-billion
                              word2vec        Context Encoder     word2vec        ConEc
capital-common-countries      63.8 ± 4.7      78.7 ± 0.2          79.3 ± 2.2      83.1 ± 1.2
capital-world                 34.0 ± 2.1      54.7 ± 1.3          63.8 ± 1.4      75.9 ± 0.4
currency                      15.4 ± 0.9      19.3 ± 0.6          13.3 ± 3.6      14.8 ± 0.8
city-in-state                 28.6 ± 1.0      43.6 ± 0.9          19.6 ± 1.7      29.6 ± 1.0
family                        79.6 ± 1.5      77.2 ± 0.4          78.7 ± 2.2      79.0 ± 1.4
gram1-adjective-to-adverb     11.0 ± 0.9      16.6 ± 0.7          12.3 ± 0.5      13.3 ± 1.1
gram2-opposite                24.3 ± 3.0      24.3 ± 2.0          27.6 ± 0.1      21.3 ± 1.1
gram3-comparative             64.3 ± 0.5      63.0 ± 1.1          83.7 ± 0.9      76.2 ± 1.1
gram4-superlative             40.3 ± 2.1      37.6 ± 1.5          69.4 ± 0.5      56.2 ± 1.2
gram5-present-participle      30.5 ± 1.0      31.7 ± 0.4          78.4 ± 1.0      68.0 ± 0.7
gram6-nationality-adjective   70.6 ± 1.5      67.2 ± 1.4          83.8 ± 0.6      83.8 ± 0.5
gram7-past-tense              30.5 ± 1.8      33.0 ± 0.6          53.9 ± 0.9      49.2 ± 0.7
gram8-plural                  49.8 ± 0.3      49.2 ± 1.2          62.7 ± 1.9      56.7 ± 1.0
gram9-plural-verbs            41.0 ± 2.5      30.1 ± 1.9          68.7 ± 0.2      45.0 ± 0.4
total                         42.1 ± 0.6      46.5 ± 0.1          57.2 ± 0.3      55.8 ± 0.3

respective location), while the words asked for in other task categories can have multiple meanings, for example "run" is used as both a verb and a noun and in some contexts refers to the sport activity while other times it is used in a more abstract sense, e.g. in the context of someone running for president. Therefore, the results in the other task categories might improve if the words' context vectors are first clustered and then the ConEc embedding is generated by multiplying with the average of only those context vectors corresponding to the word sense most appropriate for the task category.
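For completeness, a small NumPy sketch of the 3CosAdd analogy evaluation described above, assuming a dictionary `emb` of unit-normalized word vectors; excluding the three query words from the candidate set follows common practice.

```python
import numpy as np

def analogy_3cosadd(a, b, c, emb):
    """Return the word maximizing cos(. , emb[b] - emb[a] + emb[c]), e.g. a='man', b='king', c='woman'."""
    query = emb[b] - emb[a] + emb[c]
    query /= np.linalg.norm(query)
    best_word, best_sim = None, -np.inf
    for word, vec in emb.items():
        if word in (a, b, c):                 # the query words themselves are excluded
            continue
        sim = float(vec @ query)              # vectors are assumed to be unit-normalized
        if sim > best_sim:
            best_word, best_sim = word, sim
    return best_word

# accuracy over a list of (a, b, c, d) analogy questions:
# correct = sum(analogy_3cosadd(a, b, c, emb) == d for a, b, c, d in questions)
# print(correct / len(questions))
```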
ryTDQsW4g
SkBsEQYll
ICLR.cc/2017/conference/-/paper85/official/review
{"title": "Standard feed-forward neural net with unconvincing experimental results", "rating": "3: Clear rejection", "review": "This paper introduces a similarity encoder based on a standard feed-forward neural network with the aim of generating similarity-preserving embeddings. The approach is utilized to generate a simple extension of the CBOW word2vec model that transforms the learned embeddings by their average context vectors. Experiments are performed on an analogy task and named entity recognition.\n\nWhile this paper offers some reasonable intuitive arguments for why a feed-forward neural network can generate good similarity-preserving embeddings, the architecture and approach is far from novel. As far as I can tell, the model is nothing more than the most vanilla neural network trained with SGD on similarity signals.\n\nSlightly more original is the idea to use context embeddings to augment the expressive capacity of learned word representations. Of course, using explicit contextual information is not a new idea, especially for tasks like word sense disambiguation (see, e.g., 'Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space' by Neelakantan et al, which should also be cited), but the specific method used here is original, as far as I know.\n\nThe evaluation of the method is far from convincing. The corpora used to train the embeddings are far smaller than would ever be used in practice for unsupervised or semi-supervised embedding learning. The performance on the analogy task says little about the benefit of this method for larger corpora, and, as the authors mentioned in the comments, they expect \"the gain will be less significant, as the global context statistics brought in by the ConEc can also be picked up by word2vec with more training.\"\n\nThe argument can be made (and the authors do claim) that extrinsic evaluations are more important for real-world applications, so it is good to see experiments on NER. However, again the embeddings were trained on a very small corpus and I am not convinced that the observed benefit will persist when trained on larger corpora.\n\nOverall, I believe this paper offers little novelty and weak experimental evidence supporting its claims. I cannot recommend it for acceptance.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning similarity preserving representations with neural similarity and context encoders
["Franziska Horn", "Klaus-Robert M\u00fcller"]
We introduce similarity encoders (SimEc), which learn similarity preserving representations by using a feed-forward neural network to map data into an embedding space where the original similarities can be approximated linearly. The model can easily compute representations for novel (out-of-sample) data points, even if the original pairwise similarities of the training set were generated by an unknown process such as human ratings. This is demonstrated by creating embeddings of both image and text data. Furthermore, the idea behind similarity encoders gives an intuitive explanation of the optimization strategy used by the continuous bag-of-words (CBOW) word2vec model trained with negative sampling. Based on this insight, we define context encoders (ConEc), which can improve the word embeddings created with word2vec by using the local context of words to create out-of-vocabulary embeddings and representations for words with multiple meanings. The benefit of this is illustrated by using these word embeddings as features in the CoNLL 2003 named entity recognition task.
["Natural language processing", "Unsupervised Learning", "Supervised Learning"]
https://openreview.net/forum?id=SkBsEQYll
https://openreview.net/pdf?id=SkBsEQYll
https://openreview.net/forum?id=SkBsEQYll&noteId=ryTDQsW4g
Under review as a conference paper at ICLR 2017LEARNING SIMILARITY PRESERVING REPRESENTA -TIONS WITH NEURAL SIMILARITY AND CONTEXT EN -CODERSFranziska Horn & Klaus-Robert MüllerMachine Learning GroupTechnische Universität BerlinBerlin, Germanyfranziska.horn@campus.tu-berlin.deklaus-robert.mueller@tu-berlin.deABSTRACTWe introduce similarity encoders (SimEc), which learn similarity preserving repre-sentations by using a feed-forward neural network to map data into an embeddingspace where the original similarities can be approximated linearly. The model caneasily compute representations for novel (out-of-sample) data points, even if theoriginal pairwise similarities of the training set were generated by an unknownprocess such as human ratings. This is demonstrated by creating embeddingsof both image and text data. Furthermore, the idea behind similarity encodersgives an intuitive explanation of the optimization strategy used by the continuousbag-of-words (CBOW) word2vec model trained with negative sampling. Basedon this insight, we define context encoders (ConEc), which can improve the wordembeddings created with word2vec by using the local context of words to createout-of-vocabulary embeddings and representations for words with multiple mean-ings. The benefit of this is illustrated by using these word embeddings as featuresin the CoNLL 2003 named entity recognition task.1 I NTRODUCTIONMany dimensionality reduction or manifold learning algorithms optimize for retaining the pairwisesimilarities, distances, or local neighborhoods of data points. Classical scaling (Cox & Cox, 2000),kernel PCA (Schölkopf et al., 1998), isomap (Tenenbaum et al., 2000), and LLE (Roweis & Saul,2000) achieve this by performing an eigendecomposition of some similarity matrix to obtain a lowdimensional representation of the original data. However, this is computationally expensive if a lotof training examples are available. Additionally, out-of-sample representations can only be createdwhen the similarities to the original training examples can be computed (Bengio et al., 2004).For some methods such as t-SNE (van der Maaten & Hinton, 2008), great effort was put into extendingthe algorithm to work with large datasets (van der Maaten, 2013) or to provide an explicit mappingfunction which can be applied to new data points (van der Maaten, 2009). Current attempts at findinga more general solution to these issues are complex and require the development of specific costfunctions and constraints when used in place of existing algorithms (Bunte et al., 2012), which limitstheir applicability to new objectives.In this paper we introduce a new neural network architecture, that we will denote as similarityencoder (SimEc), which is able to learn representations that can retain arbitrary pairwise relationspresent in the input space, even those obtained from unknown similarity functions such as humanratings. A SimEc can learn a linear or non-linear mapping function to project new data points into alower dimensional embedding space. Furthermore, it can take advantage of large datasets since theobjective function is optimized iteratively using stochastic mini-batch gradient descent. 
We show onboth image and text datasets that SimEcs can, on the one hand, recreate solutions found by traditionalmethods such as kPCA or isomap, and, on the other hand, obtain meaningful embeddings fromsimilarities based on human labels.1Under review as a conference paper at ICLR 2017Additionally, we propose the new context encoder (ConEc) model, a variation of similarity encodersfor learning word embeddings, which extends word2vec (Mikolov et al., 2013b) by using the localcontext of words as input to the neural network to create representations for out-of-vocabularywords and to distinguish between multiple meanings of words. This is shown to be advantageous,for example, if the word embeddings are used as features in a named entity recognition task asdemonstrated on the CoNLL 2003 challenge.2 S IMILARITY ENCODERSWe propose a novel dimensionality reduction framework termed similarity encoder (SimEc), whichcan be used to learn a linear or non-linear mapping function for computing low dimensional represen-tations of data points such that the original pairwise similarities between the data points in the inputspace are preserved in the embedding space. For this, we borrow the “bottleneck” neural network(NN) architecture idea from autoencoders (Tishby et al., 2000; Hinton & Salakhutdinov, 2006). Au-toencoders aim to transform the high dimensional data points into low dimensional embeddings suchthat most of the data’s variance is retained. Their network architecture has two parts: The first part ofthe network maps the data points from the original feature space to the low dimensional embedding(at the bottleneck). The second part of the NN mirrors the first part and projects the embeddingback to a high dimensional output. This output is then compared to the original input to computethe reconstruction error of the training samples, which is used in the backpropagation procedure totune the network’s parameters. After the training is complete, i.e. the low dimensional embeddingsencode enough information about the original input samples to allow for their reconstruction, thesecond part of the network is discarded and only the first part is used to project data points into thelow dimensional embedding space. Similarity encoders have a similar two fold architecture, wherein the first part of the network, the data is mapped to a low dimensional embedding, and then inthe second part (which is again only used during training), the embedding is transformed such thatthe error of the representation can be computed. However, since here the objective is to retain the(non-linear) pairwise similarities instead of the data’s variance, the second part of the NN does notmirror the first like it does in the autoencoder architecture.InputEmbedding(bottleneck)OutputTargetFeed ForwardNNxi2RDyi2Rdsi2RNs02RNW12Rd⇥N,Figure 1: Similarity encoder (SimEc) architecture.The similarity encoder architecture (Figure 1) uses as the first part of the network a flexible non-linearfeed-forward neural network to map the high dimensional input data points xi2RDto a lowdimensional embedding yi2Rd(at the bottleneck). As we make no assumptions on the range ofvalues the embedding can take, the last layer of the first part of the NN (i.e. the one resulting inthe embedding) is always linear. For example, with two additional non-linear hidden layers, theembedding would be computed asyi=1(0(xiW0)W1)W2;where0and1denote your choice of non-linear activation functions (e.g. 
tanh, sigmoid, or relu),but there is no non-linearity applied after multiplying with W2. The second part of the network then2Under review as a conference paper at ICLR 2017consists of a single additional layer with the weight matrix W12RdNto project the embeddingto the output, the approximated similarities s02RN:s0=1(yiW1):These approximated similarities are then compared to the target similarities (for one data point this isthe corresponding row si2RNof the similarity matrix S2RNNof theNtraining samples) andthe computed error is used to tune the network’s parameters with backpropagation.For the model to learn most efficiently, the exact form of the cost function to optimize as well asthe type of non-linearity 1applied when computing the network’s output should be chosen withrespect to the type of target similarities that the model is supposed to preserve. In the experimentalsection of the paper we are considering two application scenarios of SimEcs: a) to obtain the samelow dimensional embedding as found by spectral methods such as kPCA, and b) to embed data pointssuch that binary similarity relations obtained from human labels are preserved.In the first case (further discussed in the next section), we omit the non-linearity when computingthe output of the network, i.e. s0=yiW1, since the target similarities, computed by some kernelfunction, are not necessarily constrained to lie in a specific interval. As the cost function to minimizewe choose the mean squared error between the output (approximated similarities) and the original(target) similarities. A regularization term is added to encourage the weights of the last layer ( W1)to be orthogonal.1The model’s objective function optimized during training is therefore:min1NNXi=1ksis0k22+1d2dW1W>1diag(W1W>1)1wherekkpdenotes the respective p-norms for vectors and matrices and is a hyperparameter tocontrol the strength of the regularization.In the second case, the target similarities are binary and it therefore makes sense to use a non-linear activation function in the final layer when computing the output of the network to ensure theapproximated similarities are between 0and1as well:2s0=1(yiW1)with1(z) =11 +e10(z0:5):While the mean squared error between the target and approximated similarities would still be a naturalchoice of cost function to optimize, with the additional non-linearity in the output layer, learningmight be slow due to small gradients and we therefore instead optimize the cross-entropy:min1NX[siln(s0) + (1si) ln(1s0)]:For a different application scenario, yet another setup might lead to the best results. When usingSimEcs in practice, we recommend to first try the first setup, i.e. keeping the output layer linear andminimizing the mean squared error, as this often already gives quite good results.After the training is completed, only the first part of the neural network, which maps the input to theembedding, is used to create the representations of new data points. 
Depending on the complexity ofthe feed-forward NN, the mapping function learned by similarity encoders can be linear or non-linear,and because of the iterative optimization using stochastic mini-batch gradient descent, large amountsof data can be utilized to learn optimal representations.32.1 R ELATION TO KERNEL PCAKernel PCA (kPCA) is a popular non-linear dimensionality reduction algorithm, which performs theeigendecomposition of a kernel matrix to obtain low dimensional representations of the data points1To get embeddings similar to those obtained by kPCA, orthogonal weights in the last layer of the NN helpas they correspond to the orthogonal eigenvectors of the kernel matrix found by kPCA.2This scaled and shifted sigmoid function maps values between 0 and 1 almost linearly while thresholdingvalues outside this interval.3To speed up the training procedure and limit memory requirements for large datasets, the columns of thesimilarity matrix can also be subsampled (yielding S2RNn), i.e. the number of target similarities (and thedimensionality of the output layer) is n < N , however all Ntraining examples can still be used as input to trainthe network.3Under review as a conference paper at ICLR 2017(Schölkopf et al., 1998). However, if the kernel matrix is very large this becomes computationallyvery expensive. Additionally, there are constraints on possible kernel functions (should be positivesemi-definite) and new data points can only be embedded in the lower dimensional space if theirkernel map (i.e. the similarities to the original training points) can be computed. As we show below,SimEc can optimize the same objective as kPCA but addresses these shortcomings.The general idea is that both kPCA and SimEc embed the Ndata points in a feature space where thegiven target similarities can be approximated linearly (i.e. with the scalar product of the embeddingvectors). When the error between the approximated ( S0) and the target similarities ( S) is computed asthe mean squared error, kPCA finds the optimal approximation by performing the eigendecompositionof the (centered) target similarity matrix, i.e.S0=YY>;whereY2RNdis the low dimensional embedding of the data based on the eigenvectors belongingto thedlargest eigenvalues of S.In addition to the embedding itself, it is often desired to have a parametrized mapping function,which can be used to project new (out-of-sample) data points into the embedding space. If the targetsimilarity matrix is the linear kernel, i.e. S=XX>whereX2RNDis the given input data,this can easily be accomplished with traditional PCA. Here, the covariance matrix of the centeredinput data, i.e. C=X>Xis decomposed to obtain a matrix with parameters, ~W2RDd, based onthe eigenvectors belonging to the dlargest eigenvalues of the covariance matrix. Then the optimalembedding (i.e. the same solution obtained by linear kPCA) can be computed asY=X~W:This serves as a mapping function, with which new data points can be easily projected into the lowerdimensional embedding space.When using a similarity encoder to embed data in a low dimensional space where the linear similaritiesare preserved, the SimEc’s architecture would consist of a neural network with a single linear layer,i.e. 
the parameter matrix W0, to project the input data Xto the embedding Y=XW 0, and anothermatrixW12RdNused to approximate the similarities asS0=YW1:From these formulas one can immediately see the link between linear similarity encoders and PCA /linear kPCA: once the parameters of the neural network are tuned correctly, W0would correspondto the mapping matrix ~Wfound by PCA and W1could be interpreted as Y>, i.e.Ywould be thesame eigenvector based embedding as found with linear kPCA.Finding the corresponding function to map new data points into the embedding space is trivial forlinear kPCA, but this is not the case for other kernel functions. While it is still possible to findthe optimal embedding with kPCA for non-linear kernel functions, the mapping function remainsunknown and new data points can only be projected into the embedding space if we can computetheir kernel map, i.e. the similarities to the original training examples (Bengio et al., 2004). Someattempts were made to manually define an explicit mapping function to represent data points inthe kernel feature space, however this only works for specific kernels and there exists no generalsolution (Rahimi & Recht, 2007). As neural networks are universal function approximators, with theright architecture similarity encoders could instead learn arbitrary mapping functions for unknownsimilarities to arrive at data driven kernel learning solutions.2.2 M ODEL OVERVIEWThe properties of similarity encoders are summarized in the following. The objective of this dimen-sionality reduction approach is to retain pairwise similarities between data points in the embeddingspace. This is achieved by tuning the parameters of a neural network to obtain a linear or non-linearmapping (depending on the network’s architecture) from the high dimensional input to the lowdimensional embedding. Since the cost function is optimized using stochastic mini-batch gradientdescent, we can take advantage of large datasets for training. The embedding for new test points canbe easily computed with the explicit mapping function in the form of the tuned neural network. Andsince there is no need to compute the similarity of new test examples to the original training data forout-of-sample solutions (like with kPCA), the target similarities can be generated by an unknownprocess such as human similarity judgments.4Under review as a conference paper at ICLR 20172.3 E XPERIMENTSIn the following experiments we demonstrate that similarity encoders can, on the one hand, reach thesame solution as kPCA, and, on the other hand, generate meaningful embeddings from human labels.To illustrate that this is independent of the type of data, we present results obtained both on the wellknown MNIST handwritten digits dataset as well as the 20 newsgroups text corpus. Further details aswell as the code to replicate these experiments and more is available online.4We compare the embedding found with linear kPCA to that created with a linear similarity encoder(consisting of one linear layer mapping the input to the embedding and a second linear layer to projectthe embedding to the output, i.e. computing the approximated similarities). Additionally, we showthat a non-linear SimEc can approximate the solution found with isomap (i.e. the eigendecompositionof the geodesic distance matrix). 
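The correspondence between a linear SimEc and PCA / linear kPCA described in the previous section can also be checked numerically. The following NumPy snippet is illustrative only (random data stands in for real inputs) and is not part of the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
X = X - X.mean(axis=0)                    # center the data (centers the linear kernel)
d = 2

# linear kPCA: eigendecomposition of the linear kernel K = X X^T
K = X @ X.T
eigvals, eigvecs = np.linalg.eigh(K)      # eigenvalues in ascending order
idx = np.argsort(eigvals)[::-1][:d]
Y_kpca = eigvecs[:, idx] * np.sqrt(eigvals[idx])

# PCA: eigendecomposition of the covariance C = X^T X gives a mapping matrix W
evals_c, evecs_c = np.linalg.eigh(X.T @ X)
W = evecs_c[:, np.argsort(evals_c)[::-1][:d]]
Y_pca = X @ W                             # project the data with the explicit mapping

# the two embeddings agree up to the sign of each component
print(np.allclose(np.abs(Y_kpca), np.abs(Y_pca)))
```

A linear SimEc trained on the same centered kernel targets should converge to (a rotation of) this same embedding, with W0 playing the role of the PCA mapping matrix.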
We found that for optimal results the kernel matrix used as the targetsimilarity matrix for the SimEc should first be centered (as it is being done for kPCA as well (Mülleret al., 2001)).In a second step, we show that SimEcs can learn the mapping to a low dimensional embedding forarbitrary similarity functions and reliably create representations for new test samples without the needto compute their similarities to the original training examples, thereby going beyond the capabilitiesof kPCA. For both datasets we illustrate this by using the class labels assigned to the samples byhuman annotators to create the target similarity matrix for the training fold of the data, i.e. Sis1fordata points belonging to the same class and 0everywhere else. We compare the solutions found bySimEc architectures with a varying number of additional non-linear hidden layers in the first partof the network (while keeping the embedding layer linear as before) to show how a more complexnetwork improves the ability to map the data into an embedding space in which the class-basedsimilarities are retained.MNIST The MNIST dataset contains 2828pixel images depicting handwritten digits. For ourexperiments we randomly subsampled 10k images from all classes, of which 80% are assigned to thetraining fold and the remaining 20% to the test fold (in the following plots, data points belonging tothe training set are displayed transparently while the test points are opaque). As shown in Figure 2,the embeddings of the MNIST dataset created with linear kPCA and a linear similarity encoder,which uses as target similarities the linear kernel matrix, are almost identical (up to a rotation). Thesame holds true for the isomap embedding, which is well approximated by a non-linear SimEc withtwo hidden layers using the geodesic distances between the data points as targets (Figure 8 in theAppendix). When optimizing SimEcs to retain the class-based similarities (Figure 3), additionalFigure 2: MNIST digits visualized in two dimensions by linear kPCA and a linear SimEc.non-linear hidden layers in the feed-forward NN can improve the embedding by further separatingdata points belonging to different classes in tight clusters. As it can be seen, the test points (opaque)are nicely mapped into the same locations as the corresponding training points (transparent), i.e.the model learns to associate the input pixels with the class clusters only based on the imposedsimilarities between the training data points.4https://github.com/cod3licious/simec/examples_simec.ipynb5Under review as a conference paper at ICLR 2017Figure 3: MNIST digits visualized in two dimensions by SimEcs with an increasing number ofnon-linear hidden layers and the objective to retain similarities based on class membership.20 newsgroups The 20 newsgroups dataset consists of around 18k newsgroup posts assigned to20 different topics. We take a subset of seven categories and use the original train/test split ( 4.1kand2.7k samples respectively) and remove metadata such as headers to avoid overfitting.5All textdocuments are transformed into 46k dimensional tf-idf feature vectors, which are used as input tothe SimEc and to compute the linear kernel matrix of the training fold. The embedding created withlinear kPCA is again well approximated by the solution found with a corresponding linear SimEc(Figure 9 in the Appendix). 
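The two kinds of training targets used in these experiments are easy to construct; the following NumPy sketch (with `labels`, `X_train`, and `y_train` as assumed placeholders, not taken from the authors' code) shows the class-based similarity matrix and the kernel centering mentioned above:

```python
import numpy as np

def class_based_similarities(labels):
    """S[i, j] = 1 if samples i and j share the same class label, else 0."""
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(float)

def center_kernel(K):
    """Double-center a kernel matrix, as is done for kPCA."""
    row_means = K.mean(axis=0, keepdims=True)
    col_means = K.mean(axis=1, keepdims=True)
    return K - row_means - col_means + K.mean()

# class-based targets for the experiments with human (class) labels:
# S_train = class_based_similarities(y_train)
# centered linear-kernel targets for the kPCA comparison:
# K_centered = center_kernel(X_train @ X_train.T)
```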
Additionally, this serves as an example where traditional PCA is not anoption to obtain the corresponding mapping matrix for the linear kPCA solution, as due to the highdimensionality of the input data and comparatively low number of samples, the empirical covariancematrix would be poorly estimated and too large to decompose into eigenvalues and -vectors. Withthe objective to retain the class-based similarities, a SimEc with a non-linear hidden layer clustersdocuments by their topics (Figure 4).3 C ONTEXT ENCODERSRepresentation learning is very prominent in the field of natural language processing (NLP). Forexample, word embeddings learned by neural network language models were shown to improve theperformance when used as features for supervised learning tasks such as named entity recognition(NER) (Collobert et al., 2011; Turian et al., 2010). The popular word2vec model (Figure 5) learnsmeaningful word embeddings by considering only the words’ local contexts and thanks to its shallowarchitecture it can be trained very efficiently on large corpora. However, an important limiting factorof current word embedding models is that they only learn the representations for words from a fixedvocabulary. This means, if in a task we encounter a new word which was not present in the texts usedfor training, we can not create an embedding for this word without repeating the time consuming5http://scikit-learn.org/stable/datasets/twenty_newsgroups.html6Under review as a conference paper at ICLR 2017Figure 4: 20 newsgroups texts visualized in two dimensions by a non-linear SimEc with one hiddenlayer and the objective to preserve the similarities based on class membership in the embedding.training procedure of the model.6Additionally, word2vec, like many other approaches, only learnsa single representation for every word. However, it is often the case that a single word can havemultiple meanings, e.g. “Washington” is both the name of a US state as well as a former president. Itis only the local context in which these words appear that lets humans resolve this ambiguity andidentify the proper sense of the word in question. While attempts were made to improve this, theylack flexibility as they require a clustering of word contexts beforehand (Huang et al., 2012), whichstill does not guarantee that all possible meanings of a word have been identified prior in the trainingdocuments. Other approaches require additional labels such part-of-speech tags (Trask et al., 2015)or other lexical resources like WordNet (Rothe & Schütze, 2015) to create word embeddings whichdistinguish between the different senses of a word.As a further contribution of this paper we provide a link between the successful word2vec naturallanguage model and similarity encoders and thereby propose a new model we call context encoder(ConEc), which can efficiently learn word embeddings from huge amounts of training data andadditionally make use of local contexts to create representations for out-of-vocabulary words andhelp distinguish between multiple meanings of words.target wordThe black cat slept on the bed. 
context wordsAfter trainingtarget embedding2R1⇥dTraining phaseW1W0W0N⇥dN⇥dl02R1⇥d1) take sum of context embeddings2) select target and k noise weights (negative sampling)N⇥dl12R(k+1)⇥d3) compute error & backpropagateerr =t(l0·lT1)(z)=11+ezwith:t: binary label vectorFigure 5: Continuous BOW word2vec model trained using negative sampling (Mikolov et al., 2013a;b;Goldberg & Levy, 2014).6In practice these models are trained on such a large vocabulary that it is rare to encounter a word whichdoes not have an embedding. However, there are still scenarios where this is the case, for example, it is unlikelythat the term “W10281545” is encountered in a regular training corpus, but we might still want its embedding torepresent a search query like “whirlpool W10281545 ice maker part”.7Under review as a conference paper at ICLR 2017Formally, word embeddings are d-dimensional vector representations learned for all Nwords inthe vocabulary. Word2vec is a shallow model with parameter matrices W0;W 12RNd, which aretuned iteratively by scanning huge amounts of texts sentence by sentence (see Figure 5). Based onsome context words the algorithm tries to predict the target word between them. Mathematicallythis is realized by first computing the sum of the embeddings of the context words by selecting theappropriate rows from W0. This vector is then multiplied by several rows selected from W1: one ofthese rows corresponds to the target word, while the others correspond to k‘noise’ words, selected atrandom (negative sampling). After applying a non-linear activation function, the backpropagationerror is computed by comparing this output to a label vector t2Rk+1, which is 1 at the position ofthe target word and 0 for all knoise words. After the training of the model is complete, the wordembedding for a target word is the corresponding row of W0.The main principle utilized when learning word embeddings is that similar words appear in similarcontexts (Harris, 1954; Melamud et al., 2015). Therefore, in theory one could compute the similaritiesbetween all words by checking how many context words any two words generally have in common(possibly weighted somehow to reduce the influence of frequent words such as ‘the’ and ‘and’).However, such a word similarity matrix would be very large, as typically the vocabulary for whichword embeddings are learned comprises several 10;000words, making it computationally tooexpensive to be used with similarity encoders. But this matrix would also be quite sparse, becausemany words in fact do not occur in similar contexts and most words only have a handful of synonymswhich could be used in their place. Therefore, we can view the negative sampling approach usedfor word2vec (Mikolov et al., 2013b) as an approximation of the words’ context based similarities:while the similarity of a word to itself is 1, if for one word we select krandom words out of the hugevocabulary, it is very unlikely that they are similar to the target word, i.e. we can approximate theirsimilarities with 0. This is the main insight necessary for adapting similarity encoders to be used forlearning (context sensitive) word embeddings.InputEmbeddingOutputTargetxi2RNyi2Rdsi2Rk+1s02Rk+1theblacksleptoncatFigure 6: Context encoder (ConEc) architecture. The input consists of a context vector, but instead ofcomparing the output to a full similarity vector, only the target word and knoise words are considered.Figure 6 shows the architecture of the context encoder. 
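To make the negative-sampling step just described concrete, here is a schematic NumPy sketch of one CBOW/ConEc update; it is not the authors' implementation, and the learning rate, the value of k, the plain SGD update, and the handling of duplicate indices are illustrative simplifications:

```python
import numpy as np

def conec_train_step(context_idx, target_idx, W0, W1, k=13, lr=0.025, rng=None):
    """One CBOW step with negative sampling, viewed as approximating one row of the
    word similarity matrix at only k+1 positions.
    context_idx: vocabulary indices of the context words, target_idx: the target word.
    W0: (N, d) input weights, W1: (d, N) output weights."""
    rng = rng or np.random.default_rng()
    N, d = W0.shape
    # 1) bag-of-words context vector -> summed context embedding
    y = W0[context_idx].sum(axis=0)                       # shape (d,)
    # 2) select the target word and k random 'noise' words
    noise_idx = rng.integers(0, N, size=k)
    out_idx = np.concatenate(([target_idx], noise_idx))   # k+1 indices
    t = np.zeros(k + 1)
    t[0] = 1.0                                            # binary label vector
    # 3) sigmoid output, error, and a simple gradient step
    s = 1.0 / (1.0 + np.exp(-(W1[:, out_idx].T @ y)))     # approximated similarities
    err = t - s                                           # shape (k+1,)
    grad_y = W1[:, out_idx] @ err                         # backprop into the embedding
    W1[:, out_idx] += lr * np.outer(y, err)
    W0[context_idx] += lr * grad_y                        # shared update for context words
    return W0, W1
```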
For the training procedure we stick veryclosely to the optimization strategy used by word2vec: while parsing a document, we again selecta target word and its context words. As input to the context encoder network, we use a vectorxiof lengthN(i.e. the size of the vocabulary), which indicates the context words by non-zerovalues (either binary or e.g. giving lower weight to context words further away from the target word).This vector is then multiplied by a first matrix of weights W02RNdyielding a low dimensionalembeddingyi, comparable to the summed context embedding created as a first step when trainingthe word2vec model. This embedding is then multiplied by a second matrix W12RdNto yieldthe output. Instead of comparing this output vector to a whole row from a word similarity matrix(as we would with similarity encoders), only k+ 1entries are selected, namely those belonging to8Under review as a conference paper at ICLR 2017the target word as well as krandom and unrelated noise words. After applying a non-linearity wecompare these entries s02Rk+1to the binary target vector exactly as in the word2vec model anduse error backpropagation to tune the parameters.Up to now, there are no real differences between the word2vec model and our context encoders, wehave merely provided an intuitive interpretation of the training procedure and objective. The maindeviation from the word2vec model lies in the computation of the word embedding for a target wordafter the training is complete. In the case of word2vec, the word embedding is simply the row ofthe tunedW0matrix. However, when considering the idea behind the optimization procedure, weinstead propose to compute a target word’s representation by multiplying W0with the word’s averagecontext vector. This is closer to what is being done in the training procedure and additionally itenables us to compute the embeddings for out-of-vocabulary words (assuming at least most of sucha new word’s context words are in the vocabulary) as well as to place more emphasis on a word’slocal context (which helps to identify the proper meaning of the word (Melamud et al., 2015)) bycreating a weighted sum between the word’s average global and local context vectors used as input tothe ConEc.With this new perspective on the model and optimization procedure, another advancement is feasible.Since the context words are merely a sparse feature vector used as input to a neural network, thereis no reason why this input vector should not contain other features about the target word as well.For example, the feature vector could be extended to contain information about the word’s case,part-of-speech (POS) tag, or other relevant details. While this would increase the dimensionalityof the first weight matrix W0to include the additional features when mapping the input to theword’s embedding, the training objective and therefore also W1would remain unchanged. Theseadditional features could be especially helpful if details about the words would otherwise get lost inpreprocessing (e.g. by lowercasing) or to retain information about a word’s position in the sentence,which is ignored in a BOW approach. These extended ConEcs are expected to create embeddingswhich distinguish even better between the words’ different senses by taking into account, for example,if the word is used as a noun or verb in the current context, similar to the sense2vec algorithm (Trasket al., 2015). 
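The post-training step described here, multiplying W0 with a word's average context vector, could be sketched as follows. This is illustrative NumPy only: the function names are ours, and the max-scaling and unit-length normalization loosely follow the normalization described later in the NER section and may differ from the authors' implementation.

```python
import numpy as np

def average_context_vector(occurrences, vocab_size):
    """Average context vector of a word, built from a list of occurrences, each given
    as an array of vocabulary indices of the surrounding words."""
    cv = np.zeros(vocab_size)
    for context_idx in occurrences:
        np.add.at(cv, context_idx, 1.0)    # count how often each word appears as context
    if cv.max() > 0:
        cv /= cv.max()                     # scale by the maximum count
    return cv

def conec_embedding(context_vec, W0):
    """ConEc word embedding: multiply the (sparse) average context vector with the
    word2vec input matrix W0 (N x d) instead of just looking up a row of W0."""
    emb = context_vec @ W0
    norm = np.linalg.norm(emb)
    return emb / norm if norm > 0 else emb

# an out-of-vocabulary word gets an embedding purely from its local context:
# emb_oov = conec_embedding(average_context_vector(local_occurrences, N), W0)
```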
However, unlike sense2vec, not multiple embeddings per term are learned, instead thedimensionality of the input vector is increased to include the POS tag of the current word as a feature.3.1 E XPERIMENTSThe word embeddings learned with word2vec and context encoders are evaluated on a word analogytask (Mikolov et al., 2013a) as well as the CoNLL 2003 NER benchmark task (Tjong et al., 2003).The word2vec model used is a continuous BOW model trained with negative sampling as describedabove where k= 13 , the embedding dimensionality dis200and we use a context window of 5. Theword embeddings created by the context encoders are build directly on top of the word2vec model bymultiplying the original embeddings ( W0) with the respective context vectors. Code to replicate theexperiments can be found online.7The results of the analogy task can be found in the Appendix.8Named Entity Recognition The main advantage of context encoders is that they can use localcontext to create out-of-vocabulary (OOV) embeddings and distinguish between the different sensesof words. The effects of this are most prominent in a task such as named entity recognition (NER)where the local context of a word can make all the difference, e.g. to distinguish between the“Chicago Bears” (an organization) and the city of Chicago (a location). To test this, we used theword embeddings as features in the CoNLL 2003 NER benchmark task (Tjong et al., 2003). Theword2vec embeddings were trained on the documents used in the training part of the task.9For thecontext encoders we experimented with different combinations of local and global context vectors.The global context vectors were computed on only the training documents as well, i.e. just as with7https://github.com/cod3licious/conec8As it was recently demonstrated that a good performance on intrinsic evaluation tasks such as word similarityor analogy tasks does not necessarily transfer to extrinsic evaluation measures when using the word embeddingsas features (Chiu et al., 2016; Linzen, 2016), we consider the performance on the NER challenge as morerelevant.9Since this is a very small corpus, we trained word2vec for 25 iterations on these documents (afterwards theperformance on the development split stopped improving significantly) while usually the model is trained in asingle pass through a much larger corpus.9Under review as a conference paper at ICLR 2017the word2vec model, when applied to the test documents there are some words which don’t have aword embedding available as they did not occur in the training texts. The local context vectors on theother hand can be computed for all words occurring in the current document for which the modelshould identify the named entities. When combining these local context vectors with the global oneswe always use the local context vector as is in case there is no global vector available and otherwisecompute a weighted average between the two context vectors as wlCV local+ (1wl)CV global.10The different word embeddings were used as features with a logistic regression classifier trained onthe labels obtained from the training part of the task and the reported F1-scores were computed usingthe official evaluation script. 
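A rough sketch of how such embedding features could be fed to the classifier; the combination rule follows the weighted average w_l · CV_local + (1 − w_l) · CV_global described above, while `W0`, the context-vector dictionaries, and the data splits are assumed placeholders and not the authors' code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def combined_embedding(word, W0, global_cvs, local_cv, wl=0.4):
    """ConEc feature for one token: weighted average of its global and local context
    vectors (local context only if the word is out-of-vocabulary), times W0."""
    if word in global_cvs:                          # known word: mix global and local
        cv = wl * local_cv + (1.0 - wl) * global_cvs[word]
    else:                                           # OOV word: local context only
        cv = local_cv
    emb = cv @ W0
    norm = np.linalg.norm(emb)
    return emb / norm if norm > 0 else emb

# one embedding per token as features, BIO named-entity tags as labels:
# clf = LogisticRegression(max_iter=1000).fit(train_features, train_labels)
# predictions = clf.predict(test_features)
```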
Please note that we are using this task to show the potential of ConEc word embeddings as features in a real world task and to illustrate their advantages over the regular word2vec embeddings; we did not optimize for competitive performance on this NER challenge.

[Figure 7 plot: "NER performance with different word embedding features" — F1-score (%) on the train, dev, and test folds for the global-only embedding and for w_l = 0.0, 0.1, ..., 1.0; the annotations A, B, C mark the test-fold gains described in the caption.]

Figure 7: Results of the CoNLL 2003 NER task based on three random initializations of the word2vec model. The overall results are shown on the left, where the mean performance using word2vec embeddings is considered as our baseline indicated by the dashed lines; all other embeddings are computed with context encoders using various combinations of the words' global and local context vectors. On the right, the increased performance (mean and std) on the test fold achieved by using ConEc is highlighted: Enhancing the word2vec embeddings with global context information yields a performance gain of 2.5 percentage points (A). By additionally using local context vectors to create OOV word embeddings (w_l = 0) we gain another 1.7 points (B). When using a combination of global and local context vectors (w_l = 0.4) to distinguish between the different meanings of words, the F1-score increases by another 5.1 points (C), yielding an F1-score of 39.92%, which marks a significant improvement compared to the 30.59% reached with word2vec features.

Figure 7 shows the results achieved with various word embeddings on the training, development and test part of the CoNLL task. As can be seen there, taking into account the local context can yield large improvements, especially on the dev and test data. Context encoders using only the global context vectors already perform better than word2vec. When using the local context vectors only where the global ones are not available (w_l = 0) we can see a jump in the development and test performance, while of course the training performance stays the same, as here we have global context vectors for all words. The best performances on all folds are achieved when averaging the global and local context vectors with around w_l = 0.4 before multiplying them with the word2vec embeddings. This clearly shows that using ConEcs with local context vectors can be very beneficial, as they let us compute word embeddings for out-of-vocabulary words as well as help distinguish between multiple meanings of words.

¹⁰ The global context matrix is computed without taking the word itself into account (i.e. zero on the diagonal) to make the context vectors comparable to the local context vectors of OOV words, where we can't count the target word either. Both global and local context vectors are normalized by their respective maximum values, then multiplied with the length-normalized word2vec embeddings and again renormalized to have unit length.

4 CONCLUSION

Representing intrinsically complex data is a ubiquitous challenge in data analysis. While kernel methods and manifold learning have made very successful contributions, their ability to scale is somewhat limited. Neural autoencoders offer scalable nonlinear embeddings, but their objective is to minimize the reconstruction error of the input data, which does not necessarily preserve important pairwise relations between data points.
In this paper we have proposed SimEcs as a neural networkframework which bridges this gap by optimizing the same objective as spectral methods, such askPCA, for creating similarity preserving embeddings while retaining the favorable properties ofautoencoders.Similarity encoders are a novel method to learn similarity preserving embeddings and can be especiallyuseful when it is computationally infeasible to perform the eigendecomposition of a kernel matrix,when the target similarities are obtained through an unknown process such as human similarityjudgments, or when an explicit mapping function is required. To accomplish this, a feed-forwardneural network is constructed to map the data into an embedding space where the original similaritiescan be approximated linearly.As a second contribution we have defined context encoders, a practical extension of SimEcs, that canbe readily used to enhance the word2vec model with further local context information and global wordstatistics. Most importantly, ConEcs allow to easily create word embeddings for out-of-vocabularywords on the spot and distinguish between different meanings of a word based its local context.Finally, we have demonstrated the usefulness of SimEcs and ConEcs for practical tasks such as thevisualization of data from different domains and to create meaningful word embedding features for aNER task, going beyond the capabilities of traditional methods.Future work will aim to further the theoretical understanding of SimEcs and ConEcs and exploreother application scenarios where using this novel neural network architecture can be beneficial. Asit is often the case with neural network models, determining the optimal architecture as well as otherhyperparameter choices best suited for the task at hand can be difficult. While so far we mainlystudied SimEcs based on fairly simple feed-forward networks, it appears promising to consideralso deeper neural networks and possibly even more elaborate architectures, such as convolutionalnetworks, for the initial mapping step to the embedding space, as in this manner hierarchical structuresin complex data could be reflected. Note furthermore that prior knowledge as well as more generalerror functions could be employed to tailor the embedding to the desired application target(s).ACKNOWLEDGMENTSWe would like to thank Antje Relitz, Christoph Hartmann, Ivana Balaževi ́c, and other anonymousreviewers for their helpful comments on earlier versions of this manuscript. Additionally, FranziskaHorn acknowledges funding from the Elsa-Neumann scholarship from the TU Berlin.REFERENCESYoshua Bengio, Jean-François Paiement, Pascal Vincent, Olivier Delalleau, Nicolas Le Roux, and Marie Ouimet.Out-of-sample extensions for LLE, Isomap, MDS, Eigenmaps, and Spectral Clustering. Advances in neuralinformation processing systems , 16:177–184, 2004.Kerstin Bunte, Michael Biehl, and Barbara Hammer. A general framework for dimensionality-reducing datavisualization mapping. Neural Computation , 24(3):771–804, 2012.Billy Chiu, Anna Korhonen, and Sampo Pyysalo. Intrinsic evaluation of word vectors fails to predict extrinsicperformance. ACL 2016 , pp. 1, 2016.Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Naturallanguage processing (almost) from scratch. The Journal of Machine Learning Research , 12:2493–2537, 2011.Trevor F Cox and Michael AA Cox. Multidimensional scaling . CRC Press, 2000.Yoav Goldberg and Omer Levy. 
word2vec explained: Deriving Mikolov et al.’s negative-sampling word-embedding method. arXiv preprint arXiv:1402.3722 , 2014.Zellig S Harris. Distributional structure. Word , 10(2-3):146–162, 1954.11Under review as a conference paper at ICLR 2017Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks.Science , 313(5786):504–507, 2006.Eric H Huang, Richard Socher, Christopher D Manning, and Andrew Y Ng. Improving word representations viaglobal context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Associationfor Computational Linguistics: Long Papers-Volume 1 , pp. 873–882. ACL, 2012.Omer Levy, Yoav Goldberg, and Ido Dagan. Improving distributional similarity with lessons learned from wordembeddings. Transactions of the Association for Computational Linguistics , 3:211–225, 2015.Tal Linzen. Issues in evaluating semantic spaces using word analogies. arXiv preprint arXiv:1606.07736 , 2016.Oren Melamud, Ido Dagan, and Jacob Goldberger. Modeling word meaning in context with substitute vectors.InHuman Language Technologies: The 2015 Annual Conference of the North American Chapter of the ACL ,2015.Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations invector space. arXiv preprint arXiv:1301.3781 , 2013a.Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of wordsand phrases and their compositionality. In Advances in neural information processing systems , pp. 3111–3119,2013b.Klaus-Robert Müller, Sebastian Mika, Gunnar Rätsch, Koji Tsuda, and Bernhard Schölkopf. An introduction tokernel-based learning algorithms. Neural Networks, IEEE Transactions on , 12(2):181–201, 2001.Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global Vectors for Word Representa-tion. In EMNLP , volume 14, pp. 1532–1543, 2014.Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in neuralinformation processing systems , pp. 1177–1184, 2007.Sascha Rothe and Hinrich Schütze. Autoextend: Extending word embeddings to embeddings for synsets andlexemes. arXiv preprint arXiv:1507.01127 , 2015.Sam T Roweis and Lawrence K Saul. Nonlinear dimensionality reduction by locally linear embedding. Science ,290(5500):2323–2326, 2000.Bernhard Schölkopf, Alexander Smola, and Klaus-Robert Müller. Nonlinear component analysis as a kerneleigenvalue problem. Neural computation , 10(5):1299–1319, 1998.Joshua B Tenenbaum, Vin De Silva, and John C Langford. A global geometric framework for nonlineardimensionality reduction. Science , 290(5500):2319–2323, 2000.Naftali Tishby, Fernando C Pereira, and William Bialek. The information bottleneck method. arXiv preprintphysics/0004057 , 2000.EF Tjong, Kim Sang, and F De Meulder. Introduction to the CoNLL-2003 Shared Task: Language-IndependentNamed Entity Recognition. In Walter Daelemans and Miles Osborne (eds.), Proceedings of CoNLL-2003 , pp.142–147. Edmonton, Canada, 2003.Andrew Trask, Phil Michalak, and John Liu. sense2vec-a fast and accurate method for word sense disambiguationin neural word embeddings. arXiv preprint arXiv:1511.06388 , 2015.Joseph Turian, Lev Ratinov, and Yoshua Bengio. Word representations: a simple and general method forsemi-supervised learning. In Proceedings of the 48th annual meeting of the association for computationallinguistics , pp. 384–394. Association for Computational Linguistics, 2010.Laurens van der Maaten. 
Learning a parametric embedding by preserving local structure. In InternationalConference on Artificial Intelligence and Statistics , pp. 384–391, 2009.Laurens van der Maaten. Barnes-Hut-SNE. In Proceedings of the International Conference on LearningRepresentations , 2013.Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine LearningResearch , 9(2579-2605):85, 2008.12Under review as a conference paper at ICLR 2017APPENDIXFigure 8: MNIST digits visualized in two dimensions by isomap and a non-linear SimEc.Figure 9: 20 newsgroups dataset embedded with linear kernel PCA and a corresponding linear SimEc.Analogy task To show that the word embeddings created with context encoders capture meaningfulsemantic and syntactic relationships between words, we evaluated them on the original analogy taskpublished together with the word2vec model (Mikolov et al., 2013a).11This task consists of manyquestions in the form of “ man is to king aswoman is to XXX” where the model is supposed to findthe correct answer queen . This is accomplished by taking the word embedding for king, subtractingfrom it the embedding for man and then adding the embedding for woman . This new word vectorshould then be most similar (with respect to the cosine similarity) to the embedding for queen .12The word2vec and corresponding context encoder model are trained for ten iterations on the text8corpus,13which contains around 17 million words and a vocabulary of about 70k unique words, andthe training part of the 1-billion benchmark dataset,14which contains over 768 million wordswith a vocabulary of 486k unique words.15The results of the analogy task are shown in Table 1. To capture some of the semantic relationsbetween words (e.g. the first four task categories) it can be advantageous to use context encoders, i.e.to weight the word2vec embeddings with the words’ average context vectors - however to achieve thebest results we also had to include the target word itself in these context vectors. One reason for theConEcs’ superior performance on some of the task categories but not others might be that the cityand country names compared in the first four task categories only have a single sense (referring to the11See also https://code.google.com/archive/p/word2vec/ .12Readers familiar with Levy et al. (2015) will recognize this as the 3CosAdd method. We have tried 3CosMulas well, but found that the results did not improve significantly and therefore omitted them here.13http://mattmahoney.net/dc/text8.zip14http://code.google.com/p/1-billion-word-language-modeling-benchmark/15In this experiment we ignore all words which occur less than 5 times in the training corpus.13Under review as a conference paper at ICLR 2017Table 1: Accuracy on the analogy task with mean and standard deviation computed using threerandom seeds when initializing the word2vec model. 
The best results for each category and corpus are in bold.

Category                     | text8 (10 iter): word2vec | text8 (10 iter): ConEc | 1-billion: word2vec | 1-billion: ConEc
capital-common-countries     | 63.8 ± 4.7  | 78.7 ± 0.2  | 79.3 ± 2.2  | 83.1 ± 1.2
capital-world                | 34.0 ± 2.1  | 54.7 ± 1.3  | 63.8 ± 1.4  | 75.9 ± 0.4
currency                     | 15.4 ± 0.9  | 19.3 ± 0.6  | 13.3 ± 3.6  | 14.8 ± 0.8
city-in-state                | 28.6 ± 1.0  | 43.6 ± 0.9  | 19.6 ± 1.7  | 29.6 ± 1.0
family                       | 79.6 ± 1.5  | 77.2 ± 0.4  | 78.7 ± 2.2  | 79.0 ± 1.4
gram1-adjective-to-adverb    | 11.0 ± 0.9  | 16.6 ± 0.7  | 12.3 ± 0.5  | 13.3 ± 1.1
gram2-opposite               | 24.3 ± 3.0  | 24.3 ± 2.0  | 27.6 ± 0.1  | 21.3 ± 1.1
gram3-comparative            | 64.3 ± 0.5  | 63.0 ± 1.1  | 83.7 ± 0.9  | 76.2 ± 1.1
gram4-superlative            | 40.3 ± 2.1  | 37.6 ± 1.5  | 69.4 ± 0.5  | 56.2 ± 1.2
gram5-present-participle     | 30.5 ± 1.0  | 31.7 ± 0.4  | 78.4 ± 1.0  | 68.0 ± 0.7
gram6-nationality-adjective  | 70.6 ± 1.5  | 67.2 ± 1.4  | 83.8 ± 0.6  | 83.8 ± 0.5
gram7-past-tense             | 30.5 ± 1.8  | 33.0 ± 0.6  | 53.9 ± 0.9  | 49.2 ± 0.7
gram8-plural                 | 49.8 ± 0.3  | 49.2 ± 1.2  | 62.7 ± 1.9  | 56.7 ± 1.0
gram9-plural-verbs           | 41.0 ± 2.5  | 30.1 ± 1.9  | 68.7 ± 0.2  | 45.0 ± 0.4
total                        | 42.1 ± 0.6  | 46.5 ± 0.1  | 57.2 ± 0.3  | 55.8 ± 0.3

respective location), while the words asked for in other task categories can have multiple meanings; for example "run" is used as both a verb and a noun, and in some contexts refers to the sport activity while other times it is used in a more abstract sense, e.g. in the context of someone running for president. Therefore, the results in the other task categories might improve if the words' context vectors are first clustered and then the ConEc embedding is generated by multiplying with the average of only those context vectors corresponding to the word sense most appropriate for the task category.
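For completeness, a small sketch of the 3CosAdd analogy evaluation referenced above ("man is to king as woman is to ?"); it assumes a dict `embeddings` of unit-length word vectors and is not the authors' evaluation code:

```python
import numpy as np

def analogy_3cosadd(a, b, c, embeddings, topn=1):
    """Return the word(s) whose embedding is most cosine-similar to
    emb(b) - emb(a) + emb(c), e.g. a='man', b='king', c='woman' -> 'queen'."""
    query = embeddings[b] - embeddings[a] + embeddings[c]
    query = query / np.linalg.norm(query)
    # with unit-length embeddings the dot product equals the cosine similarity
    scores = {w: float(v @ query) for w, v in embeddings.items() if w not in (a, b, c)}
    return sorted(scores, key=scores.get, reverse=True)[:topn]
```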
SyOc8pS4e
SkBsEQYll
ICLR.cc/2017/conference/-/paper85/official/review
{"title": "Novelty claim is false, evaluation is partial", "rating": "3: Clear rejection", "review": "This paper presents a method for embedding data instances into a low-dimensional space that preserves some form of similarity.\n\nAlthough the paper presents this notion as new, basically every pre-trained embedding (be it auto-encoders or word2vec) has been doing the same: representing items in a low-dimensional space that inherently encodes their similarities. Even when looking at the specific case of word/context embeddings, the method is not novel either: this method is almost identical to one of the similarity functions presented in \"A Simple Word Embedding Model for Lexical Substitution\" (Melamud et al., 2015). The novelty claim must be more accurate and position itself with respect to existing work.\n\nIn addition, I think the evaluation could be done better. There are plenty of benchmarks for word embeddings in context, for example: \n* http://veceval.com/ (Nayak et al., RepEval 2016).\n* Lexical Substitution in Context\nAnd many higher-level tasks where word similarity in context could be a game-changer:\n* Semantic Text Similarity\n* Recognizing Textual Entailment / Natural Language Inference\nI was disappointed that none of these were even brought up.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning similarity preserving representations with neural similarity and context encoders
["Franziska Horn", "Klaus-Robert M\u00fcller"]
We introduce similarity encoders (SimEc), which learn similarity preserving representations by using a feed-forward neural network to map data into an embedding space where the original similarities can be approximated linearly. The model can easily compute representations for novel (out-of-sample) data points, even if the original pairwise similarities of the training set were generated by an unknown process such as human ratings. This is demonstrated by creating embeddings of both image and text data. Furthermore, the idea behind similarity encoders gives an intuitive explanation of the optimization strategy used by the continuous bag-of-words (CBOW) word2vec model trained with negative sampling. Based on this insight, we define context encoders (ConEc), which can improve the word embeddings created with word2vec by using the local context of words to create out-of-vocabulary embeddings and representations for words with multiple meanings. The benefit of this is illustrated by using these word embeddings as features in the CoNLL 2003 named entity recognition task.
["Natural language processing", "Unsupervised Learning", "Supervised Learning"]
https://openreview.net/forum?id=SkBsEQYll
https://openreview.net/pdf?id=SkBsEQYll
https://openreview.net/forum?id=SkBsEQYll&noteId=SyOc8pS4e
Under review as a conference paper at ICLR 2017LEARNING SIMILARITY PRESERVING REPRESENTA -TIONS WITH NEURAL SIMILARITY AND CONTEXT EN -CODERSFranziska Horn & Klaus-Robert MüllerMachine Learning GroupTechnische Universität BerlinBerlin, Germanyfranziska.horn@campus.tu-berlin.deklaus-robert.mueller@tu-berlin.deABSTRACTWe introduce similarity encoders (SimEc), which learn similarity preserving repre-sentations by using a feed-forward neural network to map data into an embeddingspace where the original similarities can be approximated linearly. The model caneasily compute representations for novel (out-of-sample) data points, even if theoriginal pairwise similarities of the training set were generated by an unknownprocess such as human ratings. This is demonstrated by creating embeddingsof both image and text data. Furthermore, the idea behind similarity encodersgives an intuitive explanation of the optimization strategy used by the continuousbag-of-words (CBOW) word2vec model trained with negative sampling. Basedon this insight, we define context encoders (ConEc), which can improve the wordembeddings created with word2vec by using the local context of words to createout-of-vocabulary embeddings and representations for words with multiple mean-ings. The benefit of this is illustrated by using these word embeddings as featuresin the CoNLL 2003 named entity recognition task.1 I NTRODUCTIONMany dimensionality reduction or manifold learning algorithms optimize for retaining the pairwisesimilarities, distances, or local neighborhoods of data points. Classical scaling (Cox & Cox, 2000),kernel PCA (Schölkopf et al., 1998), isomap (Tenenbaum et al., 2000), and LLE (Roweis & Saul,2000) achieve this by performing an eigendecomposition of some similarity matrix to obtain a lowdimensional representation of the original data. However, this is computationally expensive if a lotof training examples are available. Additionally, out-of-sample representations can only be createdwhen the similarities to the original training examples can be computed (Bengio et al., 2004).For some methods such as t-SNE (van der Maaten & Hinton, 2008), great effort was put into extendingthe algorithm to work with large datasets (van der Maaten, 2013) or to provide an explicit mappingfunction which can be applied to new data points (van der Maaten, 2009). Current attempts at findinga more general solution to these issues are complex and require the development of specific costfunctions and constraints when used in place of existing algorithms (Bunte et al., 2012), which limitstheir applicability to new objectives.In this paper we introduce a new neural network architecture, that we will denote as similarityencoder (SimEc), which is able to learn representations that can retain arbitrary pairwise relationspresent in the input space, even those obtained from unknown similarity functions such as humanratings. A SimEc can learn a linear or non-linear mapping function to project new data points into alower dimensional embedding space. Furthermore, it can take advantage of large datasets since theobjective function is optimized iteratively using stochastic mini-batch gradient descent. 
We show onboth image and text datasets that SimEcs can, on the one hand, recreate solutions found by traditionalmethods such as kPCA or isomap, and, on the other hand, obtain meaningful embeddings fromsimilarities based on human labels.1Under review as a conference paper at ICLR 2017Additionally, we propose the new context encoder (ConEc) model, a variation of similarity encodersfor learning word embeddings, which extends word2vec (Mikolov et al., 2013b) by using the localcontext of words as input to the neural network to create representations for out-of-vocabularywords and to distinguish between multiple meanings of words. This is shown to be advantageous,for example, if the word embeddings are used as features in a named entity recognition task asdemonstrated on the CoNLL 2003 challenge.2 S IMILARITY ENCODERSWe propose a novel dimensionality reduction framework termed similarity encoder (SimEc), whichcan be used to learn a linear or non-linear mapping function for computing low dimensional represen-tations of data points such that the original pairwise similarities between the data points in the inputspace are preserved in the embedding space. For this, we borrow the “bottleneck” neural network(NN) architecture idea from autoencoders (Tishby et al., 2000; Hinton & Salakhutdinov, 2006). Au-toencoders aim to transform the high dimensional data points into low dimensional embeddings suchthat most of the data’s variance is retained. Their network architecture has two parts: The first part ofthe network maps the data points from the original feature space to the low dimensional embedding(at the bottleneck). The second part of the NN mirrors the first part and projects the embeddingback to a high dimensional output. This output is then compared to the original input to computethe reconstruction error of the training samples, which is used in the backpropagation procedure totune the network’s parameters. After the training is complete, i.e. the low dimensional embeddingsencode enough information about the original input samples to allow for their reconstruction, thesecond part of the network is discarded and only the first part is used to project data points into thelow dimensional embedding space. Similarity encoders have a similar two fold architecture, wherein the first part of the network, the data is mapped to a low dimensional embedding, and then inthe second part (which is again only used during training), the embedding is transformed such thatthe error of the representation can be computed. However, since here the objective is to retain the(non-linear) pairwise similarities instead of the data’s variance, the second part of the NN does notmirror the first like it does in the autoencoder architecture.InputEmbedding(bottleneck)OutputTargetFeed ForwardNNxi2RDyi2Rdsi2RNs02RNW12Rd⇥N,Figure 1: Similarity encoder (SimEc) architecture.The similarity encoder architecture (Figure 1) uses as the first part of the network a flexible non-linearfeed-forward neural network to map the high dimensional input data points xi2RDto a lowdimensional embedding yi2Rd(at the bottleneck). As we make no assumptions on the range ofvalues the embedding can take, the last layer of the first part of the NN (i.e. the one resulting inthe embedding) is always linear. For example, with two additional non-linear hidden layers, theembedding would be computed asyi=1(0(xiW0)W1)W2;where0and1denote your choice of non-linear activation functions (e.g. 
tanh, sigmoid, or relu),but there is no non-linearity applied after multiplying with W2. The second part of the network then2Under review as a conference paper at ICLR 2017consists of a single additional layer with the weight matrix W12RdNto project the embeddingto the output, the approximated similarities s02RN:s0=1(yiW1):These approximated similarities are then compared to the target similarities (for one data point this isthe corresponding row si2RNof the similarity matrix S2RNNof theNtraining samples) andthe computed error is used to tune the network’s parameters with backpropagation.For the model to learn most efficiently, the exact form of the cost function to optimize as well asthe type of non-linearity 1applied when computing the network’s output should be chosen withrespect to the type of target similarities that the model is supposed to preserve. In the experimentalsection of the paper we are considering two application scenarios of SimEcs: a) to obtain the samelow dimensional embedding as found by spectral methods such as kPCA, and b) to embed data pointssuch that binary similarity relations obtained from human labels are preserved.In the first case (further discussed in the next section), we omit the non-linearity when computingthe output of the network, i.e. s0=yiW1, since the target similarities, computed by some kernelfunction, are not necessarily constrained to lie in a specific interval. As the cost function to minimizewe choose the mean squared error between the output (approximated similarities) and the original(target) similarities. A regularization term is added to encourage the weights of the last layer ( W1)to be orthogonal.1The model’s objective function optimized during training is therefore:min1NNXi=1ksis0k22+1d2dW1W>1diag(W1W>1)1wherekkpdenotes the respective p-norms for vectors and matrices and is a hyperparameter tocontrol the strength of the regularization.In the second case, the target similarities are binary and it therefore makes sense to use a non-linear activation function in the final layer when computing the output of the network to ensure theapproximated similarities are between 0and1as well:2s0=1(yiW1)with1(z) =11 +e10(z0:5):While the mean squared error between the target and approximated similarities would still be a naturalchoice of cost function to optimize, with the additional non-linearity in the output layer, learningmight be slow due to small gradients and we therefore instead optimize the cross-entropy:min1NX[siln(s0) + (1si) ln(1s0)]:For a different application scenario, yet another setup might lead to the best results. When usingSimEcs in practice, we recommend to first try the first setup, i.e. keeping the output layer linear andminimizing the mean squared error, as this often already gives quite good results.After the training is completed, only the first part of the neural network, which maps the input to theembedding, is used to create the representations of new data points. 
Depending on the complexity ofthe feed-forward NN, the mapping function learned by similarity encoders can be linear or non-linear,and because of the iterative optimization using stochastic mini-batch gradient descent, large amountsof data can be utilized to learn optimal representations.32.1 R ELATION TO KERNEL PCAKernel PCA (kPCA) is a popular non-linear dimensionality reduction algorithm, which performs theeigendecomposition of a kernel matrix to obtain low dimensional representations of the data points1To get embeddings similar to those obtained by kPCA, orthogonal weights in the last layer of the NN helpas they correspond to the orthogonal eigenvectors of the kernel matrix found by kPCA.2This scaled and shifted sigmoid function maps values between 0 and 1 almost linearly while thresholdingvalues outside this interval.3To speed up the training procedure and limit memory requirements for large datasets, the columns of thesimilarity matrix can also be subsampled (yielding S2RNn), i.e. the number of target similarities (and thedimensionality of the output layer) is n < N , however all Ntraining examples can still be used as input to trainthe network.3Under review as a conference paper at ICLR 2017(Schölkopf et al., 1998). However, if the kernel matrix is very large this becomes computationallyvery expensive. Additionally, there are constraints on possible kernel functions (should be positivesemi-definite) and new data points can only be embedded in the lower dimensional space if theirkernel map (i.e. the similarities to the original training points) can be computed. As we show below,SimEc can optimize the same objective as kPCA but addresses these shortcomings.The general idea is that both kPCA and SimEc embed the Ndata points in a feature space where thegiven target similarities can be approximated linearly (i.e. with the scalar product of the embeddingvectors). When the error between the approximated ( S0) and the target similarities ( S) is computed asthe mean squared error, kPCA finds the optimal approximation by performing the eigendecompositionof the (centered) target similarity matrix, i.e.S0=YY>;whereY2RNdis the low dimensional embedding of the data based on the eigenvectors belongingto thedlargest eigenvalues of S.In addition to the embedding itself, it is often desired to have a parametrized mapping function,which can be used to project new (out-of-sample) data points into the embedding space. If the targetsimilarity matrix is the linear kernel, i.e. S=XX>whereX2RNDis the given input data,this can easily be accomplished with traditional PCA. Here, the covariance matrix of the centeredinput data, i.e. C=X>Xis decomposed to obtain a matrix with parameters, ~W2RDd, based onthe eigenvectors belonging to the dlargest eigenvalues of the covariance matrix. Then the optimalembedding (i.e. the same solution obtained by linear kPCA) can be computed asY=X~W:This serves as a mapping function, with which new data points can be easily projected into the lowerdimensional embedding space.When using a similarity encoder to embed data in a low dimensional space where the linear similaritiesare preserved, the SimEc’s architecture would consist of a neural network with a single linear layer,i.e. 
the parameter matrix W0, to project the input data Xto the embedding Y=XW 0, and anothermatrixW12RdNused to approximate the similarities asS0=YW1:From these formulas one can immediately see the link between linear similarity encoders and PCA /linear kPCA: once the parameters of the neural network are tuned correctly, W0would correspondto the mapping matrix ~Wfound by PCA and W1could be interpreted as Y>, i.e.Ywould be thesame eigenvector based embedding as found with linear kPCA.Finding the corresponding function to map new data points into the embedding space is trivial forlinear kPCA, but this is not the case for other kernel functions. While it is still possible to findthe optimal embedding with kPCA for non-linear kernel functions, the mapping function remainsunknown and new data points can only be projected into the embedding space if we can computetheir kernel map, i.e. the similarities to the original training examples (Bengio et al., 2004). Someattempts were made to manually define an explicit mapping function to represent data points inthe kernel feature space, however this only works for specific kernels and there exists no generalsolution (Rahimi & Recht, 2007). As neural networks are universal function approximators, with theright architecture similarity encoders could instead learn arbitrary mapping functions for unknownsimilarities to arrive at data driven kernel learning solutions.2.2 M ODEL OVERVIEWThe properties of similarity encoders are summarized in the following. The objective of this dimen-sionality reduction approach is to retain pairwise similarities between data points in the embeddingspace. This is achieved by tuning the parameters of a neural network to obtain a linear or non-linearmapping (depending on the network’s architecture) from the high dimensional input to the lowdimensional embedding. Since the cost function is optimized using stochastic mini-batch gradientdescent, we can take advantage of large datasets for training. The embedding for new test points canbe easily computed with the explicit mapping function in the form of the tuned neural network. Andsince there is no need to compute the similarity of new test examples to the original training data forout-of-sample solutions (like with kPCA), the target similarities can be generated by an unknownprocess such as human similarity judgments.4Under review as a conference paper at ICLR 20172.3 E XPERIMENTSIn the following experiments we demonstrate that similarity encoders can, on the one hand, reach thesame solution as kPCA, and, on the other hand, generate meaningful embeddings from human labels.To illustrate that this is independent of the type of data, we present results obtained both on the wellknown MNIST handwritten digits dataset as well as the 20 newsgroups text corpus. Further details aswell as the code to replicate these experiments and more is available online.4We compare the embedding found with linear kPCA to that created with a linear similarity encoder(consisting of one linear layer mapping the input to the embedding and a second linear layer to projectthe embedding to the output, i.e. computing the approximated similarities). Additionally, we showthat a non-linear SimEc can approximate the solution found with isomap (i.e. the eigendecompositionof the geodesic distance matrix). 
We found that for optimal results the kernel matrix used as the targetsimilarity matrix for the SimEc should first be centered (as it is being done for kPCA as well (Mülleret al., 2001)).In a second step, we show that SimEcs can learn the mapping to a low dimensional embedding forarbitrary similarity functions and reliably create representations for new test samples without the needto compute their similarities to the original training examples, thereby going beyond the capabilitiesof kPCA. For both datasets we illustrate this by using the class labels assigned to the samples byhuman annotators to create the target similarity matrix for the training fold of the data, i.e. Sis1fordata points belonging to the same class and 0everywhere else. We compare the solutions found bySimEc architectures with a varying number of additional non-linear hidden layers in the first partof the network (while keeping the embedding layer linear as before) to show how a more complexnetwork improves the ability to map the data into an embedding space in which the class-basedsimilarities are retained.MNIST The MNIST dataset contains 2828pixel images depicting handwritten digits. For ourexperiments we randomly subsampled 10k images from all classes, of which 80% are assigned to thetraining fold and the remaining 20% to the test fold (in the following plots, data points belonging tothe training set are displayed transparently while the test points are opaque). As shown in Figure 2,the embeddings of the MNIST dataset created with linear kPCA and a linear similarity encoder,which uses as target similarities the linear kernel matrix, are almost identical (up to a rotation). Thesame holds true for the isomap embedding, which is well approximated by a non-linear SimEc withtwo hidden layers using the geodesic distances between the data points as targets (Figure 8 in theAppendix). When optimizing SimEcs to retain the class-based similarities (Figure 3), additionalFigure 2: MNIST digits visualized in two dimensions by linear kPCA and a linear SimEc.non-linear hidden layers in the feed-forward NN can improve the embedding by further separatingdata points belonging to different classes in tight clusters. As it can be seen, the test points (opaque)are nicely mapped into the same locations as the corresponding training points (transparent), i.e.the model learns to associate the input pixels with the class clusters only based on the imposedsimilarities between the training data points.4https://github.com/cod3licious/simec/examples_simec.ipynb5Under review as a conference paper at ICLR 2017Figure 3: MNIST digits visualized in two dimensions by SimEcs with an increasing number ofnon-linear hidden layers and the objective to retain similarities based on class membership.20 newsgroups The 20 newsgroups dataset consists of around 18k newsgroup posts assigned to20 different topics. We take a subset of seven categories and use the original train/test split ( 4.1kand2.7k samples respectively) and remove metadata such as headers to avoid overfitting.5All textdocuments are transformed into 46k dimensional tf-idf feature vectors, which are used as input tothe SimEc and to compute the linear kernel matrix of the training fold. The embedding created withlinear kPCA is again well approximated by the solution found with a corresponding linear SimEc(Figure 9 in the Appendix). 
Additionally, this serves as an example where traditional PCA is not anoption to obtain the corresponding mapping matrix for the linear kPCA solution, as due to the highdimensionality of the input data and comparatively low number of samples, the empirical covariancematrix would be poorly estimated and too large to decompose into eigenvalues and -vectors. Withthe objective to retain the class-based similarities, a SimEc with a non-linear hidden layer clustersdocuments by their topics (Figure 4).3 C ONTEXT ENCODERSRepresentation learning is very prominent in the field of natural language processing (NLP). Forexample, word embeddings learned by neural network language models were shown to improve theperformance when used as features for supervised learning tasks such as named entity recognition(NER) (Collobert et al., 2011; Turian et al., 2010). The popular word2vec model (Figure 5) learnsmeaningful word embeddings by considering only the words’ local contexts and thanks to its shallowarchitecture it can be trained very efficiently on large corpora. However, an important limiting factorof current word embedding models is that they only learn the representations for words from a fixedvocabulary. This means, if in a task we encounter a new word which was not present in the texts usedfor training, we can not create an embedding for this word without repeating the time consuming5http://scikit-learn.org/stable/datasets/twenty_newsgroups.html6Under review as a conference paper at ICLR 2017Figure 4: 20 newsgroups texts visualized in two dimensions by a non-linear SimEc with one hiddenlayer and the objective to preserve the similarities based on class membership in the embedding.training procedure of the model.6Additionally, word2vec, like many other approaches, only learnsa single representation for every word. However, it is often the case that a single word can havemultiple meanings, e.g. “Washington” is both the name of a US state as well as a former president. Itis only the local context in which these words appear that lets humans resolve this ambiguity andidentify the proper sense of the word in question. While attempts were made to improve this, theylack flexibility as they require a clustering of word contexts beforehand (Huang et al., 2012), whichstill does not guarantee that all possible meanings of a word have been identified prior in the trainingdocuments. Other approaches require additional labels such part-of-speech tags (Trask et al., 2015)or other lexical resources like WordNet (Rothe & Schütze, 2015) to create word embeddings whichdistinguish between the different senses of a word.As a further contribution of this paper we provide a link between the successful word2vec naturallanguage model and similarity encoders and thereby propose a new model we call context encoder(ConEc), which can efficiently learn word embeddings from huge amounts of training data andadditionally make use of local contexts to create representations for out-of-vocabulary words andhelp distinguish between multiple meanings of words.target wordThe black cat slept on the bed. 
context wordsAfter trainingtarget embedding2R1⇥dTraining phaseW1W0W0N⇥dN⇥dl02R1⇥d1) take sum of context embeddings2) select target and k noise weights (negative sampling)N⇥dl12R(k+1)⇥d3) compute error & backpropagateerr =t(l0·lT1)(z)=11+ezwith:t: binary label vectorFigure 5: Continuous BOW word2vec model trained using negative sampling (Mikolov et al., 2013a;b;Goldberg & Levy, 2014).6In practice these models are trained on such a large vocabulary that it is rare to encounter a word whichdoes not have an embedding. However, there are still scenarios where this is the case, for example, it is unlikelythat the term “W10281545” is encountered in a regular training corpus, but we might still want its embedding torepresent a search query like “whirlpool W10281545 ice maker part”.7Under review as a conference paper at ICLR 2017Formally, word embeddings are d-dimensional vector representations learned for all Nwords inthe vocabulary. Word2vec is a shallow model with parameter matrices W0;W 12RNd, which aretuned iteratively by scanning huge amounts of texts sentence by sentence (see Figure 5). Based onsome context words the algorithm tries to predict the target word between them. Mathematicallythis is realized by first computing the sum of the embeddings of the context words by selecting theappropriate rows from W0. This vector is then multiplied by several rows selected from W1: one ofthese rows corresponds to the target word, while the others correspond to k‘noise’ words, selected atrandom (negative sampling). After applying a non-linear activation function, the backpropagationerror is computed by comparing this output to a label vector t2Rk+1, which is 1 at the position ofthe target word and 0 for all knoise words. After the training of the model is complete, the wordembedding for a target word is the corresponding row of W0.The main principle utilized when learning word embeddings is that similar words appear in similarcontexts (Harris, 1954; Melamud et al., 2015). Therefore, in theory one could compute the similaritiesbetween all words by checking how many context words any two words generally have in common(possibly weighted somehow to reduce the influence of frequent words such as ‘the’ and ‘and’).However, such a word similarity matrix would be very large, as typically the vocabulary for whichword embeddings are learned comprises several 10;000words, making it computationally tooexpensive to be used with similarity encoders. But this matrix would also be quite sparse, becausemany words in fact do not occur in similar contexts and most words only have a handful of synonymswhich could be used in their place. Therefore, we can view the negative sampling approach usedfor word2vec (Mikolov et al., 2013b) as an approximation of the words’ context based similarities:while the similarity of a word to itself is 1, if for one word we select krandom words out of the hugevocabulary, it is very unlikely that they are similar to the target word, i.e. we can approximate theirsimilarities with 0. This is the main insight necessary for adapting similarity encoders to be used forlearning (context sensitive) word embeddings.InputEmbeddingOutputTargetxi2RNyi2Rdsi2Rk+1s02Rk+1theblacksleptoncatFigure 6: Context encoder (ConEc) architecture. The input consists of a context vector, but instead ofcomparing the output to a full similarity vector, only the target word and knoise words are considered.Figure 6 shows the architecture of the context encoder. 
For the training procedure we stick veryclosely to the optimization strategy used by word2vec: while parsing a document, we again selecta target word and its context words. As input to the context encoder network, we use a vectorxiof lengthN(i.e. the size of the vocabulary), which indicates the context words by non-zerovalues (either binary or e.g. giving lower weight to context words further away from the target word).This vector is then multiplied by a first matrix of weights W02RNdyielding a low dimensionalembeddingyi, comparable to the summed context embedding created as a first step when trainingthe word2vec model. This embedding is then multiplied by a second matrix W12RdNto yieldthe output. Instead of comparing this output vector to a whole row from a word similarity matrix(as we would with similarity encoders), only k+ 1entries are selected, namely those belonging to8Under review as a conference paper at ICLR 2017the target word as well as krandom and unrelated noise words. After applying a non-linearity wecompare these entries s02Rk+1to the binary target vector exactly as in the word2vec model anduse error backpropagation to tune the parameters.Up to now, there are no real differences between the word2vec model and our context encoders, wehave merely provided an intuitive interpretation of the training procedure and objective. The maindeviation from the word2vec model lies in the computation of the word embedding for a target wordafter the training is complete. In the case of word2vec, the word embedding is simply the row ofthe tunedW0matrix. However, when considering the idea behind the optimization procedure, weinstead propose to compute a target word’s representation by multiplying W0with the word’s averagecontext vector. This is closer to what is being done in the training procedure and additionally itenables us to compute the embeddings for out-of-vocabulary words (assuming at least most of sucha new word’s context words are in the vocabulary) as well as to place more emphasis on a word’slocal context (which helps to identify the proper meaning of the word (Melamud et al., 2015)) bycreating a weighted sum between the word’s average global and local context vectors used as input tothe ConEc.With this new perspective on the model and optimization procedure, another advancement is feasible.Since the context words are merely a sparse feature vector used as input to a neural network, thereis no reason why this input vector should not contain other features about the target word as well.For example, the feature vector could be extended to contain information about the word’s case,part-of-speech (POS) tag, or other relevant details. While this would increase the dimensionalityof the first weight matrix W0to include the additional features when mapping the input to theword’s embedding, the training objective and therefore also W1would remain unchanged. Theseadditional features could be especially helpful if details about the words would otherwise get lost inpreprocessing (e.g. by lowercasing) or to retain information about a word’s position in the sentence,which is ignored in a BOW approach. These extended ConEcs are expected to create embeddingswhich distinguish even better between the words’ different senses by taking into account, for example,if the word is used as a noun or verb in the current context, similar to the sense2vec algorithm (Trasket al., 2015). 
However, unlike sense2vec, not multiple embeddings per term are learned, instead thedimensionality of the input vector is increased to include the POS tag of the current word as a feature.3.1 E XPERIMENTSThe word embeddings learned with word2vec and context encoders are evaluated on a word analogytask (Mikolov et al., 2013a) as well as the CoNLL 2003 NER benchmark task (Tjong et al., 2003).The word2vec model used is a continuous BOW model trained with negative sampling as describedabove where k= 13 , the embedding dimensionality dis200and we use a context window of 5. Theword embeddings created by the context encoders are build directly on top of the word2vec model bymultiplying the original embeddings ( W0) with the respective context vectors. Code to replicate theexperiments can be found online.7The results of the analogy task can be found in the Appendix.8Named Entity Recognition The main advantage of context encoders is that they can use localcontext to create out-of-vocabulary (OOV) embeddings and distinguish between the different sensesof words. The effects of this are most prominent in a task such as named entity recognition (NER)where the local context of a word can make all the difference, e.g. to distinguish between the“Chicago Bears” (an organization) and the city of Chicago (a location). To test this, we used theword embeddings as features in the CoNLL 2003 NER benchmark task (Tjong et al., 2003). Theword2vec embeddings were trained on the documents used in the training part of the task.9For thecontext encoders we experimented with different combinations of local and global context vectors.The global context vectors were computed on only the training documents as well, i.e. just as with7https://github.com/cod3licious/conec8As it was recently demonstrated that a good performance on intrinsic evaluation tasks such as word similarityor analogy tasks does not necessarily transfer to extrinsic evaluation measures when using the word embeddingsas features (Chiu et al., 2016; Linzen, 2016), we consider the performance on the NER challenge as morerelevant.9Since this is a very small corpus, we trained word2vec for 25 iterations on these documents (afterwards theperformance on the development split stopped improving significantly) while usually the model is trained in asingle pass through a much larger corpus.9Under review as a conference paper at ICLR 2017the word2vec model, when applied to the test documents there are some words which don’t have aword embedding available as they did not occur in the training texts. The local context vectors on theother hand can be computed for all words occurring in the current document for which the modelshould identify the named entities. When combining these local context vectors with the global oneswe always use the local context vector as is in case there is no global vector available and otherwisecompute a weighted average between the two context vectors as wlCV local+ (1wl)CV global.10The different word embeddings were used as features with a logistic regression classifier trained onthe labels obtained from the training part of the task and the reported F1-scores were computed usingthe official evaluation script. 
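The following NumPy sketch illustrates how such a ConEc word feature could be assembled from a trained word2vec matrix W0 and a word's context vectors, following the description and footnote above. All names (W0, cv_local, cv_global, w_l) are placeholders, not the authors' API, and the exact normalization steps are an assumption based on the text.

```python
# Illustrative sketch: ConEc word embedding = (weighted) context vector times the
# length-normalized word2vec embedding matrix, renormalized to unit length.
import numpy as np

def conec_embedding(W0, cv_local, cv_global=None, w_l=0.4):
    """W0: (N, d) word2vec embeddings; cv_*: length-N context count vectors."""
    W = W0 / np.maximum(np.linalg.norm(W0, axis=1, keepdims=True), 1e-12)  # length-normalized rows
    local = cv_local / max(cv_local.max(), 1e-12)
    if cv_global is None:                  # out-of-vocabulary word: local context only
        cv = local
    else:
        cv = w_l * local + (1.0 - w_l) * cv_global / max(cv_global.max(), 1e-12)
    emb = cv @ W                           # weighted sum of context word embeddings
    return emb / max(np.linalg.norm(emb), 1e-12)
```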
Please note that we are using this task to show the potential of ConEcword embeddings as features in a real world task and to illustrate their advantages over the regularword2vec embeddings and did not optimize for competitive performance on this NER challenge.global wl=0.wl=0.1wl=0.2wl=0.3wl=0.4wl=0.5wl=0.6wl=0.7wl=0.8wl=0.9 wl=1.202530354045F1-Score [%]NER performance with different word embedding featurestraindevtestABCFigure 7: Results of the CoNLL 2003 NER task based on three random initializations of the word2vecmodel. The overall results are shown on the left, where the mean performance using word2vecembeddings is considered as our baseline indicated by the dashed lines, all other embeddings arecomputed with context encoders using various combinations of the words’ global and local contextvectors. On the right, the increased performance (mean and std) on the test fold achieved by usingConEc is highlighted: Enhancing the word2vec embeddings with global context information yields aperformance gain of 2:5percentage points (A). By additionally using local context vectors to createOOV word embeddings ( wl= 0) we gain another 1:7points (B). When using a combination ofglobal and local context vectors ( wl= 0:4) to distinguish between the different meanings of words,the F1-score increases by another 5:1points (C), yielding a F1-score of 39:92%, which marks asignificant improvement compared to the 30:59% reached with word2vec features.Figure 7 shows the results achieved with various word embeddings on the training, developmentand test part of the CoNLL task. As it can be seen there, taking into account the local context canyield large improvements, especially on the dev and test data. Context encoders using only the globalcontext vectors already perform better than word2vec. When using the local context vectors onlywhere the global ones are not available ( wl= 0) we can see a jump in the development and testperformance, while of course the training performance stays the same as here we have global contextvectors for all words. The best performances on all folds are achieved when averaging the global andlocal context vectors with around wl= 0:4before multiplying them with the word2vec embeddings.This clearly shows that using ConEcs with local context vectors can be very beneficial as they let uscompute word embeddings for out-of-vocabulary words as well as help distinguish between multiplemeanings of words.10The global context matrix is computed without taking the word itself into account (i.e. zero on the diagonal)to make the context vectors comparable to the local context vectors of OOV words where we can’t count thetarget word either. Both global and local context vectors are normalized by their respective maximum values,then multiplied with the length normalized word2vec embeddings and again renormalized to have unit length.10Under review as a conference paper at ICLR 20174 C ONCLUSIONRepresenting intrinsically complex data is an ubiquitous challenge in data analysis. While kernelmethods and manifold learning have made very successful contributions, their ability to scale issomewhat limited. Neural autoencoders offer scalable nonlinear embeddings, but their objective is tominimize the reconstruction error of the input data which does not necessarily preserve importantpairwise relations between data points. 
In this paper we have proposed SimEcs as a neural networkframework which bridges this gap by optimizing the same objective as spectral methods, such askPCA, for creating similarity preserving embeddings while retaining the favorable properties ofautoencoders.Similarity encoders are a novel method to learn similarity preserving embeddings and can be especiallyuseful when it is computationally infeasible to perform the eigendecomposition of a kernel matrix,when the target similarities are obtained through an unknown process such as human similarityjudgments, or when an explicit mapping function is required. To accomplish this, a feed-forwardneural network is constructed to map the data into an embedding space where the original similaritiescan be approximated linearly.As a second contribution we have defined context encoders, a practical extension of SimEcs, that canbe readily used to enhance the word2vec model with further local context information and global wordstatistics. Most importantly, ConEcs allow to easily create word embeddings for out-of-vocabularywords on the spot and distinguish between different meanings of a word based its local context.Finally, we have demonstrated the usefulness of SimEcs and ConEcs for practical tasks such as thevisualization of data from different domains and to create meaningful word embedding features for aNER task, going beyond the capabilities of traditional methods.Future work will aim to further the theoretical understanding of SimEcs and ConEcs and exploreother application scenarios where using this novel neural network architecture can be beneficial. Asit is often the case with neural network models, determining the optimal architecture as well as otherhyperparameter choices best suited for the task at hand can be difficult. While so far we mainlystudied SimEcs based on fairly simple feed-forward networks, it appears promising to consideralso deeper neural networks and possibly even more elaborate architectures, such as convolutionalnetworks, for the initial mapping step to the embedding space, as in this manner hierarchical structuresin complex data could be reflected. Note furthermore that prior knowledge as well as more generalerror functions could be employed to tailor the embedding to the desired application target(s).ACKNOWLEDGMENTSWe would like to thank Antje Relitz, Christoph Hartmann, Ivana Balaževi ́c, and other anonymousreviewers for their helpful comments on earlier versions of this manuscript. Additionally, FranziskaHorn acknowledges funding from the Elsa-Neumann scholarship from the TU Berlin.REFERENCESYoshua Bengio, Jean-François Paiement, Pascal Vincent, Olivier Delalleau, Nicolas Le Roux, and Marie Ouimet.Out-of-sample extensions for LLE, Isomap, MDS, Eigenmaps, and Spectral Clustering. Advances in neuralinformation processing systems , 16:177–184, 2004.Kerstin Bunte, Michael Biehl, and Barbara Hammer. A general framework for dimensionality-reducing datavisualization mapping. Neural Computation , 24(3):771–804, 2012.Billy Chiu, Anna Korhonen, and Sampo Pyysalo. Intrinsic evaluation of word vectors fails to predict extrinsicperformance. ACL 2016 , pp. 1, 2016.Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Naturallanguage processing (almost) from scratch. The Journal of Machine Learning Research , 12:2493–2537, 2011.Trevor F Cox and Michael AA Cox. Multidimensional scaling . CRC Press, 2000.Yoav Goldberg and Omer Levy. 
word2vec explained: Deriving Mikolov et al.’s negative-sampling word-embedding method. arXiv preprint arXiv:1402.3722 , 2014.Zellig S Harris. Distributional structure. Word , 10(2-3):146–162, 1954.11Under review as a conference paper at ICLR 2017Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks.Science , 313(5786):504–507, 2006.Eric H Huang, Richard Socher, Christopher D Manning, and Andrew Y Ng. Improving word representations viaglobal context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Associationfor Computational Linguistics: Long Papers-Volume 1 , pp. 873–882. ACL, 2012.Omer Levy, Yoav Goldberg, and Ido Dagan. Improving distributional similarity with lessons learned from wordembeddings. Transactions of the Association for Computational Linguistics , 3:211–225, 2015.Tal Linzen. Issues in evaluating semantic spaces using word analogies. arXiv preprint arXiv:1606.07736 , 2016.Oren Melamud, Ido Dagan, and Jacob Goldberger. Modeling word meaning in context with substitute vectors.InHuman Language Technologies: The 2015 Annual Conference of the North American Chapter of the ACL ,2015.Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations invector space. arXiv preprint arXiv:1301.3781 , 2013a.Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of wordsand phrases and their compositionality. In Advances in neural information processing systems , pp. 3111–3119,2013b.Klaus-Robert Müller, Sebastian Mika, Gunnar Rätsch, Koji Tsuda, and Bernhard Schölkopf. An introduction tokernel-based learning algorithms. Neural Networks, IEEE Transactions on , 12(2):181–201, 2001.Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global Vectors for Word Representa-tion. In EMNLP , volume 14, pp. 1532–1543, 2014.Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in neuralinformation processing systems , pp. 1177–1184, 2007.Sascha Rothe and Hinrich Schütze. Autoextend: Extending word embeddings to embeddings for synsets andlexemes. arXiv preprint arXiv:1507.01127 , 2015.Sam T Roweis and Lawrence K Saul. Nonlinear dimensionality reduction by locally linear embedding. Science ,290(5500):2323–2326, 2000.Bernhard Schölkopf, Alexander Smola, and Klaus-Robert Müller. Nonlinear component analysis as a kerneleigenvalue problem. Neural computation , 10(5):1299–1319, 1998.Joshua B Tenenbaum, Vin De Silva, and John C Langford. A global geometric framework for nonlineardimensionality reduction. Science , 290(5500):2319–2323, 2000.Naftali Tishby, Fernando C Pereira, and William Bialek. The information bottleneck method. arXiv preprintphysics/0004057 , 2000.EF Tjong, Kim Sang, and F De Meulder. Introduction to the CoNLL-2003 Shared Task: Language-IndependentNamed Entity Recognition. In Walter Daelemans and Miles Osborne (eds.), Proceedings of CoNLL-2003 , pp.142–147. Edmonton, Canada, 2003.Andrew Trask, Phil Michalak, and John Liu. sense2vec-a fast and accurate method for word sense disambiguationin neural word embeddings. arXiv preprint arXiv:1511.06388 , 2015.Joseph Turian, Lev Ratinov, and Yoshua Bengio. Word representations: a simple and general method forsemi-supervised learning. In Proceedings of the 48th annual meeting of the association for computationallinguistics , pp. 384–394. Association for Computational Linguistics, 2010.Laurens van der Maaten. 
Learning a parametric embedding by preserving local structure. In InternationalConference on Artificial Intelligence and Statistics , pp. 384–391, 2009.Laurens van der Maaten. Barnes-Hut-SNE. In Proceedings of the International Conference on LearningRepresentations , 2013.Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine LearningResearch , 9(2579-2605):85, 2008.12Under review as a conference paper at ICLR 2017APPENDIXFigure 8: MNIST digits visualized in two dimensions by isomap and a non-linear SimEc.Figure 9: 20 newsgroups dataset embedded with linear kernel PCA and a corresponding linear SimEc.Analogy task To show that the word embeddings created with context encoders capture meaningfulsemantic and syntactic relationships between words, we evaluated them on the original analogy taskpublished together with the word2vec model (Mikolov et al., 2013a).11This task consists of manyquestions in the form of “ man is to king aswoman is to XXX” where the model is supposed to findthe correct answer queen . This is accomplished by taking the word embedding for king, subtractingfrom it the embedding for man and then adding the embedding for woman . This new word vectorshould then be most similar (with respect to the cosine similarity) to the embedding for queen .12The word2vec and corresponding context encoder model are trained for ten iterations on the text8corpus,13which contains around 17 million words and a vocabulary of about 70k unique words, andthe training part of the 1-billion benchmark dataset,14which contains over 768 million wordswith a vocabulary of 486k unique words.15The results of the analogy task are shown in Table 1. To capture some of the semantic relationsbetween words (e.g. the first four task categories) it can be advantageous to use context encoders, i.e.to weight the word2vec embeddings with the words’ average context vectors - however to achieve thebest results we also had to include the target word itself in these context vectors. One reason for theConEcs’ superior performance on some of the task categories but not others might be that the cityand country names compared in the first four task categories only have a single sense (referring to the11See also https://code.google.com/archive/p/word2vec/ .12Readers familiar with Levy et al. (2015) will recognize this as the 3CosAdd method. We have tried 3CosMulas well, but found that the results did not improve significantly and therefore omitted them here.13http://mattmahoney.net/dc/text8.zip14http://code.google.com/p/1-billion-word-language-modeling-benchmark/15In this experiment we ignore all words which occur less than 5 times in the training corpus.13Under review as a conference paper at ICLR 2017Table 1: Accuracy on the analogy task with mean and standard deviation computed using threerandom seeds when initializing the word2vec model. 
The best results for each category and corpus are in bold.

                               text8 (10 iter)                      1-billion
                               word2vec      Context Encoder        word2vec      ConEc
capital-common-countries       63.8 ± 4.7    78.7 ± 0.2             79.3 ± 2.2    83.1 ± 1.2
capital-world                  34.0 ± 2.1    54.7 ± 1.3             63.8 ± 1.4    75.9 ± 0.4
currency                       15.4 ± 0.9    19.3 ± 0.6             13.3 ± 3.6    14.8 ± 0.8
city-in-state                  28.6 ± 1.0    43.6 ± 0.9             19.6 ± 1.7    29.6 ± 1.0
family                         79.6 ± 1.5    77.2 ± 0.4             78.7 ± 2.2    79.0 ± 1.4
gram1-adjective-to-adverb      11.0 ± 0.9    16.6 ± 0.7             12.3 ± 0.5    13.3 ± 1.1
gram2-opposite                 24.3 ± 3.0    24.3 ± 2.0             27.6 ± 0.1    21.3 ± 1.1
gram3-comparative              64.3 ± 0.5    63.0 ± 1.1             83.7 ± 0.9    76.2 ± 1.1
gram4-superlative              40.3 ± 2.1    37.6 ± 1.5             69.4 ± 0.5    56.2 ± 1.2
gram5-present-participle       30.5 ± 1.0    31.7 ± 0.4             78.4 ± 1.0    68.0 ± 0.7
gram6-nationality-adjective    70.6 ± 1.5    67.2 ± 1.4             83.8 ± 0.6    83.8 ± 0.5
gram7-past-tense               30.5 ± 1.8    33.0 ± 0.6             53.9 ± 0.9    49.2 ± 0.7
gram8-plural                   49.8 ± 0.3    49.2 ± 1.2             62.7 ± 1.9    56.7 ± 1.0
gram9-plural-verbs             41.0 ± 2.5    30.1 ± 1.9             68.7 ± 0.2    45.0 ± 0.4
total                          42.1 ± 0.6    46.5 ± 0.1             57.2 ± 0.3    55.8 ± 0.3

respective location), while the words asked for in other task categories can have multiple meanings, for example "run" is used as both a verb and a noun and in some contexts refers to the sport activity while other times it is used in a more abstract sense, e.g. in the context of someone running for president. Therefore, the results in the other task categories might improve if the words' context vectors are first clustered and then the ConEc embedding is generated by multiplying with the average of only those context vectors corresponding to the word sense most appropriate for the task category.
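As a reference for the analogy evaluation described above, the following is a minimal NumPy sketch of the 3CosAdd procedure; the embedding matrix `emb` (one row per word) and the list `vocab` are placeholders.

```python
# 3CosAdd analogy sketch: return the word x maximizing cos(x, b - a + c),
# excluding the three query words themselves.
import numpy as np

def analogy(emb, vocab, a, b, c):
    E = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    idx = {w: i for i, w in enumerate(vocab)}
    q = E[idx[b]] - E[idx[a]] + E[idx[c]]
    q = q / np.linalg.norm(q)
    scores = E @ q
    for w in (a, b, c):
        scores[idx[w]] = -np.inf
    return vocab[int(np.argmax(scores))]

# e.g. analogy(emb, vocab, "man", "king", "woman") is expected to return "queen"
```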
rkLxFcgNg
rk5upnsxe
ICLR.cc/2017/conference/-/paper584/official/review
{"title": "Overall, I feel it is good to refresh the community about local normalization schemes and other mechanism to favor unit competitions. The paper reads well and reports results on various setups, with sufficient discussion. ", "rating": "9: Top 15% of accepted papers, strong accept", "review": "*** Paper Summary ***\n\nThis paper proposes a unified view on normalization. The framework encompases layer normalization, batch normalization and local contrast normalization. It also suggests decorrelating the inputs through L1 regularization of the activations. Results are reported on three tasks: CIFAR classification, PTB Language models and super resolution on Berkeley dataset.\n\n*** Review Summary ***\n\nOverall, I feel it is good to refresh the community about local normalization schemes and other mechanism to favor unit competitions. The paper reads well and reports results on various setups, with sufficient discussion. \n\n*** Detailed Review ***\n\nThe paper is clear and reads well. It lacks a few reference to prior research. Also I am surprised that \"Local Contrast Normalization\" is not said anywhere, as it is a common terminology in the neural network and vision literature. \n\nIt is unclear to me why you chose to pair L1 regularization of the activation and normalization. They seem complementary. Would it make sense to apply L1 regularization to the baseline to highlight it is helpful on its own. Overall, it seems the only thing that brings a consistent improvement across all setups.\n\nOn related work, maybe it would be worthwhile to insist that Local Contrast Normalization (LCN) used to be very popular [Pinto et al, 2008, Jarret et al 2009, Sermanet et al 2012; Quoc Le 2013] and effective. It is great to connect this litterature to current work on layer normalization and batch normalization. Similarly, sparsity or group sparsity of the activation has shown effective in the past [Rozell et al 08, Kavukcuoglu et al 09] and need more exposure today.\n\nFinally, since dropout is so popular but interact poorly with normalizer estimates, I feel it would be worthwhile to report results with dropout beyond the baseline and discuss how the different normalization scheme interact with it.\n\nOverall, I feel it is good to refresh the community about local normalization schemes and other mechanism to favor unit competitions. The paper reads well and reports results on various setups, with sufficent discussion. \n\n*** References ***\n\nJarrett, Kevin, Koray Kavukcuoglu, and Yann Lecun. \"What is the best multi-stage architecture for object recognition?.\" 2009 IEEE 12th International Conference on Computer Vision. IEEE, 2009.\n\nPinto, N., Cox, D., DiCarlo, J.: Why is real-world visual object recognition hard?\nPLoS Comput Biol 4 (2008)\n\nLe, Quoc V. \"Building high-level features using large scale unsupervised learning.\" 2013 IEEE international conference on acoustics, speech and signal processing. IEEE, 2013.\n\nP. Sermanet, S. Chintala, and Y. LeCun. Convolutional neural networks applied to house\nnumbers digit classification. In ICPR, 2012.\n\nC. Rozell, D. Johnson, and B. Olshausen. Sparse coding via thresholding and local competition in neural circuits.Neural Computation, 2008.\n\nK. Kavukcuoglu, M. Ranzato, R. Fergus, and Y. LeCun. Learning invariant features through topographic filter maps. In CVPR, 2009.\n", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Normalizing the Normalizers: Comparing and Extending Network Normalization Schemes
["Mengye Ren", "Renjie Liao", "Raquel Urtasun", "Fabian H. Sinz", "Richard S. Zemel"]
Normalization techniques have only recently begun to be exploited in supervised learning tasks. Batch normalization exploits mini-batch statistics to normalize the activations. This was shown to speed up training and result in better models. However its success has been very limited when dealing with recurrent neural networks. On the other hand, layer normalization normalizes the activations across all activities within a layer. This was shown to work well in the recurrent setting. In this paper we propose a unified view of normalization techniques, as forms of divisive normalization, which includes layer and batch normalization as special cases. Our second contribution is the finding that a small modification to these normalization schemes, in conjunction with a sparse regularizer on the activations, leads to significant benefits over standard normalization techniques. We demonstrate the effectiveness of our unified divisive normalization framework in the context of convolutional neural nets and recurrent neural networks, showing improvements over baselines in image classification, language modeling as well as super-resolution.
["activations", "normalizers", "comparing", "normalization techniques", "batch normalization", "recurrent neural networks", "layer", "network normalization schemes", "network normalization", "supervised learning tasks"]
https://openreview.net/forum?id=rk5upnsxe
https://openreview.net/pdf?id=rk5upnsxe
https://openreview.net/forum?id=rk5upnsxe&noteId=rkLxFcgNg
Published as a conference paper at ICLR 2017NORMALIZING THE NORMALIZERS : COMPARING ANDEXTENDING NETWORK NORMALIZATION SCHEMESMengye Ren y, Renjie Liaoy, Raquel Urtasuny, Fabian H. Sinzz, Richard S. Zemely>yUniversity of Toronto, Toronto ON, CANADAzBaylor College of Medicine, Houston TX, USA>Canadian Institute for Advanced Research (CIFAR)fmren, rjliao, urtasun g@cs.toronto.edufabian.sinz@epagoge.de, zemel@cs.toronto.eduABSTRACTNormalization techniques have only recently begun to be exploited in supervisedlearning tasks. Batch normalization exploits mini-batch statistics to normalizethe activations. This was shown to speed up training and result in better models.However its success has been very limited when dealing with recurrent neuralnetworks. On the other hand, layer normalization normalizes the activationsacross all activities within a layer. This was shown to work well in the recurrentsetting. In this paper we propose a unified view of normalization techniques, asforms of divisive normalization, which includes layer and batch normalization asspecial cases. Our second contribution is the finding that a small modificationto these normalization schemes, in conjunction with a sparse regularizer on theactivations, leads to significant benefits over standard normalization techniques.We demonstrate the effectiveness of our unified divisive normalization frameworkin the context of convolutional neural nets and recurrent neural networks, showingimprovements over baselines in image classification, language modeling as well assuper-resolution.1 I NTRODUCTIONStandard deep neural networks are difficult to train. Even with non-saturating activation functionssuch as ReLUs (Krizhevsky et al., 2012), gradient vanishing or explosion can still occur, sincethe Jacobian gets multiplied by the input activation of every layer. In AlexNet (Krizhevsky et al.,2012), for instance, the intermediate activations can differ by several orders of magnitude. Tuninghyperparameters governing weight initialization, learning rates, and various forms of regularizationthus become crucial in optimizing performance.In current neural networks, normalization abounds. One technique that has rapidly become a standardis batch normalization (BN) in which the activations are normalized by the mean and standarddeviation of the training mini-batch (Ioffe & Szegedy, 2015). At inference time, the activations arenormalized by the mean and standard deviation of the full dataset. A more recent variant, layernormalization (LN), utilizes the combined activities of all units within a layer as the normalizer (Baet al., 2016). Both of these methods have been shown to ameliorate training difficulties caused bypoor initialization, and help gradient flow in deeper models.A less-explored form of normalization is divisive normalization (DN) (Heeger, 1992), in whicha neuron’s activity is normalized by its neighbors within a layer. 
This type of normalization isa well established canonical computation of the brain (Carandini & Heeger, 2012) and has beenextensively studied in computational neuroscience and natural image modelling (see Section 2).However, with few exceptions (Jarrett et al., 2009; Krizhevsky et al., 2012) it has received littleattention in conventional supervised deep learning.Here, we provide a unifying view of the different normalization approaches by characterizing themas the same transformation but along different dimensions of a tensor, including normalization acrossindicates equal contribution1Published as a conference paper at ICLR 2017examples, layers in the network, filters in a layer, or instances of a filter response. We explorethe effect of these varieties of normalizations in conjunction with regularization, on the predictionperformance compared to baseline models. The paper thus provides the first study of divisivenormalization in a range of neural network architectures, including convolutional neural networks(CNNs) and recurrent neural networks (RNNs), and tasks such as image classification, languagemodeling and image super-resolution. We find that DN can achieve results on par with BN in CNNnetworks and out-performs it in RNNs and super-resolution, without having to store batch statistics.We show that casting LN as a form of DN by incorporating a smoothing parameter leads to significantgains, in both CNNs and RNNs. We also find advantages in performance and stability by being ableto drive learning with higher learning rate in RNNs using DN. Finally, we demonstrate that adding anL1 regularizer on the activations before normalization is beneficial for all forms of normalization.2 R ELATED WORKIn this section we first review related work on normalization, followed by a brief description ofregularization in neural networks.2.1 N ORMALIZATIONNormalization of data prior to training has a long history in machine learning. For instance, localcontrast normalization used to be a standard effective tool in vision problems (Pinto et al., 2008;Jarrett et al., 2009; Sermanet et al., 2012; Le, 2013). However, until recently, normalization wasusually not part of the machine learning algorithm itself. Two notable exceptions are the originalAlexNet by Krizhevsky et al. (2012) which includes a divisive normalization step over a subset offeatures after ReLU at each pixel location, and the work by Jarrett et al. (2009) who demonstrated thata combination of nonlinearities, normalization and pooling improves object recognition in two-stagenetworks.Recently Ioffe & Szegedy (2015) demonstrated that standardizing the activations of the summedinputs of neurons over training batches can substantially decrease training time in deep neuralnetworks. To avoid covariate shift, where the weight gradients in one layer are highly dependenton previous layer outputs, Batch Normalization (BN) rescales the summed inputs according to theirvariances under the distribution of the mini-batch data. Specifically, if zj;ndenotes the activation ofa neuronjon example n, andB(n)denotes the mini-batch of examples that contains n, then BNcomputes an affine function of the activations standardized over each mini-batch:~zn;j=zn;jE[zj]q1jB(n)j(zn;jE[zj])2+E[zj] =1jB(n)jXm2B(n)zm;jHowever, training performance in Batch Normalization strongly depends on the quality of theaquired statistics and, therefore, the size of the mini-batch. 
Hence, Batch Normalization is harderto apply in cases for which the batch sizes are small, such as online learning or data parallelism.While classification networks can usually employ relatively larger mini-batches, other applicationssuch as image segmentation with convolutional nets use smaller batches and suffer from degradedperformance. Moreover, application to recurrent neural networks (RNNs) is not straightforward andleads to poor performance (Laurent et al., 2015).Several approaches have been proposed to make Batch Normalization applicable to RNNs. Cooijmanset al. (2016) and Liao & Poggio (2016) collect separate batch statistics for each time step. However,neither of this techniques address the problem of small batch sizes and it is unclear how to generalizethem to unseen time steps.More recently, Ba et al. (2016) proposed Layer Normalization (LN), where the activations arenormalized across all summed inputs within a layer instead of within a batch:~zn;j=zn;jE[zn]q1jL(j)j(zn;jE[zn])2+E[zn] =1jL(j)jXk2L(j)zn;kwhereL(j)contains all of the units in the same layer as j. While promising results have been shownon RNN benchmarks, direct application of layer normalization to convolutional layers often leads to2Published as a conference paper at ICLR 2017a degradation of performance. The authors hypothesize that since the statistics in convolutional layerscan vary quite a bit spatially, normalization with statistics from an entire layer might be suboptimal.Ulyanov et al. (2016) proposed to normalize each example on spatial dimensions but not on channeldimension, and was shown to be effective on image style transfer applications (Gatys et al., 2016).Liao et al. (2016a) proposed to accumulate the normalization statistics over the entire training phase,and showed that this can speed up training in recurrent and online learning without a deterioratingeffect on the performance. Since gradients cannot be backpropagated through this normalizationoperation, the authors use running statistics of the gradients instead.Exploring the normalization of weights instead of activations, Salimans & Kingma (2016) proposed areparametrization of the weights into a scale independent representation and demonstrated that thiscan speed up training time.Divisive Normalization (DN) on the other hand modulates the neural activity by the activity of a poolof neighboring neurons (Heeger, 1992; Bonds, 1989). DN is one of the most well studied and widelyfound transformations in real neural systems, and thus has been called a canonical computation ofthe brain (Carandini & Heeger, 2012). 
While the exact form of the transformation can differ, allformulations model the response of a neuron ~zjas a ratio between the acitivity in a summation fieldAj, and a norm-like function of the suppression field Bj~zj=Pzi2Ajuizi2+Pzk2Bjwkzpk1p; (1)wherefuigare the summation weights and fwkgthe suppression weights.Previous theoretical studies have outlined several potential computational roles for divisive normal-ization such as sensitivity maximization (Carandini & Heeger, 2012), invariant coding (Olsen et al.,2010), density modelling (Ball ́e et al., 2016), image compression (Malo et al., 2006), distributedneural representations (Simoncelli & Heeger, 1998), stimulus decoding (Ringach, 2009; Froudarakiset al., 2014), winner-take-all mechanisms (Busse et al., 2009), attention (Reynolds & Heeger, 2009),redundancy reduction (Schwartz & Simoncelli, 2001; Sinz & Bethge, 2008; Lyu & Simoncelli, 2008;Sinz & Bethge, 2013), marginalization in neural probabilistic population codes (Beck et al., 2011),and contextual modulations in neural populations and perception (Coen-Cagli et al., 2015; Schwartzet al., 2009).2.2 R EGULARIZATIONVarious regularization techniques have been applied to neural networks for the purpose of improvinggeneralization and reduce overfitting. They can be roughly divided into two categories, depending onwhether they regularize the weights or the activations.Regularization on Weights: The most common regularizer on weights is weight decay which justamounts to using the L2 norm squared of the weight vector. An L1 regularizer (Goodfellow et al.,2016) on the weights can also be adopted to push the learned weights to become sparse. Scardapaneet al. (2016) investigated mixed norms in order to promote group sparsity.Regularization on Activations: Sparsity or group sparsity regularizers on the activations haveshown to be effective in the past (Roz, 2008; Kavukcuoglu et al., 2009) and several regularizers havebeen proposed that act directly on the neural activations. Glorot et al. (2011) add a sparse regularizeron the activations after ReLU to encourage sparse representations. Dropout developed by Srivastavaet al. (2014) applies random masks to the activations in order to discourage them to co-adapt. DeCovproposed by Cogswell et al. (2015) tries to minimize the off-diagonal terms of the sample covariancematrix of activations, thus encouraging the activations to be as decorrelated as possible. Liao et al.(2016b) utilize a clustering-based regularizer to encourage the representations to be compact.3Published as a conference paper at ICLR 2017(a) Batch-Norm(b) Layer-Norm(c) Div-NormFigure 1: Illustration of different normalization schemes, in a CNN. Each HW-sized feature map is depictedas a rectangle; overlays depict instances in the set of Cfilters; and two examples from a mini-batch of size Nare shown, one above the other. The colors show the summation/suppression fields of each scheme.3 A U NIFIED FRAMEWORK FOR NORMALIZING NEURAL NETSWe first compare the three existing forms of normalization, and show that we can modify batchnormalization (BN) and layer normalization (LN) in small ways to make them have a form thatmatches divisive normalization (DN). We present a general formulation of normalization, whereexisting normalizations involve alternative schemes of accumulating information. 
Finally, we propose a regularization term that can be optimized jointly with these normalization schemes to encourage decorrelation and/or improve generalization performance.

3.1 GENERAL FORM OF NORMALIZATION

Without loss of generality, we denote the hidden input activation of one arbitrary layer in a deep neural network as z \in R^{N \times L}. Here N is the mini-batch size. In the case of a CNN, L = H \times W \times C, where H, W are the height and width of the convolutional feature map and C is the number of filters. For an RNN or fully-connected layers of a neural net, L is the number of hidden units. Different normalization methods gather statistics from different ranges of the tensor and then perform normalization. Consider the following general form:

    z_{n,j} = \sum_i w_{i,j} x_{n,i} + b_j                                (2)
    v_{n,j} = z_{n,j} - E_{A_{n,j}}[z]                                    (3)
    \tilde{z}_{n,j} = v_{n,j} / \sqrt{ \sigma^2 + E_{B_{n,j}}[v^2] }      (4)

where A_j and B_j are subsets of z and v respectively. A and B in standard divisive normalization are referred to as summation and suppression fields (Carandini & Heeger, 2012). One can cast each normalization scheme into this general formulation, where the schemes vary based on how they define these two fields. These definitions are specified in Table 1. Optional parameters \gamma and \beta can be added in the form of \gamma_j \tilde{z}_{n,j} + \beta_j to increase the degree of freedom.

Table 1: Different choices of the summation and suppression fields A and B, as well as the constant \sigma in the normalizer, lead to known normalization schemes in neural networks. d(i,j) denotes an arbitrary distance between two hidden units i and j, and R denotes the neighbourhood radius.

Model | Summation field A_{n,j}                    | Suppression field B_{n,j}                   | Normalizer bias
BN    | {z_{m,j} : m ∈ [1,N], j ∈ [1,H]×[1,W]}     | {v_{m,j} : m ∈ [1,N], j ∈ [1,H]×[1,W]}      | \sigma = 0
LN    | {z_{n,i} : i ∈ [1,L]}                      | {v_{n,i} : i ∈ [1,L]}                       | \sigma = 0
DN    | {z_{n,i} : d(i,j) ≤ R_A}                   | {v_{n,i} : d(i,j) ≤ R_B}                    | \sigma ≥ 0

Fig. 1 shows a visualization of the normalization field in a 4-D ConvNet tensor setting. Divisive normalization happens within a local spatial window of neurons across filter channels. Here we set d(·,·) to be the spatial L1 distance.

3.2 NEW MODEL COMPONENTS

Smoothing the Normalizers: One obvious way in which the normalization schemes differ is in terms of the information that they combine for normalizing the activations. A second, more subtle but important difference between standard BN and LN as opposed to DN is the smoothing term \sigma in the denominator of Eq. (1). This term allows some control of the bias of the variance estimation, effectively smoothing the estimate. This is beneficial because divisive normalization does not utilize information from the mini-batch as in BN, and combines information from a smaller field than LN. A similar but different denominator bias term max(\sigma, c) appears in (Jarrett et al., 2009), which is active when the activation variance is small. However, the clipping function makes the transformation not invertible, losing scale information.

[Figure 2: Divisive normalization followed by ReLU can be viewed as a new activation function. Left: Effect of varying \sigma in this activation function. Right: Two units affect each other's activation in the DN+ReLU formulation.]

Moreover, if we take the nonlinear activation function after normalization into consideration, we find that \sigma will change the overall properties of the non-linearity.
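To make Eqs. (3)-(4) and the field choices of Table 1 concrete, the following NumPy sketch implements the general form for a 2-D activation matrix: the batch-wise ("bn") and layer-wise ("ln") variants correspond to the table entries with \sigma = 0, while "dn" uses a 1-D neighbourhood of radius R, and varying sigma in any mode reproduces the modulation just described. This is an illustrative sketch, not the authors' code, and the optional \gamma, \beta parameters are omitted.

```python
# General normalization form of Eqs. (3)-(4) for z of shape (N, L) = (batch, units).
import numpy as np

def normalize(z, mode="dn", sigma=1.0, R=3):
    N, L = z.shape
    if mode == "bn":                         # statistics per unit, across the mini-batch
        v = z - z.mean(axis=0, keepdims=True)
        den = np.sqrt(sigma**2 + (v**2).mean(axis=0, keepdims=True))
    elif mode == "ln":                       # statistics per example, across the layer
        v = z - z.mean(axis=1, keepdims=True)
        den = np.sqrt(sigma**2 + (v**2).mean(axis=1, keepdims=True))
    else:                                    # "dn": local 1-D neighbourhood of each unit
        v = np.empty_like(z)
        den = np.empty_like(z)
        for j in range(L):                   # summation (centering) field A
            lo, hi = max(0, j - R), min(L, j + R + 1)
            v[:, j] = z[:, j] - z[:, lo:hi].mean(axis=1)
        for j in range(L):                   # suppression field B
            lo, hi = max(0, j - R), min(L, j + R + 1)
            den[:, j] = np.sqrt(sigma**2 + (v[:, lo:hi]**2).mean(axis=1))
    return v / den
```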
To illustrate this effect, we use a simple1-layer network which consists of: two input units, one divisive normalization operator, followed bya ReLU activation function. If we fix one input unit to be 0.5, varying the other one with differentvalues ofproduces different output curves (Fig. 2, left). These curves exhibit different non-linearproperties compared to the standard ReLU. Allowing the other input unit to vary as well results indifferent activation functions of the first unit depending on the activity of the second (Fig. 2, right).This illustrates potential benefits of including this smoothing term , as it effectively modulates therectified response to vary from a linear to a highly saturated response.In this paper we propose modifications of the standard BN and LN which borrow this additive term in the denominator from DN. We study the effect of incorporating this smoother in the respectivenormalization schemes below.L1 regularizer: Filter responses on lower layers in deep neural networks can be quite correlatedwhich might impair the estimate of the variance in the normalizer. More independent representationshelp disentangle latent factors and boost the networks performance (Higgins et al., 2016). Empirically,we found that putting a sparse (L1) regularizerLL1=1NLXn;jjvn;jj (5)on the centered activations vn;jhelps decorrelate the filter responses (Fig. 5). Here, Nis the batchsize andLis the number of hidden units, and LL1is the regularization loss which is added to thetraining loss.A possible explanation for this effect is that the L1 regularizer might have a similar effect as maximumlikelihood estimation of an independent Laplace distribution. To see that, let pv(v)/exp (kvk1)andx=W1v, withWa full rank invertible matrix. Under this model px(x) =pv(Wx)jdetWj.5Published as a conference paper at ICLR 2017Then, minimization of the L1 norm of the activations under the volume-conserving constraint detA=const. corresponds to maximum likelihood on that model, which would encourage decorrelatedresponses. We do not enforce such a constraint, and the filter matrix might even not be invertible.However, the supervised loss function of the network benefits from having diverse non-zero filters.This encourages the network to not collapse filters along the same direction or put them to zero, andmight act as a relaxation of the volume-conserving constraint.3.3 S UMMARY OF NEW MODELSDN and DN*: We propose DN as a new local normalization scheme in neural networks. Inconvolutional layers, it operates on a local spatial window across filter channels, and in fully connectedlayers it operates on a slice of a hidden state vector. Additionally, DN* has a L1 regularizer on thepre-normalization centered activation ( vn;j).BN-s and BN*: To compare with DN and DN*, we also propose modifications to original BN: wedenote BN-s with 2in the denominator’s square root, and BN* with the L1 regularizer on top ofBN-s.LN-s and LN*: We apply the same changes as from BN to BN-s and BN*. In order to narrow thedifferences in the normalization schemes down to a few parameter choices, we additionally removethe affine transformation parameters andfrom LN such that the difference between LN* andDN* is only the size of the normalization field. 
andcan really be seen as a separate layer and inpractice we find that they do not improve the performance in the presence of 2.4 E XPERIMENTSWe evaluate the normalization schemes on three different tasks:CNN image classification: We apply different normalizations on CNNs trained on theCIFAR-10/100 datasets for image recognition, each of which contains 50,000 trainingimages and 10,000 test images. Each image is of size 32 323 and has been labeled anobject class out of 10 or 100 total number of classes.RNN language modeling: We apply different normalizations on RNNs trained on thePenn Treebank dataset for language modeling, containing 42,068 training sentences, 3,370validation sentences, and 3,761 test sentences.CNN image super-resolution: We train a CNN on low resolution images and learn cascadesof non-linear filters to smooth the upsampled images. We report performance of trainedCNN on the standard Set 14 and Berkeley 200 dataset.For each model, we perform a grid search of three or four choices of each hyperparameter includingthe smoothing constant , and L1 regularization constant , and learning rate on the validation set.4.1 CIFAR E XPERIMENTSWe used the standard CNN model provided in the Caffe library. The architecture is summarized inTable 2. We apply normalization before each ReLU function. We implement DN as a convolutionaloperator, fixing the local window size to 55,33,33for the three convolutional layers in allthe CIFAR experiments.We set the learning rate to 1e-3 and momentum 0.9 for all experiments. The learning rate schedule isset tof5K, 30K, 50Kgfor the baseline model and to f30K, 50K, 80Kgfor all other models. At everystage we multiply the learning rate by 0.1. Weights are randomly initialized from a zero-mean normaldistribution with standard deviation f1e-4, 1e-2, 1e-2gfor the convolutional layers, and f1e-1, 1e-1gfor fully connected layers. Input images are centered on the dataset image mean.Table 3 summarizes the test performances of BN*, LN* and DN*, compared to the performanceof a few baseline models and the standard batch and layer normalizations. We also add standardregularizers to the baseline model: L2 weight decay (WD) and dropout. Adding the smoothingconstant and L1 regularization consistently improves the classification performance, especially for6Published as a conference paper at ICLR 2017Table 2: CIFAR CNN specificationType Size Kernel Strideinput 32323 - -conv +relu 323232 55332 1max pool 161632 33 2conv +relu 161632 553232 1avg pool 8832 33 2conv +relu 8864 553264 1avg pool 4464 33 2fully conn. linear 64 - -fully conn. linear 10or100 - -Table 3: CIFAR-10/100 experimentsModel CIFAR-10 Acc. CIFAR-100 Acc.Baseline 0.7565 0.4409Baseline +WD +Dropout 0.7795 0.4179BN 0.7807 0.4814LN 0.7211 0.4249BN* 0.8179 0.5156LN* 0.8091 0.4957DN* 0.8122 0.5066the original LN. The modification of LN makes it now better than the original BN, and only slightlyworse than BN*. DN* achieves comparable performance to BN* on both datasets, but only relyingon a local neighborhood of hidden units.0 10 20Sigma0.00.20.40.60.81.0|x|CIFAR-10051015202530Layer Number0 10 20Sigma0.00.10.20.30.4|x|CIFAR-100051015202530Layer NumberFigure 3: Input scale (jxj) vs. learnedat each layer, color coded by thelayer number in ResNet-32, trainedon CIFAR-10 (left), and CIFAR-100(right).ResNet Experiments. Residual networks (ResNet) (Heet al., 2016), a type of CNN with residual connections be-tween layers, achieve impressive performance on many imageclassification benchmarks. 
The original architecture uses BNby default. If we remove BN, the architecture is very difficultto train or converges to a poor solution. We first reproduced theoriginal BN ResNet-32, obtaining 92.6% accuracy on CIFAR-10, and 69.8% on CIFAR-100. Our best DN model achieves91.3% and 66.6%, respectively. While this performance islower than the original BN-ResNet, there is certainly room toimprove as we have not performed any hyperparameter opti-mization. Importantly, the beneficial effects of sigma (2.5%gain on CIFAR-100) and the L1 regularizer (0.5%) are stillfound, even in the presence of other regularization techniquessuch as data augmentation and weight decay in the training.Since the number of sigma hyperparameters scales with thenumber of layers, we found that setting sigma as a learnableparameter for each layer helps the performance (1.3% gain onCIFAR-100). Note that training this parameter is not possiblein the formulation by Jarrett et al. (2009). The learned sigmashows a clear trend: it tends to decrease with depth, and in thelast convolution layer it approaches 0 (see Fig. 3).4.2 RNN EXPERIMENTSTo apply divisive normalization in fully connected layers ofRNNs, we consider a local neighborhood in the hidden state vector hjR:j+R, whereRis the radius7Published as a conference paper at ICLR 2017Table 4: PTB Word-level language modeling experimentsModel LSTM TanH RNN ReLU RNNBaseline 115.720 149.357 147.630BN 123.245 148.052 164.977LN 119.247 154.324 149.128BN* 116.920 129.155 138.947LN* 101.725 129.823 116.609DN* 102.238 123.652 117.868of the neighborhood. Although the hidden states are randomly initialized, this structure will imposelocal competition among the neighbors.vj=zj12R+ 1RXr=Rzj+r (6)~zj=vjq2+12R+1PRr=Rv2j+r(7)We follow Cooijmans et al. (2016)’s batch normalization implementation for RNNs: normalizersare separate for input transformation and hidden transformation. Let BN(),LN(),DN()beBatchNorm, LayerNorm and DivNorm, and gbe either tanh or ReLU.ht+1=g(Wxxt+Whht1+b) (8)h(BN)t+1=g(BN(Wxxt+bx) +BN(Whh(BN)t1+bh)) (9)h(LN)t+1=g(LN(Wxxt+Whh(LN)t1+b)) (10)h(DN )t+1=g(DN(Wxxt+Whh(DN )t1+b)) (11)Note that in recurrent BN, the additional parameters andare shared across timesteps whereas themoving averages of batch statistics are not shared. For the LSTM version, we followed the releasedimplementation from the authors of layer normalization1, and apply LN at the same places as BN andBN*, which is after the linear transformation of WxxandWhhindividually. For LN* and DN, wemodified the places of normalization to be at each non-linearity, instead of jointly with a concatenatedvector for different non-linearity. We found that this modification improves the performance andmakes the formulation clearer since normalization is always a combined operation with the activationfunction. We include details of the LSTM implementation in the Appendix.The RNN model is provided by the Tensorflow library (Abadi et al., 2016) and the LSTM version wasoriginally proposed in Zaremba et al. (2014). We used a two-layer stack-RNN of size 400 (vanillaRNN) or 200 (LSTM). Ris set to 60 (vanilla RNN) and 30 (LSTM). We tried both tanh and ReLU asthe activation function for the vanilla RNN. For unnormalized baselines and BN+ReLU, the initiallearning rate is set to 0.1 and decays by half every epoch, starting at the 5th epoch for a maximum of13 epochs. For the other normalized models, the initial learning rate is set to 1.0 while the schedule iskept the same. 
Standard stochastic gradient descent is used in all RNN experiments, with gradientclipping at 5.0.Table 4 shows the test set perplexity for LSTM models and vanilla models. Perplexity is defined asppl= exp(Pxlogp(x)). We find that BN and LN alone do not improve the final performancerelative to the baseline, but similar to what we see in the CNN experiments, our modified versionsBN* and LN* show significant improvements. BN* on RNN is outperformed by both LN* and DN.By applying our normalization, we can improve the vanilla RNN perplexity by 20%, comparable toan LSTM baseline with the same hidden dimension.1https://github.com/ryankiros/layer-norm8Published as a conference paper at ICLR 2017Table 5: Average test results of PSNR and SSIM on Set14 Dataset.Model PSNR (x3) SSIM (x3) PSNR (x4) SSIM (x4)Bicubic 27.54 0.7733 26.01 0.7018A+ 29.13 0.8188 27.32 0.7491SRCNN 29.35 0.8212 27.53 0.7512BN 22.31 0.7530 21.40 0.6851DN* 29.38 0.8229 27.64 0.7562Table 6: Average test results of PSNR and SSIM on BSD200 Dataset.Model PSNR (x3) SSIM (x3) PSNR (x4) SSIM (x4)Bicubic 27.19 0.7636 25.92 0.6952A+ 27.05 0.7945 25.51 0.7171SRCNN 28.42 0.8100 26.87 0.7378BN 21.89 0.7553 21.53 0.6741DN* 28.44 0.8110 26.96 0.74284.3 S UPER RESOLUTION EXPERIMENTSWe also evaluate DN on the low-level computer vision problem of single image super-resolution.We adopt the SRCNN model of Dong et al. (2016) as the baseline which consists of 3 convolutionallayers and 2 ReLUs. From bottom to top layers, the sizes of the filters are 9, 5, and 52. The numberof filters are 64, 32, and 1, respectively. All the filters are initialized with zero-mean Gaussian andstandard deviation 1e-3. Then we respectively apply batch normalization (BN) and our divisivenormalization with L1 regularization (DN*) to the convolutional feature maps before ReLUs. Weconstruct the training set in a similar manner as Dong et al. (2016) by randomly cropping 5 millionpatches (size 3333) from a subset of the ImageNet dataset of Deng et al. (2009). We only train ourmodel for 4 million iterations which is less than the one adopted by SRCNN, i.e., 15 million, as thegain of PSNR and SSIM by spending that long time is marginal.We report the average test results, utilizing the standard metrics PSNR and SSIM (Wang et al., 2004),on two standard test datasets Set14 (Zeyde et al., 2010) and BSD200 (Martin et al., 2001). Wecompare with two state-of-the-art single image super-resolution methods, A+ (Timofte et al., 2013)and SRCNN (Dong et al., 2016). All measures are computed on the Y channel of YCbCr color space.We also provide a visual comparison in Fig. 4.As show in Tables 5 and 6 DN* outperforms the strong competitor SRCNN, while BN does notperform well on this task. The reason may be that BN applies the same statistics to all patches ofone image which causes some overall intensity shift (see Figs. 4). From the visual comparisons, wecan see that our method not only enhances the resolution but also removes artifacts, e.g., the ringingeffect in Fig. 4.4.4 A BLATION STUDIES AND DISCUSSIONFinally, we investigated the differential effects of the 2term and the L1 regularizer on the perfor-mance. We ran ablation studies on CIFAR-10/100 as well as PTB experiments. The results are listedin Table 7.We find that adding the smoothing term 2and the L1 regularization consistently increases theperformance of the models. In the convolutional networks, we find that L1 and both have similareffects on the performance. L1 seems to be slightly more important. 
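For reference, the perplexity reported in Table 4 above is the exponentiated average negative log-likelihood per target word; a minimal helper (the function name is illustrative) is:

```python
import numpy as np

def perplexity(token_log_probs):
    """ppl = exp(-(1/N) * sum_x log p(x)) over the N target words of the test set."""
    lp = np.asarray(token_log_probs, dtype=np.float64)
    return float(np.exp(-lp.mean()))

# perplexity(np.log([0.2, 0.1, 0.05]))  ->  10.0  (geometric mean of inverse probabilities)
```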
In recurrent networks, 2has amuch more dramatic effect on the performance than the L1 regularizer.Fig. 5 plots randomly sampled pairwise pre-normalization responses (after the linear transform)in the first layer at the same spatial location of the feature map, along with the average pair-wise2We use the setting of the best model out of all three SRCNN candidates.9Published as a conference paper at ICLR 2017PSNR 29.84dB PSNR 31.33dB PSNR 23.94dB PSNR 31.46dBPSNR 29.41dB PSNR 33.14dB PSNR 21.88dB PSNR 33.43dBPSNR 27.46dB(a) BicubicPSNR 30.12dB(b) SRCNNPSNR 23.91dB(c) BNPSNR 30.19dB(d) DN*Figure 4: Comparisons at a magnification factor of 4.correlation coefficient (Corr) and mutual information (MI). It is evident that both and L1 encouragesindependence of the learned linear filters.There are several factors that could explain the improvement in performance. As mentioned above,adding the L1 regularizer on the activations encourages the filter responses to be less correlated.This can increase the robustness of the variance estimate in the normalizer and lead to an improvedscaling of the responses to a good regime. Furthermore, adding the smoother to the denominatorin the normalizer can be seen as implicitly injecting zero mean noise on the activations. Whilenoise injection would not change the mean, it does add a term to the variance of the data, which isrepresented by 2. This term also makes the normalization equation invertible. While dividing bythe standard deviation decreases the degrees of freedom in the data, the smoothed normalizationequation is fully information preserving. Finally, DN type operations have been shown to decreasethe redundancy of filter responses to natural images and sound (Schwartz & Simoncelli, 2001; Sinz &Bethge, 2008; Lyu & Simoncelli, 2008). In combination with the L1 regularizer this could lead to amore independent representation of the data and thereby increase the performance of the network.5 C ONCLUSIONSWe have proposed a unified view of normalization techniques which contains batch and layernormalization as special cases. We have shown that when combined with a sparse regularizer onthe activations, our framework has significant benefits over standard normalization techniques. Wehave demonstrated this in the context of both convolutional neural nets as well as recurrent neuralnetworks. In the future we plan to explore other regularization techniques such as group sparsity. Wealso plan to conduct a more in-depth analysis of the effects of normalization on the correlations ofthe learned representations.10Published as a conference paper at ICLR 2017Table 7: Comparison of standard batch and layer normalation (BN and LN) models, to those with only L1regularizer (+L1), only the smoothing term (-s), and with both (*). We also compare divisive normalizationwith both (DN*), versus with only the smoothing term (DN).Model CIFAR-10 CIFAR-100 LSTM Tanh RNN ReLU RNNBaseline 0.7565 0.4409 115.720 149.357 147.630Baseline +L1 0.7839 0.4517 111.885 143.965 148.572BN 0.7807 0.4814 123.245 148.052 164.977BN +L1 0.8067 0.5100 123.736 152.777 166.658BN-s 0.8017 0.5005 123.243 131.719 139.159BN* 0.8179 0.5156 116.920 129.155 138.947LN 0.7211 0.4249 119.247 154.324 149.128LN +L1 0.7994 0.4990 116.964 152.100 147.937LN-s 0.8083 0.4863 102.492 133.812 118.786LN* 0.8091 0.4957 101.725 129.823 116.609DN 0.8058 0.4892 103.714 132.143 118.789DN* 0.8122 0.5066 102.238 123.652 117.868BaselineCorr. 0.19MI 0.37BNCorr. 0.43MI 1.20BN +L1Corr. 0.17MI 0.66BN-SCorr. 0.23MI 0.80BN*Corr. 
0.17MI 0.66LNCorr. 0.55MI 1.41LN +L1Corr. 0.17MI 0.67LN-SCorr. 0.20MI 0.74LN*Corr. 0.16MI 0.64DNCorr. 0.21MI 0.81DN*Corr. 0.20MI 0.73Figure 5: First layer CNN pre-normalized activation joint histogramAcknowledgements RL is supported by Connaught International Scholarships. FS would like tothank Edgar Y . Walker, Shuang Li, Andreas Tolias and Alex Ecker for helpful discussions. Supportedby the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/InteriorBusiness Center (DoI/IBC) contract number D16PC00003. The U.S. Government is authorized toreproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotationthereon. Disclaimer: The views and conclusions contained herein are those of the authors and shouldnot be interpreted as necessarily representing the official policies or endorsements, either expressedor implied, of IARPA, DoI/IBC, or the U.S. Government.11Published as a conference paper at ICLR 2017REFERENCESSparse coding via thresholding and local competition in neural circuits. Neural Computation , 20(10):2526–63, 2008. ISSN 08997667. doi: 10.1162/neco.2008.03-07-486.Abadi, Mart ́ın, Barham, Paul, Chen, Jianmin, Chen, Zhifeng, Davis, Andy, Dean, Jeffrey, Devin,Matthieu, Ghemawat, Sanjay, Irving, Geoffrey, Isard, Michael, Kudlur, Manjunath, Levenberg,Josh, Monga, Rajat, Moore, Sherry, Murray, Derek Gordon, Steiner, Benoit, Tucker, Paul A.,Vasudevan, Vijay, Warden, Pete, Wicke, Martin, Yu, Yuan, and Zhang, Xiaoqiang. Tensorflow: Asystem for large-scale machine learning. CoRR , abs/1605.08695, 2016.Ba, Jimmy Lei, Kiros, Jamie Ryan, and Hinton, Geoffrey E. Layer normalization. CoRR ,abs/1607.06450, 2016.Ball ́e, Johannes, Laparra, Valero, and Simoncelli, Eero P. Density modeling of images using ageneralized normalization transformation. ICLR , 2016.Beck, J. M., Latham, P. E., and Pouget, A. Marginalization in Neural Circuits with DivisiveNormalization. The Journal of neuroscience : the official journal of the Society for Neuroscience ,31(43):15310–9, oct 2011. ISSN 1529-2401. doi: 10.1523/JNEUROSCI.1706-11.2011.Bevilacqua, Marco, Roumy, Aline, Guillemot, Christine, and Morel, Marie-Line Alberi. Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In BMVC ,2012.Bonds, A. B. Role of Inhibition in the Specification of Orientation Selectivity of Cells in the CatStriate Cortex. Visual Neuroscience , 2(01):41–55, 1989.Busse, L., Wade, A. R., and Carandini, M. Representation of Concurrent Stimuli by PopulationActivity in Visual Cortex. Neuron , 64(6):931–942, dec 2009. ISSN 0896-6273. doi: 10.1016/j.neuron.2009.11.004.Carandini, M. and Heeger, D. J. Normalization as a canonical neural computation. Nature reviews.Neuroscience , 13(1):51–62, nov 2012. ISSN 1471-0048. doi: 10.1038/nrn3136.Coen-Cagli, R., Kohn, A., and Schwartz, O. Flexible gating of contextual influences in natural vision.Nature Neuroscience , 18(11):1648–1655, 2015. ISSN 1097-6256. doi: 10.1038/nn.4128.Cogswell, Michael, Ahmed, Faruk, Girshick, Ross, Zitnick, Larry, and Batra, Dhruv. Reducingoverfitting in deep networks by decorrelating representations. ICLR , 2015.Cooijmans, Tim, Ballas, Nicolas, Laurent, C ́esar, and Courville, Aaron. Recurrent batch normaliza-tion. CoRR , abs/1603.09025, 2016.Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, and Fei-Fei, Li. Imagenet: A large-scalehierarchical image database. In CVPR , 2009.Dong, Chao, Loy, Chen Change, He, Kaiming, and Tang, Xiaoou. 
Image super-resolution using deepconvolutional networks. TPAMI , 38(2):295–307, 2016.Froudarakis, Emmanouil, Berens, Philipp, Ecker, Alexander S, Cotton, R James, Sinz, Fabian H,Yatsenko, Dimitri, Saggau, Peter, Bethge, Matthias, and Tolias, Andreas S. Population code inmouse V1 facilitates readout of natural scenes through increased sparseness. Nature neuroscience ,17(6):851–7, apr 2014. ISSN 1546-1726. doi: 10.1038/nn.3707.Gatys, Leon A., Ecker, Alexander S., and Bethge, Matthias. Image style transfer using convolutionalneural networks. In CVPR , 2016.Glorot, Xavier, Bordes, Antoine, and Bengio, Yoshua. Deep sparse rectifier neural networks. InAISTATS , 2011.Goodfellow, Ian, Bengio, Yoshua, and Courville, Aaron. Deep learning. Book in preparation for MITPress, 2016.12Published as a conference paper at ICLR 2017He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for imagerecognition. In CVPR , 2016.Heeger, D. J. Normalization of cell responses in cat striate cortex. Vis Neurosci , 9(2):181–197, 1992.ISSN 09525238.Higgins, I., Matthey, L., Glorot, X., Pal, A., Uria, B., Blundell, C., Mohamed, S., and Lerchner, A.Early Visual Concept Learning with Unsupervised Deep Learning. CoRR , abs/1606.05579, 2016.Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training byreducing internal covariate shift. In ICML , 2015.Jarrett, K., Kavukcuoglu, K., Ranzato, M. A., and LeCun, Y . What is the best multi-stage architecturefor object recognition? ICCV , 2009.Kavukcuoglu, K., Ranzato, M.’A., Fergus, R., and LeCun, Y . Learning invariant features throughtopographic filter maps. In CVPR Workshops , 2009.Krizhevsky, A., Sutskever, I., and Hinton, G. E. ImageNet Classification with Deep ConvolutionalNeural Networks. NIPS , 2012.Laurent, C ́esar, Pereyra, Gabriel, Brakel, Phil ́emon, Zhang, Ying, and Bengio, Yoshua. Batchnormalized recurrent neural networks. arXiv preprint arXiv:1510.01378 , 2015.Le, Quoc V . Building high-level features using large scale unsupervised learning. In 2013 IEEEinternational conference on acoustics, speech and signal processing , pp. 8595–8598. IEEE, 2013.Liao, Q. and Poggio, T. Bridging the Gaps Between Residual Learning, Recurrent Neural Networksand Visual Cortex. CoRR , abs/1604.03640, 2016.Liao, Qianli, Kawaguchi, Kenji, and Poggio, Tomaso. Streaming Normalization: Towards Simplerand More Biologically-plausible Normalizations for Online and Recurrent Learning. CoRR ,abs/1610.06160, 2016a.Liao, Renjie, Schwing, Alexander, Zemel, Richard, and Urtasun, Raquel. Learning deep parsimoniousrepresentations. NIPS , 2016b.Lyu, Siwei and Simoncelli, Eero P. Reducing statistical dependencies in natural signals using radialGaussianization. NIPS , 2008.Malo, J., Epifanio, I., Navarro, R., and Simoncelli, E. P. Nonlinear image representation for efficientperceptual coding. TIP, 15(1):68–80, 2006.Martin, David, Fowlkes, Charless, Tal, Doron, and Malik, Jitendra. A database of human segmentednatural images and its application to evaluating segmentation algorithms and measuring ecologicalstatistics. In ICCV , 2001.Olsen, S. R, Bhandawat, V ., and Wilson, R. I. Divisive Normalization in Olfactory Population Codes.Neuron , 66(2):287–299, 2010. ISSN 10974199. doi: 10.1016/j.neuron.2010.04.009.Pinto, N., Cox, D. D., and DiCarlo, J. J. Why is Real-World Visual Object Recognition Hard? PLoSComput Biol , 4(1):e27, jan 2008. doi: 10.1371/journal.pcbi.0040027.Reynolds, J. H. and Heeger, D. J. The normalization model of attention. 
Neuron , 61(2):168–85, jan2009. ISSN 1097-4199. doi: 10.1016/j.neuron.2009.01.002.Ringach, D. L. Population coding under normalization. Vision Research , 50(22):2223–2232, 2009.ISSN 18785646. doi: 10.1016/j.visres.2009.12.007.Salimans, Tim and Kingma, Diederik P. Weight normalization: A simple reparameterization toaccelerate training of deep neural networks. In NIPS , 2016.Scardapane, S., Comminiello, D., Hussain, A., and Uncin, A. Group sparse regularization for deepneural networks. CoRR , abs/1607.00485, 2016.13Published as a conference paper at ICLR 2017Schwartz, O. and Simoncelli, E. P. Natural signal statistics and sensory gain control. Nat Neurosci , 4(8):819–825, 2001. ISSN 1097-6256. doi: 10.1038/90526.Schwartz, O., J., Sejnowski T., and P., Dayan. Perceptual organization in the tilt illusion. Journal ofVision , 9(4):1–20, apr 2009. ISSN 1534-7362.Sermanet, P., Chintala, S., and LeCun, Y . Convolutional neural networks applied to house numbersdigit classification. Proceedings of International Conference on Pattern Recognition ICPR12 ,(Icpr):10–13, 2012. ISSN 1051-4651. doi: 10.0/Linux-x86 64.Simoncelli, E. P. and Heeger, D. J. A model of neuronal responses in visual area MT. Vision Research ,38(5):743–761, 1998.Sinz, Fabian and Bethge, Matthias. Temporal Adaptation Enhances Efficient Contrast Gain Controlon Natural Images. PLoS Computational Biology , 9(1):e1002889, jan 2013. ISSN 1553734X.Sinz, Fabian H and Bethge, Matthias. The Conjoint Effect of Divisive Normalization and OrientationSelectivity on Redundancy Reduction. In NIPS , 2008.Srivastava, Nitish, Hinton, Geoffrey E, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan.Dropout: a simple way to prevent neural networks from overfitting. JMLR , 15(1):1929–1958,2014.Timofte, Radu, De Smet, Vincent, and Van Gool, Luc. Anchored neighborhood regression for fastexample-based super-resolution. In ICCV , 2013.Ulyanov, Dmitry, Vedaldi, Andrea, and Lempitsky, Victor S. Instance normalization: The missingingredient for fast stylization. CoRR , abs/1607.08022, 2016.Wang, Zhou, Bovik, Alan C, Sheikh, Hamid R, and Simoncelli, Eero P. Image quality assessment:from error visibility to structural similarity. TIP, 13(4):600–612, 2004.Zaremba, Wojciech, Sutskever, Ilya, and Vinyals, Oriol. Recurrent neural network regularization.CoRR , abs/1409.2329, 2014.Zeyde, Roman, Elad, Michael, and Protter, Matan. On single image scale-up using sparse-representations. In International conference on curves and surfaces , pp. 711–730. Springer,2010.14Published as a conference paper at ICLR 2017A E FFECT OF SIGMA AND L1ONCIFAR-10/100 VALIDATION SETWe plot the effect of and L1 regularization on the validation performance in Figure 6. While sigmamakes the most contributions to the improvement, L1 also provides much gain for the original versionof LN and BN.101100Sigma0.680.700.720.740.760.780.800.82CIFAR-10BaselineBNBN_sLNLN_sDN(a)101100Sigma0.380.400.420.440.460.480.50CIFAR-100BaselineBNBN_sLNLN_sDN (b)104103102L10.700.720.740.760.780.800.82CIFAR-10Baseline +L1BN +L1BN*LN +L1LN*DN* (c)104103102L10.400.420.440.460.480.50CIFAR-100Baseline +L1BN +L1BN*LN +L1LN*DN* (d)Figure 6: Validation accuracy on CIFAR-10/100 showing effect of sigma constant (a, b) and L1 regularization(c, d) on BN, LN, and DNB LSTM I MPLEMENTATION DETAILSIn LSTM experiments, we found that have an individual normalizer for each non-linearity (sigmoidand tanh) helps the performance for both LN and DN. Eq. 
12-14 are the standard LSTM equations; letting N be the normalizer function, our new normalizer replaces the nonlinearity as in Eqs. 15-16. This modification can also be thought of as combining normalization and activation into a single activation function. This is different from the released implementation of LN and BN in LSTM, which separately normalized the concatenated vectors W_h h_{t-1} and W_x x_t. For all LN* and DN experiments we choose this new formulation, whereas LN experiments are consistent with the released version.

(f_t, i_t, o_t, g_t)^\top = W_h h_{t-1} + W_x x_t + b   (12)
c_t = \sigma(f_t) \odot c_{t-1} + \sigma(i_t) \odot \tanh(g_t)   (13)
h_t = \sigma(o_t) \odot \tanh(c_t)   (14)
\sigma(x) = \sigma(N(x))   (15)
\tanh(x) = \tanh(N(x))   (16)

C MORE RESULTS ON IMAGE SUPER-RESOLUTION
We include results on another standard dataset, Set5 (Bevilacqua et al., 2012), in Table 8 and show more visual results in Fig. 7.

Table 8: Average test results of PSNR and SSIM on Set5 Dataset.
Model PSNR (x3) SSIM (x3) PSNR (x4) SSIM (x4)
Bicubic 30.41 0.8678 28.44 0.8097
A+ 32.59 0.9088 30.28 0.8603
SRCNN 32.83 0.9087 30.52 0.8621
BN 22.85 0.8027 20.71 0.7623
DN* 32.83 0.9106 30.62 0.8665

Figure 7: Comparisons at a magnification factor of 4 (per-panel PSNR for the two example images: Bicubic 21.69/31.55 dB, SRCNN 22.62/32.29 dB, BN 20.06/19.39 dB, DN* 22.69/32.31 dB).
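A minimal numpy sketch of one step of the normalized LSTM of Eqs. (12)-(16) above, in which the normalizer N (here the 1-D divisive normalization of Eqs. (6)-(7)) is applied inside each sigmoid and tanh. Shapes, padding, the default radius, and the function names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def div_norm_1d(z, radius=30, sigma=1.0):
    """Eqs. (6)-(7) applied along the last axis of z, shape (batch, units)."""
    size = 2 * radius + 1
    mean = uniform_filter1d(z, size=size, axis=-1, mode="nearest")
    v = z - mean
    var = uniform_filter1d(v * v, size=size, axis=-1, mode="nearest")
    return v / np.sqrt(sigma ** 2 + var)

def lstm_dn_step(x_t, h_prev, c_prev, Wx, Wh, b, radius=30, sigma=1.0):
    """One LSTM step with an individual normalizer inside each nonlinearity."""
    pre = x_t @ Wx + h_prev @ Wh + b                      # Eq. (12): concatenated gates
    f, i, o, g = np.split(pre, 4, axis=-1)
    sig = lambda u: 1.0 / (1.0 + np.exp(-div_norm_1d(u, radius, sigma)))  # Eq. (15)
    tnh = lambda u: np.tanh(div_norm_1d(u, radius, sigma))                # Eq. (16)
    c_t = sig(f) * c_prev + sig(i) * tnh(g)               # Eq. (13)
    h_t = sig(o) * tnh(c_t)                               # Eq. (14)
    return h_t, c_t
```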
HyhJVhb4l
rk5upnsxe
ICLR.cc/2017/conference/-/paper584/official/review
{"title": "Well written but with little novelty", "rating": "5: Marginally below acceptance threshold", "review": "This paper empirically studies multiple combinations of various tricks to improve the performance of deep neural networks on various tasks. Authors investigate various combinations of normalization techniques together with additional regularizations. \n\nThe paper makes few interesting empirical observations, such that the L1 regularizer on top of the activations is relatively useful for most of the tasks. \n\nIn general, it seems that this work can be significantly improved by providing more precise study of existing normalization techniques. Also, studying more closely the overall volumes of the summation and suppression fields (e.g. how many samples one needs to collect for a robust enough normalization) would be useful.\n\nIn more detail, the work seems to have the following issues:\n* Divisive normalization, is used extensively in Krizhevsky12 (LRN). It is almost exactly the same definition as in equation 1, however with slightly different constants. Therefore claiming that it is less explored is questionable.\n* It is not clear whether the Divisive normalization does subtract the mean from the activation as there is a contradiction in its definition in equation 1 and 3. This questions whether the \"General Formulation of Normalization\" is correct.\n* In seems that Divisive normalization is used also in Jarrett09, called Contrast Normalization, with a definition more similar to equation 3 (subtracting the mean).\n* In case of the RNN experiments, it would be more clear to provide the absolute size of the summation and suppression field as BN may be inferior to DN due to a small batch size.\n* It is unclear what and how are measured the results shown in Table 10. Also it is unclear what are the sizes of the suppression/summation fields for the CIFAR and Super Resolution experiments.\n\nMinor, relatively irrelevant issues:\n* It is usually better to pick a stronger baseline for the tasks. The selected CIFAR model from Caffe seems to be quite far from the state of the art on the CIFAR dataset. A stronger baseline (e.g. the widely available ResNet) would allow to see whether the proposed techniques are useful for the more recent models as well.\n* Double caption for Table 7/8.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Normalizing the Normalizers: Comparing and Extending Network Normalization Schemes
["Mengye Ren", "Renjie Liao", "Raquel Urtasun", "Fabian H. Sinz", "Richard S. Zemel"]
Normalization techniques have only recently begun to be exploited in supervised learning tasks. Batch normalization exploits mini-batch statistics to normalize the activations. This was shown to speed up training and result in better models. However its success has been very limited when dealing with recurrent neural networks. On the other hand, layer normalization normalizes the activations across all activities within a layer. This was shown to work well in the recurrent setting. In this paper we propose a unified view of normalization techniques, as forms of divisive normalization, which includes layer and batch normalization as special cases. Our second contribution is the finding that a small modification to these normalization schemes, in conjunction with a sparse regularizer on the activations, leads to significant benefits over standard normalization techniques. We demonstrate the effectiveness of our unified divisive normalization framework in the context of convolutional neural nets and recurrent neural networks, showing improvements over baselines in image classification, language modeling as well as super-resolution.
["activations", "normalizers", "comparing", "normalization techniques", "batch normalization", "recurrent neural networks", "layer", "network normalization schemes", "network normalization", "supervised learning tasks"]
https://openreview.net/forum?id=rk5upnsxe
https://openreview.net/pdf?id=rk5upnsxe
https://openreview.net/forum?id=rk5upnsxe&noteId=HyhJVhb4l
Published as a conference paper at ICLR 2017NORMALIZING THE NORMALIZERS : COMPARING ANDEXTENDING NETWORK NORMALIZATION SCHEMESMengye Ren y, Renjie Liaoy, Raquel Urtasuny, Fabian H. Sinzz, Richard S. Zemely>yUniversity of Toronto, Toronto ON, CANADAzBaylor College of Medicine, Houston TX, USA>Canadian Institute for Advanced Research (CIFAR)fmren, rjliao, urtasun g@cs.toronto.edufabian.sinz@epagoge.de, zemel@cs.toronto.eduABSTRACTNormalization techniques have only recently begun to be exploited in supervisedlearning tasks. Batch normalization exploits mini-batch statistics to normalizethe activations. This was shown to speed up training and result in better models.However its success has been very limited when dealing with recurrent neuralnetworks. On the other hand, layer normalization normalizes the activationsacross all activities within a layer. This was shown to work well in the recurrentsetting. In this paper we propose a unified view of normalization techniques, asforms of divisive normalization, which includes layer and batch normalization asspecial cases. Our second contribution is the finding that a small modificationto these normalization schemes, in conjunction with a sparse regularizer on theactivations, leads to significant benefits over standard normalization techniques.We demonstrate the effectiveness of our unified divisive normalization frameworkin the context of convolutional neural nets and recurrent neural networks, showingimprovements over baselines in image classification, language modeling as well assuper-resolution.1 I NTRODUCTIONStandard deep neural networks are difficult to train. Even with non-saturating activation functionssuch as ReLUs (Krizhevsky et al., 2012), gradient vanishing or explosion can still occur, sincethe Jacobian gets multiplied by the input activation of every layer. In AlexNet (Krizhevsky et al.,2012), for instance, the intermediate activations can differ by several orders of magnitude. Tuninghyperparameters governing weight initialization, learning rates, and various forms of regularizationthus become crucial in optimizing performance.In current neural networks, normalization abounds. One technique that has rapidly become a standardis batch normalization (BN) in which the activations are normalized by the mean and standarddeviation of the training mini-batch (Ioffe & Szegedy, 2015). At inference time, the activations arenormalized by the mean and standard deviation of the full dataset. A more recent variant, layernormalization (LN), utilizes the combined activities of all units within a layer as the normalizer (Baet al., 2016). Both of these methods have been shown to ameliorate training difficulties caused bypoor initialization, and help gradient flow in deeper models.A less-explored form of normalization is divisive normalization (DN) (Heeger, 1992), in whicha neuron’s activity is normalized by its neighbors within a layer. 
This type of normalization isa well established canonical computation of the brain (Carandini & Heeger, 2012) and has beenextensively studied in computational neuroscience and natural image modelling (see Section 2).However, with few exceptions (Jarrett et al., 2009; Krizhevsky et al., 2012) it has received littleattention in conventional supervised deep learning.Here, we provide a unifying view of the different normalization approaches by characterizing themas the same transformation but along different dimensions of a tensor, including normalization acrossindicates equal contribution1Published as a conference paper at ICLR 2017examples, layers in the network, filters in a layer, or instances of a filter response. We explorethe effect of these varieties of normalizations in conjunction with regularization, on the predictionperformance compared to baseline models. The paper thus provides the first study of divisivenormalization in a range of neural network architectures, including convolutional neural networks(CNNs) and recurrent neural networks (RNNs), and tasks such as image classification, languagemodeling and image super-resolution. We find that DN can achieve results on par with BN in CNNnetworks and out-performs it in RNNs and super-resolution, without having to store batch statistics.We show that casting LN as a form of DN by incorporating a smoothing parameter leads to significantgains, in both CNNs and RNNs. We also find advantages in performance and stability by being ableto drive learning with higher learning rate in RNNs using DN. Finally, we demonstrate that adding anL1 regularizer on the activations before normalization is beneficial for all forms of normalization.2 R ELATED WORKIn this section we first review related work on normalization, followed by a brief description ofregularization in neural networks.2.1 N ORMALIZATIONNormalization of data prior to training has a long history in machine learning. For instance, localcontrast normalization used to be a standard effective tool in vision problems (Pinto et al., 2008;Jarrett et al., 2009; Sermanet et al., 2012; Le, 2013). However, until recently, normalization wasusually not part of the machine learning algorithm itself. Two notable exceptions are the originalAlexNet by Krizhevsky et al. (2012) which includes a divisive normalization step over a subset offeatures after ReLU at each pixel location, and the work by Jarrett et al. (2009) who demonstrated thata combination of nonlinearities, normalization and pooling improves object recognition in two-stagenetworks.Recently Ioffe & Szegedy (2015) demonstrated that standardizing the activations of the summedinputs of neurons over training batches can substantially decrease training time in deep neuralnetworks. To avoid covariate shift, where the weight gradients in one layer are highly dependenton previous layer outputs, Batch Normalization (BN) rescales the summed inputs according to theirvariances under the distribution of the mini-batch data. Specifically, if zj;ndenotes the activation ofa neuronjon example n, andB(n)denotes the mini-batch of examples that contains n, then BNcomputes an affine function of the activations standardized over each mini-batch:~zn;j=zn;jE[zj]q1jB(n)j(zn;jE[zj])2+E[zj] =1jB(n)jXm2B(n)zm;jHowever, training performance in Batch Normalization strongly depends on the quality of theaquired statistics and, therefore, the size of the mini-batch. 
Hence, Batch Normalization is harderto apply in cases for which the batch sizes are small, such as online learning or data parallelism.While classification networks can usually employ relatively larger mini-batches, other applicationssuch as image segmentation with convolutional nets use smaller batches and suffer from degradedperformance. Moreover, application to recurrent neural networks (RNNs) is not straightforward andleads to poor performance (Laurent et al., 2015).Several approaches have been proposed to make Batch Normalization applicable to RNNs. Cooijmanset al. (2016) and Liao & Poggio (2016) collect separate batch statistics for each time step. However,neither of this techniques address the problem of small batch sizes and it is unclear how to generalizethem to unseen time steps.More recently, Ba et al. (2016) proposed Layer Normalization (LN), where the activations arenormalized across all summed inputs within a layer instead of within a batch:~zn;j=zn;jE[zn]q1jL(j)j(zn;jE[zn])2+E[zn] =1jL(j)jXk2L(j)zn;kwhereL(j)contains all of the units in the same layer as j. While promising results have been shownon RNN benchmarks, direct application of layer normalization to convolutional layers often leads to2Published as a conference paper at ICLR 2017a degradation of performance. The authors hypothesize that since the statistics in convolutional layerscan vary quite a bit spatially, normalization with statistics from an entire layer might be suboptimal.Ulyanov et al. (2016) proposed to normalize each example on spatial dimensions but not on channeldimension, and was shown to be effective on image style transfer applications (Gatys et al., 2016).Liao et al. (2016a) proposed to accumulate the normalization statistics over the entire training phase,and showed that this can speed up training in recurrent and online learning without a deterioratingeffect on the performance. Since gradients cannot be backpropagated through this normalizationoperation, the authors use running statistics of the gradients instead.Exploring the normalization of weights instead of activations, Salimans & Kingma (2016) proposed areparametrization of the weights into a scale independent representation and demonstrated that thiscan speed up training time.Divisive Normalization (DN) on the other hand modulates the neural activity by the activity of a poolof neighboring neurons (Heeger, 1992; Bonds, 1989). DN is one of the most well studied and widelyfound transformations in real neural systems, and thus has been called a canonical computation ofthe brain (Carandini & Heeger, 2012). 
While the exact form of the transformation can differ, allformulations model the response of a neuron ~zjas a ratio between the acitivity in a summation fieldAj, and a norm-like function of the suppression field Bj~zj=Pzi2Ajuizi2+Pzk2Bjwkzpk1p; (1)wherefuigare the summation weights and fwkgthe suppression weights.Previous theoretical studies have outlined several potential computational roles for divisive normal-ization such as sensitivity maximization (Carandini & Heeger, 2012), invariant coding (Olsen et al.,2010), density modelling (Ball ́e et al., 2016), image compression (Malo et al., 2006), distributedneural representations (Simoncelli & Heeger, 1998), stimulus decoding (Ringach, 2009; Froudarakiset al., 2014), winner-take-all mechanisms (Busse et al., 2009), attention (Reynolds & Heeger, 2009),redundancy reduction (Schwartz & Simoncelli, 2001; Sinz & Bethge, 2008; Lyu & Simoncelli, 2008;Sinz & Bethge, 2013), marginalization in neural probabilistic population codes (Beck et al., 2011),and contextual modulations in neural populations and perception (Coen-Cagli et al., 2015; Schwartzet al., 2009).2.2 R EGULARIZATIONVarious regularization techniques have been applied to neural networks for the purpose of improvinggeneralization and reduce overfitting. They can be roughly divided into two categories, depending onwhether they regularize the weights or the activations.Regularization on Weights: The most common regularizer on weights is weight decay which justamounts to using the L2 norm squared of the weight vector. An L1 regularizer (Goodfellow et al.,2016) on the weights can also be adopted to push the learned weights to become sparse. Scardapaneet al. (2016) investigated mixed norms in order to promote group sparsity.Regularization on Activations: Sparsity or group sparsity regularizers on the activations haveshown to be effective in the past (Roz, 2008; Kavukcuoglu et al., 2009) and several regularizers havebeen proposed that act directly on the neural activations. Glorot et al. (2011) add a sparse regularizeron the activations after ReLU to encourage sparse representations. Dropout developed by Srivastavaet al. (2014) applies random masks to the activations in order to discourage them to co-adapt. DeCovproposed by Cogswell et al. (2015) tries to minimize the off-diagonal terms of the sample covariancematrix of activations, thus encouraging the activations to be as decorrelated as possible. Liao et al.(2016b) utilize a clustering-based regularizer to encourage the representations to be compact.3Published as a conference paper at ICLR 2017(a) Batch-Norm(b) Layer-Norm(c) Div-NormFigure 1: Illustration of different normalization schemes, in a CNN. Each HW-sized feature map is depictedas a rectangle; overlays depict instances in the set of Cfilters; and two examples from a mini-batch of size Nare shown, one above the other. The colors show the summation/suppression fields of each scheme.3 A U NIFIED FRAMEWORK FOR NORMALIZING NEURAL NETSWe first compare the three existing forms of normalization, and show that we can modify batchnormalization (BN) and layer normalization (LN) in small ways to make them have a form thatmatches divisive normalization (DN). We present a general formulation of normalization, whereexisting normalizations involve alternative schemes of accumulating information. 
Finally, we proposea regularization term that can be optimized jointly with these normalization schemes to encouragedecorrelation and/or improve generalization performance.3.1 G ENERAL FORM OF NORMALIZATIONWithout loss of generality, we denote the hidden input activation of one arbitrary layer in a deepneural network as z2RNL. HereNis the mini-batch size. In the case of a CNN, L=HWC,whereH;W are the height and width of the convolutional feature map and Cis the number of filters.For an RNN or fully-connected layers of a neural net, Lis the number of hidden units.Different normalization methods gather statistics from different ranges of the tensor and then performnormalization. Consider the following general form:zn;j=Xiwi;jxn;i+bj (2)vn;j=zn;jEAn;j[z] (3)~zn;j=vn;jp2+EBn;j[v2](4)whereAjandBjare subsets of zandvrespectively.AandBin standard divisive normalizationare referred to as summation and suppression fields (Carandini & Heeger, 2012). One can cast eachnormalization scheme into this general formulation, where the schemes vary based on how theydefine these two fields. These definitions are specified in Table 1. Optional parameters andcanbe added in the form of j~zn;j+jto increase the degree of freedom.Fig. 1 shows a visualization of the normalization field in a 4-D ConvNet tensor setting. Divisivenormalization happens within a local spatial window of neurons across filter channels. Here we setd(;)to be the spatial L1distance.3.2 N EWMODEL COMPONENTSSmoothing the Normalizers: One obvious way in which the normalization schemes differ is interms of the information that they combine for normalizing the activations. A second more subtlebut important difference between standard BN and LN as opposed to DN is the smoothing term ,in the denominator of Eq. (1). This term allows some control of the bias of the variance estimation,effectively smoothing the estimate. This is beneficial because divisive normalization does not utilizeinformation from the mini-batch as in BN, and combines information from a smaller field than LN. A4Published as a conference paper at ICLR 2017Model Range Normalizer BiasBNAn;j=fzm;j:m2[1;N];j2[1;H][1;W]gBn;j=fvm;j:m2[1;N];j2[1;H][1;W]g= 0LNAn;j=fzn;i:i2[1;L]g B n;j=fvn;i:i2[1;L]g = 0DNAn;j=fzn;i:d(i;j)RAg B n;j=fvn;i:d(i;j)RBg0Table 1: Different choices of the summation and suppression fields AandB, as well as the constant inthe normalizer lead to known normalization schemes in neural networks. d(i;j)denotes an arbitrary distancebetween two hidden units iandj, andRdenotes the neighbourhood radius.3 2 1 0 1 2 3Input0123OuputReLUDN+ReLU/uni00A0sigma=4.0DN+ReLU/uni00A0sigma=2.0DN+ReLU/uni00A0sigma=1.0DN+ReLU/uni00A0sigma=0.50 1 2 3 4Input 101234Input 2ReLU0 1 2 3 4Input 101234Input 2DN+ReLU0.00.51.01.52.02.53.03.54.00.00.20.40.60.81.01.2Figure 2: Divisive normalization followed by ReLU can be viewed as a new activation function. Left: Effectof varyingin this activation function. Right: Two units affect each other’s activation in the DN+ReLUformulation.similar but different denominator bias term max(;c)appears in (Jarrett et al., 2009), which is activewhen the activation variance is small. However, the clipping function makes the transformation notinvertible, losing scale information.Moreover, if we take the nonlinear activation function after normalization into consideration, we findthatwill change the overall properties of the non-linearity. 
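The general form of Eqs. (2)-(4) can be sketched in a few lines of numpy for fully connected activations z of shape (N, L): the same centering and smoothed rescaling is applied throughout, and the choice of summation/suppression field from Table 1 selects the scheme. The 1-D neighbourhood for DN, the boundary padding, and the defaults below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def normalize(z, scheme="DN", sigma=1.0, radius=3):
    """Unified normalization of Eqs. (2)-(4) on activations z of shape (N, L).
    'BN' pools over the batch axis, 'LN' over the layer axis, and 'DN' over a
    radius-R neighbourhood of each unit; sigma = 0 recovers standard BN/LN."""
    if scheme == "BN":
        mean = z.mean(axis=0, keepdims=True)
        v = z - mean
        var = (v * v).mean(axis=0, keepdims=True)
    elif scheme == "LN":
        mean = z.mean(axis=1, keepdims=True)
        v = z - mean
        var = (v * v).mean(axis=1, keepdims=True)
    else:  # "DN": local field d(i, j) <= R
        size = 2 * radius + 1
        mean = uniform_filter1d(z, size=size, axis=1, mode="nearest")
        v = z - mean
        var = uniform_filter1d(v * v, size=size, axis=1, mode="nearest")
    return v / np.sqrt(sigma ** 2 + var)   # Eq. (4)
```

The sigma argument is the smoothing term discussed just above, whose effect on the downstream non-linearity the next paragraph illustrates.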
To illustrate this effect, we use a simple1-layer network which consists of: two input units, one divisive normalization operator, followed bya ReLU activation function. If we fix one input unit to be 0.5, varying the other one with differentvalues ofproduces different output curves (Fig. 2, left). These curves exhibit different non-linearproperties compared to the standard ReLU. Allowing the other input unit to vary as well results indifferent activation functions of the first unit depending on the activity of the second (Fig. 2, right).This illustrates potential benefits of including this smoothing term , as it effectively modulates therectified response to vary from a linear to a highly saturated response.In this paper we propose modifications of the standard BN and LN which borrow this additive term in the denominator from DN. We study the effect of incorporating this smoother in the respectivenormalization schemes below.L1 regularizer: Filter responses on lower layers in deep neural networks can be quite correlatedwhich might impair the estimate of the variance in the normalizer. More independent representationshelp disentangle latent factors and boost the networks performance (Higgins et al., 2016). Empirically,we found that putting a sparse (L1) regularizerLL1=1NLXn;jjvn;jj (5)on the centered activations vn;jhelps decorrelate the filter responses (Fig. 5). Here, Nis the batchsize andLis the number of hidden units, and LL1is the regularization loss which is added to thetraining loss.A possible explanation for this effect is that the L1 regularizer might have a similar effect as maximumlikelihood estimation of an independent Laplace distribution. To see that, let pv(v)/exp (kvk1)andx=W1v, withWa full rank invertible matrix. Under this model px(x) =pv(Wx)jdetWj.5Published as a conference paper at ICLR 2017Then, minimization of the L1 norm of the activations under the volume-conserving constraint detA=const. corresponds to maximum likelihood on that model, which would encourage decorrelatedresponses. We do not enforce such a constraint, and the filter matrix might even not be invertible.However, the supervised loss function of the network benefits from having diverse non-zero filters.This encourages the network to not collapse filters along the same direction or put them to zero, andmight act as a relaxation of the volume-conserving constraint.3.3 S UMMARY OF NEW MODELSDN and DN*: We propose DN as a new local normalization scheme in neural networks. Inconvolutional layers, it operates on a local spatial window across filter channels, and in fully connectedlayers it operates on a slice of a hidden state vector. Additionally, DN* has a L1 regularizer on thepre-normalization centered activation ( vn;j).BN-s and BN*: To compare with DN and DN*, we also propose modifications to original BN: wedenote BN-s with 2in the denominator’s square root, and BN* with the L1 regularizer on top ofBN-s.LN-s and LN*: We apply the same changes as from BN to BN-s and BN*. In order to narrow thedifferences in the normalization schemes down to a few parameter choices, we additionally removethe affine transformation parameters andfrom LN such that the difference between LN* andDN* is only the size of the normalization field. 
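The sparse regularizer of Eq. (5) on the centered, pre-normalization activations v is equally simple; how the constant lambda enters when the penalty is added to the training loss is an assumption here.

```python
import numpy as np

def l1_activation_penalty(v, lam=1e-4):
    """Eq. (5): mean absolute value of the centered activations v (shape (N, L)),
    weighted by the regularization constant lambda tuned on the validation set."""
    return lam * np.abs(v).mean()

# total training objective:  loss = task_loss + l1_activation_penalty(v)
```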
andcan really be seen as a separate layer and inpractice we find that they do not improve the performance in the presence of 2.4 E XPERIMENTSWe evaluate the normalization schemes on three different tasks:CNN image classification: We apply different normalizations on CNNs trained on theCIFAR-10/100 datasets for image recognition, each of which contains 50,000 trainingimages and 10,000 test images. Each image is of size 32 323 and has been labeled anobject class out of 10 or 100 total number of classes.RNN language modeling: We apply different normalizations on RNNs trained on thePenn Treebank dataset for language modeling, containing 42,068 training sentences, 3,370validation sentences, and 3,761 test sentences.CNN image super-resolution: We train a CNN on low resolution images and learn cascadesof non-linear filters to smooth the upsampled images. We report performance of trainedCNN on the standard Set 14 and Berkeley 200 dataset.For each model, we perform a grid search of three or four choices of each hyperparameter includingthe smoothing constant , and L1 regularization constant , and learning rate on the validation set.4.1 CIFAR E XPERIMENTSWe used the standard CNN model provided in the Caffe library. The architecture is summarized inTable 2. We apply normalization before each ReLU function. We implement DN as a convolutionaloperator, fixing the local window size to 55,33,33for the three convolutional layers in allthe CIFAR experiments.We set the learning rate to 1e-3 and momentum 0.9 for all experiments. The learning rate schedule isset tof5K, 30K, 50Kgfor the baseline model and to f30K, 50K, 80Kgfor all other models. At everystage we multiply the learning rate by 0.1. Weights are randomly initialized from a zero-mean normaldistribution with standard deviation f1e-4, 1e-2, 1e-2gfor the convolutional layers, and f1e-1, 1e-1gfor fully connected layers. Input images are centered on the dataset image mean.Table 3 summarizes the test performances of BN*, LN* and DN*, compared to the performanceof a few baseline models and the standard batch and layer normalizations. We also add standardregularizers to the baseline model: L2 weight decay (WD) and dropout. Adding the smoothingconstant and L1 regularization consistently improves the classification performance, especially for6Published as a conference paper at ICLR 2017Table 2: CIFAR CNN specificationType Size Kernel Strideinput 32323 - -conv +relu 323232 55332 1max pool 161632 33 2conv +relu 161632 553232 1avg pool 8832 33 2conv +relu 8864 553264 1avg pool 4464 33 2fully conn. linear 64 - -fully conn. linear 10or100 - -Table 3: CIFAR-10/100 experimentsModel CIFAR-10 Acc. CIFAR-100 Acc.Baseline 0.7565 0.4409Baseline +WD +Dropout 0.7795 0.4179BN 0.7807 0.4814LN 0.7211 0.4249BN* 0.8179 0.5156LN* 0.8091 0.4957DN* 0.8122 0.5066the original LN. The modification of LN makes it now better than the original BN, and only slightlyworse than BN*. DN* achieves comparable performance to BN* on both datasets, but only relyingon a local neighborhood of hidden units.0 10 20Sigma0.00.20.40.60.81.0|x|CIFAR-10051015202530Layer Number0 10 20Sigma0.00.10.20.30.4|x|CIFAR-100051015202530Layer NumberFigure 3: Input scale (jxj) vs. learnedat each layer, color coded by thelayer number in ResNet-32, trainedon CIFAR-10 (left), and CIFAR-100(right).ResNet Experiments. Residual networks (ResNet) (Heet al., 2016), a type of CNN with residual connections be-tween layers, achieve impressive performance on many imageclassification benchmarks. 
The original architecture uses BNby default. If we remove BN, the architecture is very difficultto train or converges to a poor solution. We first reproduced theoriginal BN ResNet-32, obtaining 92.6% accuracy on CIFAR-10, and 69.8% on CIFAR-100. Our best DN model achieves91.3% and 66.6%, respectively. While this performance islower than the original BN-ResNet, there is certainly room toimprove as we have not performed any hyperparameter opti-mization. Importantly, the beneficial effects of sigma (2.5%gain on CIFAR-100) and the L1 regularizer (0.5%) are stillfound, even in the presence of other regularization techniquessuch as data augmentation and weight decay in the training.Since the number of sigma hyperparameters scales with thenumber of layers, we found that setting sigma as a learnableparameter for each layer helps the performance (1.3% gain onCIFAR-100). Note that training this parameter is not possiblein the formulation by Jarrett et al. (2009). The learned sigmashows a clear trend: it tends to decrease with depth, and in thelast convolution layer it approaches 0 (see Fig. 3).4.2 RNN EXPERIMENTSTo apply divisive normalization in fully connected layers ofRNNs, we consider a local neighborhood in the hidden state vector hjR:j+R, whereRis the radius7Published as a conference paper at ICLR 2017Table 4: PTB Word-level language modeling experimentsModel LSTM TanH RNN ReLU RNNBaseline 115.720 149.357 147.630BN 123.245 148.052 164.977LN 119.247 154.324 149.128BN* 116.920 129.155 138.947LN* 101.725 129.823 116.609DN* 102.238 123.652 117.868of the neighborhood. Although the hidden states are randomly initialized, this structure will imposelocal competition among the neighbors.vj=zj12R+ 1RXr=Rzj+r (6)~zj=vjq2+12R+1PRr=Rv2j+r(7)We follow Cooijmans et al. (2016)’s batch normalization implementation for RNNs: normalizersare separate for input transformation and hidden transformation. Let BN(),LN(),DN()beBatchNorm, LayerNorm and DivNorm, and gbe either tanh or ReLU.ht+1=g(Wxxt+Whht1+b) (8)h(BN)t+1=g(BN(Wxxt+bx) +BN(Whh(BN)t1+bh)) (9)h(LN)t+1=g(LN(Wxxt+Whh(LN)t1+b)) (10)h(DN )t+1=g(DN(Wxxt+Whh(DN )t1+b)) (11)Note that in recurrent BN, the additional parameters andare shared across timesteps whereas themoving averages of batch statistics are not shared. For the LSTM version, we followed the releasedimplementation from the authors of layer normalization1, and apply LN at the same places as BN andBN*, which is after the linear transformation of WxxandWhhindividually. For LN* and DN, wemodified the places of normalization to be at each non-linearity, instead of jointly with a concatenatedvector for different non-linearity. We found that this modification improves the performance andmakes the formulation clearer since normalization is always a combined operation with the activationfunction. We include details of the LSTM implementation in the Appendix.The RNN model is provided by the Tensorflow library (Abadi et al., 2016) and the LSTM version wasoriginally proposed in Zaremba et al. (2014). We used a two-layer stack-RNN of size 400 (vanillaRNN) or 200 (LSTM). Ris set to 60 (vanilla RNN) and 30 (LSTM). We tried both tanh and ReLU asthe activation function for the vanilla RNN. For unnormalized baselines and BN+ReLU, the initiallearning rate is set to 0.1 and decays by half every epoch, starting at the 5th epoch for a maximum of13 epochs. For the other normalized models, the initial learning rate is set to 1.0 while the schedule iskept the same. 
Standard stochastic gradient descent is used in all RNN experiments, with gradientclipping at 5.0.Table 4 shows the test set perplexity for LSTM models and vanilla models. Perplexity is defined asppl= exp(Pxlogp(x)). We find that BN and LN alone do not improve the final performancerelative to the baseline, but similar to what we see in the CNN experiments, our modified versionsBN* and LN* show significant improvements. BN* on RNN is outperformed by both LN* and DN.By applying our normalization, we can improve the vanilla RNN perplexity by 20%, comparable toan LSTM baseline with the same hidden dimension.1https://github.com/ryankiros/layer-norm8Published as a conference paper at ICLR 2017Table 5: Average test results of PSNR and SSIM on Set14 Dataset.Model PSNR (x3) SSIM (x3) PSNR (x4) SSIM (x4)Bicubic 27.54 0.7733 26.01 0.7018A+ 29.13 0.8188 27.32 0.7491SRCNN 29.35 0.8212 27.53 0.7512BN 22.31 0.7530 21.40 0.6851DN* 29.38 0.8229 27.64 0.7562Table 6: Average test results of PSNR and SSIM on BSD200 Dataset.Model PSNR (x3) SSIM (x3) PSNR (x4) SSIM (x4)Bicubic 27.19 0.7636 25.92 0.6952A+ 27.05 0.7945 25.51 0.7171SRCNN 28.42 0.8100 26.87 0.7378BN 21.89 0.7553 21.53 0.6741DN* 28.44 0.8110 26.96 0.74284.3 S UPER RESOLUTION EXPERIMENTSWe also evaluate DN on the low-level computer vision problem of single image super-resolution.We adopt the SRCNN model of Dong et al. (2016) as the baseline which consists of 3 convolutionallayers and 2 ReLUs. From bottom to top layers, the sizes of the filters are 9, 5, and 52. The numberof filters are 64, 32, and 1, respectively. All the filters are initialized with zero-mean Gaussian andstandard deviation 1e-3. Then we respectively apply batch normalization (BN) and our divisivenormalization with L1 regularization (DN*) to the convolutional feature maps before ReLUs. Weconstruct the training set in a similar manner as Dong et al. (2016) by randomly cropping 5 millionpatches (size 3333) from a subset of the ImageNet dataset of Deng et al. (2009). We only train ourmodel for 4 million iterations which is less than the one adopted by SRCNN, i.e., 15 million, as thegain of PSNR and SSIM by spending that long time is marginal.We report the average test results, utilizing the standard metrics PSNR and SSIM (Wang et al., 2004),on two standard test datasets Set14 (Zeyde et al., 2010) and BSD200 (Martin et al., 2001). Wecompare with two state-of-the-art single image super-resolution methods, A+ (Timofte et al., 2013)and SRCNN (Dong et al., 2016). All measures are computed on the Y channel of YCbCr color space.We also provide a visual comparison in Fig. 4.As show in Tables 5 and 6 DN* outperforms the strong competitor SRCNN, while BN does notperform well on this task. The reason may be that BN applies the same statistics to all patches ofone image which causes some overall intensity shift (see Figs. 4). From the visual comparisons, wecan see that our method not only enhances the resolution but also removes artifacts, e.g., the ringingeffect in Fig. 4.4.4 A BLATION STUDIES AND DISCUSSIONFinally, we investigated the differential effects of the 2term and the L1 regularizer on the perfor-mance. We ran ablation studies on CIFAR-10/100 as well as PTB experiments. The results are listedin Table 7.We find that adding the smoothing term 2and the L1 regularization consistently increases theperformance of the models. In the convolutional networks, we find that L1 and both have similareffects on the performance. L1 seems to be slightly more important. 
In recurrent networks, 2has amuch more dramatic effect on the performance than the L1 regularizer.Fig. 5 plots randomly sampled pairwise pre-normalization responses (after the linear transform)in the first layer at the same spatial location of the feature map, along with the average pair-wise2We use the setting of the best model out of all three SRCNN candidates.9Published as a conference paper at ICLR 2017PSNR 29.84dB PSNR 31.33dB PSNR 23.94dB PSNR 31.46dBPSNR 29.41dB PSNR 33.14dB PSNR 21.88dB PSNR 33.43dBPSNR 27.46dB(a) BicubicPSNR 30.12dB(b) SRCNNPSNR 23.91dB(c) BNPSNR 30.19dB(d) DN*Figure 4: Comparisons at a magnification factor of 4.correlation coefficient (Corr) and mutual information (MI). It is evident that both and L1 encouragesindependence of the learned linear filters.There are several factors that could explain the improvement in performance. As mentioned above,adding the L1 regularizer on the activations encourages the filter responses to be less correlated.This can increase the robustness of the variance estimate in the normalizer and lead to an improvedscaling of the responses to a good regime. Furthermore, adding the smoother to the denominatorin the normalizer can be seen as implicitly injecting zero mean noise on the activations. Whilenoise injection would not change the mean, it does add a term to the variance of the data, which isrepresented by 2. This term also makes the normalization equation invertible. While dividing bythe standard deviation decreases the degrees of freedom in the data, the smoothed normalizationequation is fully information preserving. Finally, DN type operations have been shown to decreasethe redundancy of filter responses to natural images and sound (Schwartz & Simoncelli, 2001; Sinz &Bethge, 2008; Lyu & Simoncelli, 2008). In combination with the L1 regularizer this could lead to amore independent representation of the data and thereby increase the performance of the network.5 C ONCLUSIONSWe have proposed a unified view of normalization techniques which contains batch and layernormalization as special cases. We have shown that when combined with a sparse regularizer onthe activations, our framework has significant benefits over standard normalization techniques. Wehave demonstrated this in the context of both convolutional neural nets as well as recurrent neuralnetworks. In the future we plan to explore other regularization techniques such as group sparsity. Wealso plan to conduct a more in-depth analysis of the effects of normalization on the correlations ofthe learned representations.10Published as a conference paper at ICLR 2017Table 7: Comparison of standard batch and layer normalation (BN and LN) models, to those with only L1regularizer (+L1), only the smoothing term (-s), and with both (*). We also compare divisive normalizationwith both (DN*), versus with only the smoothing term (DN).Model CIFAR-10 CIFAR-100 LSTM Tanh RNN ReLU RNNBaseline 0.7565 0.4409 115.720 149.357 147.630Baseline +L1 0.7839 0.4517 111.885 143.965 148.572BN 0.7807 0.4814 123.245 148.052 164.977BN +L1 0.8067 0.5100 123.736 152.777 166.658BN-s 0.8017 0.5005 123.243 131.719 139.159BN* 0.8179 0.5156 116.920 129.155 138.947LN 0.7211 0.4249 119.247 154.324 149.128LN +L1 0.7994 0.4990 116.964 152.100 147.937LN-s 0.8083 0.4863 102.492 133.812 118.786LN* 0.8091 0.4957 101.725 129.823 116.609DN 0.8058 0.4892 103.714 132.143 118.789DN* 0.8122 0.5066 102.238 123.652 117.868BaselineCorr. 0.19MI 0.37BNCorr. 0.43MI 1.20BN +L1Corr. 0.17MI 0.66BN-SCorr. 0.23MI 0.80BN*Corr. 
0.17MI 0.66LNCorr. 0.55MI 1.41LN +L1Corr. 0.17MI 0.67LN-SCorr. 0.20MI 0.74LN*Corr. 0.16MI 0.64DNCorr. 0.21MI 0.81DN*Corr. 0.20MI 0.73Figure 5: First layer CNN pre-normalized activation joint histogramAcknowledgements RL is supported by Connaught International Scholarships. FS would like tothank Edgar Y . Walker, Shuang Li, Andreas Tolias and Alex Ecker for helpful discussions. Supportedby the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/InteriorBusiness Center (DoI/IBC) contract number D16PC00003. The U.S. Government is authorized toreproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotationthereon. Disclaimer: The views and conclusions contained herein are those of the authors and shouldnot be interpreted as necessarily representing the official policies or endorsements, either expressedor implied, of IARPA, DoI/IBC, or the U.S. Government.11Published as a conference paper at ICLR 2017REFERENCESSparse coding via thresholding and local competition in neural circuits. Neural Computation , 20(10):2526–63, 2008. ISSN 08997667. doi: 10.1162/neco.2008.03-07-486.Abadi, Mart ́ın, Barham, Paul, Chen, Jianmin, Chen, Zhifeng, Davis, Andy, Dean, Jeffrey, Devin,Matthieu, Ghemawat, Sanjay, Irving, Geoffrey, Isard, Michael, Kudlur, Manjunath, Levenberg,Josh, Monga, Rajat, Moore, Sherry, Murray, Derek Gordon, Steiner, Benoit, Tucker, Paul A.,Vasudevan, Vijay, Warden, Pete, Wicke, Martin, Yu, Yuan, and Zhang, Xiaoqiang. Tensorflow: Asystem for large-scale machine learning. CoRR , abs/1605.08695, 2016.Ba, Jimmy Lei, Kiros, Jamie Ryan, and Hinton, Geoffrey E. Layer normalization. CoRR ,abs/1607.06450, 2016.Ball ́e, Johannes, Laparra, Valero, and Simoncelli, Eero P. Density modeling of images using ageneralized normalization transformation. ICLR , 2016.Beck, J. M., Latham, P. E., and Pouget, A. Marginalization in Neural Circuits with DivisiveNormalization. The Journal of neuroscience : the official journal of the Society for Neuroscience ,31(43):15310–9, oct 2011. ISSN 1529-2401. doi: 10.1523/JNEUROSCI.1706-11.2011.Bevilacqua, Marco, Roumy, Aline, Guillemot, Christine, and Morel, Marie-Line Alberi. Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In BMVC ,2012.Bonds, A. B. Role of Inhibition in the Specification of Orientation Selectivity of Cells in the CatStriate Cortex. Visual Neuroscience , 2(01):41–55, 1989.Busse, L., Wade, A. R., and Carandini, M. Representation of Concurrent Stimuli by PopulationActivity in Visual Cortex. Neuron , 64(6):931–942, dec 2009. ISSN 0896-6273. doi: 10.1016/j.neuron.2009.11.004.Carandini, M. and Heeger, D. J. Normalization as a canonical neural computation. Nature reviews.Neuroscience , 13(1):51–62, nov 2012. ISSN 1471-0048. doi: 10.1038/nrn3136.Coen-Cagli, R., Kohn, A., and Schwartz, O. Flexible gating of contextual influences in natural vision.Nature Neuroscience , 18(11):1648–1655, 2015. ISSN 1097-6256. doi: 10.1038/nn.4128.Cogswell, Michael, Ahmed, Faruk, Girshick, Ross, Zitnick, Larry, and Batra, Dhruv. Reducingoverfitting in deep networks by decorrelating representations. ICLR , 2015.Cooijmans, Tim, Ballas, Nicolas, Laurent, C ́esar, and Courville, Aaron. Recurrent batch normaliza-tion. CoRR , abs/1603.09025, 2016.Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, and Fei-Fei, Li. Imagenet: A large-scalehierarchical image database. In CVPR , 2009.Dong, Chao, Loy, Chen Change, He, Kaiming, and Tang, Xiaoou. 
Image super-resolution using deepconvolutional networks. TPAMI , 38(2):295–307, 2016.Froudarakis, Emmanouil, Berens, Philipp, Ecker, Alexander S, Cotton, R James, Sinz, Fabian H,Yatsenko, Dimitri, Saggau, Peter, Bethge, Matthias, and Tolias, Andreas S. Population code inmouse V1 facilitates readout of natural scenes through increased sparseness. Nature neuroscience ,17(6):851–7, apr 2014. ISSN 1546-1726. doi: 10.1038/nn.3707.Gatys, Leon A., Ecker, Alexander S., and Bethge, Matthias. Image style transfer using convolutionalneural networks. In CVPR , 2016.Glorot, Xavier, Bordes, Antoine, and Bengio, Yoshua. Deep sparse rectifier neural networks. InAISTATS , 2011.Goodfellow, Ian, Bengio, Yoshua, and Courville, Aaron. Deep learning. Book in preparation for MITPress, 2016.12Published as a conference paper at ICLR 2017He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for imagerecognition. In CVPR , 2016.Heeger, D. J. Normalization of cell responses in cat striate cortex. Vis Neurosci , 9(2):181–197, 1992.ISSN 09525238.Higgins, I., Matthey, L., Glorot, X., Pal, A., Uria, B., Blundell, C., Mohamed, S., and Lerchner, A.Early Visual Concept Learning with Unsupervised Deep Learning. CoRR , abs/1606.05579, 2016.Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training byreducing internal covariate shift. In ICML , 2015.Jarrett, K., Kavukcuoglu, K., Ranzato, M. A., and LeCun, Y . What is the best multi-stage architecturefor object recognition? ICCV , 2009.Kavukcuoglu, K., Ranzato, M.’A., Fergus, R., and LeCun, Y . Learning invariant features throughtopographic filter maps. In CVPR Workshops , 2009.Krizhevsky, A., Sutskever, I., and Hinton, G. E. ImageNet Classification with Deep ConvolutionalNeural Networks. NIPS , 2012.Laurent, C ́esar, Pereyra, Gabriel, Brakel, Phil ́emon, Zhang, Ying, and Bengio, Yoshua. Batchnormalized recurrent neural networks. arXiv preprint arXiv:1510.01378 , 2015.Le, Quoc V . Building high-level features using large scale unsupervised learning. In 2013 IEEEinternational conference on acoustics, speech and signal processing , pp. 8595–8598. IEEE, 2013.Liao, Q. and Poggio, T. Bridging the Gaps Between Residual Learning, Recurrent Neural Networksand Visual Cortex. CoRR , abs/1604.03640, 2016.Liao, Qianli, Kawaguchi, Kenji, and Poggio, Tomaso. Streaming Normalization: Towards Simplerand More Biologically-plausible Normalizations for Online and Recurrent Learning. CoRR ,abs/1610.06160, 2016a.Liao, Renjie, Schwing, Alexander, Zemel, Richard, and Urtasun, Raquel. Learning deep parsimoniousrepresentations. NIPS , 2016b.Lyu, Siwei and Simoncelli, Eero P. Reducing statistical dependencies in natural signals using radialGaussianization. NIPS , 2008.Malo, J., Epifanio, I., Navarro, R., and Simoncelli, E. P. Nonlinear image representation for efficientperceptual coding. TIP, 15(1):68–80, 2006.Martin, David, Fowlkes, Charless, Tal, Doron, and Malik, Jitendra. A database of human segmentednatural images and its application to evaluating segmentation algorithms and measuring ecologicalstatistics. In ICCV , 2001.Olsen, S. R, Bhandawat, V ., and Wilson, R. I. Divisive Normalization in Olfactory Population Codes.Neuron , 66(2):287–299, 2010. ISSN 10974199. doi: 10.1016/j.neuron.2010.04.009.Pinto, N., Cox, D. D., and DiCarlo, J. J. Why is Real-World Visual Object Recognition Hard? PLoSComput Biol , 4(1):e27, jan 2008. doi: 10.1371/journal.pcbi.0040027.Reynolds, J. H. and Heeger, D. J. The normalization model of attention. 
Neuron , 61(2):168–85, jan2009. ISSN 1097-4199. doi: 10.1016/j.neuron.2009.01.002.Ringach, D. L. Population coding under normalization. Vision Research , 50(22):2223–2232, 2009.ISSN 18785646. doi: 10.1016/j.visres.2009.12.007.Salimans, Tim and Kingma, Diederik P. Weight normalization: A simple reparameterization toaccelerate training of deep neural networks. In NIPS , 2016.Scardapane, S., Comminiello, D., Hussain, A., and Uncin, A. Group sparse regularization for deepneural networks. CoRR , abs/1607.00485, 2016.13Published as a conference paper at ICLR 2017Schwartz, O. and Simoncelli, E. P. Natural signal statistics and sensory gain control. Nat Neurosci , 4(8):819–825, 2001. ISSN 1097-6256. doi: 10.1038/90526.Schwartz, O., J., Sejnowski T., and P., Dayan. Perceptual organization in the tilt illusion. Journal ofVision , 9(4):1–20, apr 2009. ISSN 1534-7362.Sermanet, P., Chintala, S., and LeCun, Y . Convolutional neural networks applied to house numbersdigit classification. Proceedings of International Conference on Pattern Recognition ICPR12 ,(Icpr):10–13, 2012. ISSN 1051-4651. doi: 10.0/Linux-x86 64.Simoncelli, E. P. and Heeger, D. J. A model of neuronal responses in visual area MT. Vision Research ,38(5):743–761, 1998.Sinz, Fabian and Bethge, Matthias. Temporal Adaptation Enhances Efficient Contrast Gain Controlon Natural Images. PLoS Computational Biology , 9(1):e1002889, jan 2013. ISSN 1553734X.Sinz, Fabian H and Bethge, Matthias. The Conjoint Effect of Divisive Normalization and OrientationSelectivity on Redundancy Reduction. In NIPS , 2008.Srivastava, Nitish, Hinton, Geoffrey E, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan.Dropout: a simple way to prevent neural networks from overfitting. JMLR , 15(1):1929–1958,2014.Timofte, Radu, De Smet, Vincent, and Van Gool, Luc. Anchored neighborhood regression for fastexample-based super-resolution. In ICCV , 2013.Ulyanov, Dmitry, Vedaldi, Andrea, and Lempitsky, Victor S. Instance normalization: The missingingredient for fast stylization. CoRR , abs/1607.08022, 2016.Wang, Zhou, Bovik, Alan C, Sheikh, Hamid R, and Simoncelli, Eero P. Image quality assessment:from error visibility to structural similarity. TIP, 13(4):600–612, 2004.Zaremba, Wojciech, Sutskever, Ilya, and Vinyals, Oriol. Recurrent neural network regularization.CoRR , abs/1409.2329, 2014.Zeyde, Roman, Elad, Michael, and Protter, Matan. On single image scale-up using sparse-representations. In International conference on curves and surfaces , pp. 711–730. Springer,2010.14Published as a conference paper at ICLR 2017A E FFECT OF SIGMA AND L1ONCIFAR-10/100 VALIDATION SETWe plot the effect of and L1 regularization on the validation performance in Figure 6. While sigmamakes the most contributions to the improvement, L1 also provides much gain for the original versionof LN and BN.101100Sigma0.680.700.720.740.760.780.800.82CIFAR-10BaselineBNBN_sLNLN_sDN(a)101100Sigma0.380.400.420.440.460.480.50CIFAR-100BaselineBNBN_sLNLN_sDN (b)104103102L10.700.720.740.760.780.800.82CIFAR-10Baseline +L1BN +L1BN*LN +L1LN*DN* (c)104103102L10.400.420.440.460.480.50CIFAR-100Baseline +L1BN +L1BN*LN +L1LN*DN* (d)Figure 6: Validation accuracy on CIFAR-10/100 showing effect of sigma constant (a, b) and L1 regularization(c, d) on BN, LN, and DNB LSTM I MPLEMENTATION DETAILSIn LSTM experiments, we found that have an individual normalizer for each non-linearity (sigmoidand tanh) helps the performance for both LN and DN. Eq. 
12-14 are the standard LSTM equations,and letNbe the normalizer function, our new normalizer is replacing the nonlinearity with Eq. 15-16.This modification can also be thought as combining normalization and activation as a single activationfunction.This is different from the released implementation of LN and BN in LSTM, which separatelynormalized the concatenated vector Whht1andWxxt. For all LN* and DN experiments we choosethis new formulation, whereas LN experiments are consistent with the released version.0B@ftitotgt1CA=Whht1+Wxxt+b (12)ct=(ft)ct1+(it)tanh( gt) (13)ht=(ot)tanh( ct) (14)(x) =(N(x)) (15)tanh(x) = tanh( N(x)) (16)C M ORE RESULTS ON IMAGE SUPER -RESOLUTIONWe include results on another standard dataset Set5 Bevilacqua et al. (2012) in Table 8 and showmore visual results in Fig. 7.15Published as a conference paper at ICLR 2017Table 8: Average test results of PSNR and SSIM on Set5 Dataset.Model PSNR (x3) SSIM (x3) PSNR (x4) SSIM (x4)Bicubic 30.41 0.8678 28.44 0.8097A+ 32.59 0.9088 30.28 0.8603SRCNN 32.83 0.9087 30.52 0.8621BN 22.85 0.8027 20.71 0.7623DN* 32.83 0.9106 30.62 0.8665PSNR 21.69dB PSNR 22.62dB PSNR 20.06dB PSNR 22.69dBPSNR 31.55dB(a) BicubicPSNR 32.29dB(b) SRCNNPSNR 19.39dB(c) BNPSNR 32.31dB(d) DN*Figure 7: Comparisons at a magnification factor of 4.16
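As a companion to the OCR'd appendix just above, the following is a minimal NumPy sketch of one LSTM step with the normalizer N fused into each nonlinearity, in the spirit of Eqs. (12)-(16): sigma(x) becomes sigma(N(x)) and tanh(x) becomes tanh(N(x)). This reflects my reading of those equations rather than the authors' released code; the function names, the [f, i, o, g] ordering of the stacked gate weights, and the choice to also normalize the cell state inside the output tanh are assumptions.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def normalized_lstm_step(x_t, h_prev, c_prev, Wx, Wh, b, normalize):
    # One LSTM step with a normalizer fused into every nonlinearity,
    # following my reading of Eqs. (12)-(16) in the appendix above.
    #   x_t       : (batch, in_dim)  input at time t
    #   h_prev    : (batch, hid)     previous hidden state
    #   c_prev    : (batch, hid)     previous cell state
    #   Wx        : (in_dim, 4*hid)  input weights, gates stacked as [f, i, o, g] (assumed order)
    #   Wh        : (hid, 4*hid)     recurrent weights, same stacking
    #   b         : (4*hid,)         bias
    #   normalize : callable applied to each pre-activation, e.g. a layer or divisive normalizer
    pre = h_prev @ Wh + x_t @ Wx + b           # Eq. (12): joint linear transform
    f, i, o, g = np.split(pre, 4, axis=1)
    f = sigmoid(normalize(f))                  # Eq. (15): sigma(x) -> sigma(N(x))
    i = sigmoid(normalize(i))
    o = sigmoid(normalize(o))
    g = np.tanh(normalize(g))                  # Eq. (16): tanh(x) -> tanh(N(x))
    c_t = f * c_prev + i * g                   # Eq. (13): cell update
    h_t = o * np.tanh(normalize(c_t))          # Eq. (14); normalizing c_t here is my assumption
    return h_t, c_t

A smoothed layer-norm-style normalizer with sigma = 1 could be passed as, for instance, normalize = lambda a: (a - a.mean(axis=1, keepdims=True)) / np.sqrt(1.0 + a.var(axis=1, keepdims=True)); the divisive variant described in the paper would instead compute the mean and variance over a local window of hidden units.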
rJ3Df4vVe
rk5upnsxe
ICLR.cc/2017/conference/-/paper584/official/review
{"title": "Review of \"NORMALIZING THE NORMALIZERS: COMPARING AND EXTENDING NETWORK NORMALIZATION SCHEMES\"", "rating": "7: Good paper, accept", "review": "The authors present a unified framework for various divisive normalization schemes, and then show that a somewhat novel version of normalization does somewhat better on several tasks than some mid-strength baselines. \n\nPros:\n\n* It has seemed for a while that there are a bunch of different normalization methods out there, of varying importance in varying applications, so having a standardized framework for them all, and evaluating them carefully and systematically, is a very useful contribution.\n* The paper is clearly written. \n* From an architectural standpoint, the actual comparisons seem well motivated. (For instance, I'm glad they tried DN* and BN* -- if they hadn't tried those, I would have wanted them too.) \n\nCons:\n\n* I'm not really sure what the difference is between their new DN method and standard cross-channel local contrast normalization. (Oh, actually -- looking at the other reviews, everyone else seems to have noticed this too. I'll not beat a dead horse about this any further.)\n\n* I'm nervous that the conclusions that they state might not hold on larger, stronger tasks, like ImageNet, and with larger deeper models. I myself have found that while with smaller models on simpler tasks (e.g. Caltech 101), contrast normalization was really useful, that it became much less useful for larger architectures on larger tasks. In fact, if I recall correctly, the original AlexNet model had a type of cross-unit normalization in it, but this was dispensed with in more recent models (I think after Zeiler and Fergus 2013) largely because it didn't contribute that much to performance but was somewhat expensive computationally. Of course, batch normalization methods have definitely been shown to contribute performance on large problems with large models, but I think it would be really important to show the same with the DN methods here, before any definite conclusion could be reached. \n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Normalizing the Normalizers: Comparing and Extending Network Normalization Schemes
["Mengye Ren", "Renjie Liao", "Raquel Urtasun", "Fabian H. Sinz", "Richard S. Zemel"]
Normalization techniques have only recently begun to be exploited in supervised learning tasks. Batch normalization exploits mini-batch statistics to normalize the activations. This was shown to speed up training and result in better models. However its success has been very limited when dealing with recurrent neural networks. On the other hand, layer normalization normalizes the activations across all activities within a layer. This was shown to work well in the recurrent setting. In this paper we propose a unified view of normalization techniques, as forms of divisive normalization, which includes layer and batch normalization as special cases. Our second contribution is the finding that a small modification to these normalization schemes, in conjunction with a sparse regularizer on the activations, leads to significant benefits over standard normalization techniques. We demonstrate the effectiveness of our unified divisive normalization framework in the context of convolutional neural nets and recurrent neural networks, showing improvements over baselines in image classification, language modeling as well as super-resolution.
["activations", "normalizers", "comparing", "normalization techniques", "batch normalization", "recurrent neural networks", "layer", "network normalization schemes", "network normalization", "supervised learning tasks"]
https://openreview.net/forum?id=rk5upnsxe
https://openreview.net/pdf?id=rk5upnsxe
https://openreview.net/forum?id=rk5upnsxe&noteId=rJ3Df4vVe
Published as a conference paper at ICLR 2017NORMALIZING THE NORMALIZERS : COMPARING ANDEXTENDING NETWORK NORMALIZATION SCHEMESMengye Ren y, Renjie Liaoy, Raquel Urtasuny, Fabian H. Sinzz, Richard S. Zemely>yUniversity of Toronto, Toronto ON, CANADAzBaylor College of Medicine, Houston TX, USA>Canadian Institute for Advanced Research (CIFAR)fmren, rjliao, urtasun g@cs.toronto.edufabian.sinz@epagoge.de, zemel@cs.toronto.eduABSTRACTNormalization techniques have only recently begun to be exploited in supervisedlearning tasks. Batch normalization exploits mini-batch statistics to normalizethe activations. This was shown to speed up training and result in better models.However its success has been very limited when dealing with recurrent neuralnetworks. On the other hand, layer normalization normalizes the activationsacross all activities within a layer. This was shown to work well in the recurrentsetting. In this paper we propose a unified view of normalization techniques, asforms of divisive normalization, which includes layer and batch normalization asspecial cases. Our second contribution is the finding that a small modificationto these normalization schemes, in conjunction with a sparse regularizer on theactivations, leads to significant benefits over standard normalization techniques.We demonstrate the effectiveness of our unified divisive normalization frameworkin the context of convolutional neural nets and recurrent neural networks, showingimprovements over baselines in image classification, language modeling as well assuper-resolution.1 I NTRODUCTIONStandard deep neural networks are difficult to train. Even with non-saturating activation functionssuch as ReLUs (Krizhevsky et al., 2012), gradient vanishing or explosion can still occur, sincethe Jacobian gets multiplied by the input activation of every layer. In AlexNet (Krizhevsky et al.,2012), for instance, the intermediate activations can differ by several orders of magnitude. Tuninghyperparameters governing weight initialization, learning rates, and various forms of regularizationthus become crucial in optimizing performance.In current neural networks, normalization abounds. One technique that has rapidly become a standardis batch normalization (BN) in which the activations are normalized by the mean and standarddeviation of the training mini-batch (Ioffe & Szegedy, 2015). At inference time, the activations arenormalized by the mean and standard deviation of the full dataset. A more recent variant, layernormalization (LN), utilizes the combined activities of all units within a layer as the normalizer (Baet al., 2016). Both of these methods have been shown to ameliorate training difficulties caused bypoor initialization, and help gradient flow in deeper models.A less-explored form of normalization is divisive normalization (DN) (Heeger, 1992), in whicha neuron’s activity is normalized by its neighbors within a layer. 
This type of normalization isa well established canonical computation of the brain (Carandini & Heeger, 2012) and has beenextensively studied in computational neuroscience and natural image modelling (see Section 2).However, with few exceptions (Jarrett et al., 2009; Krizhevsky et al., 2012) it has received littleattention in conventional supervised deep learning.Here, we provide a unifying view of the different normalization approaches by characterizing themas the same transformation but along different dimensions of a tensor, including normalization acrossindicates equal contribution1Published as a conference paper at ICLR 2017examples, layers in the network, filters in a layer, or instances of a filter response. We explorethe effect of these varieties of normalizations in conjunction with regularization, on the predictionperformance compared to baseline models. The paper thus provides the first study of divisivenormalization in a range of neural network architectures, including convolutional neural networks(CNNs) and recurrent neural networks (RNNs), and tasks such as image classification, languagemodeling and image super-resolution. We find that DN can achieve results on par with BN in CNNnetworks and out-performs it in RNNs and super-resolution, without having to store batch statistics.We show that casting LN as a form of DN by incorporating a smoothing parameter leads to significantgains, in both CNNs and RNNs. We also find advantages in performance and stability by being ableto drive learning with higher learning rate in RNNs using DN. Finally, we demonstrate that adding anL1 regularizer on the activations before normalization is beneficial for all forms of normalization.2 R ELATED WORKIn this section we first review related work on normalization, followed by a brief description ofregularization in neural networks.2.1 N ORMALIZATIONNormalization of data prior to training has a long history in machine learning. For instance, localcontrast normalization used to be a standard effective tool in vision problems (Pinto et al., 2008;Jarrett et al., 2009; Sermanet et al., 2012; Le, 2013). However, until recently, normalization wasusually not part of the machine learning algorithm itself. Two notable exceptions are the originalAlexNet by Krizhevsky et al. (2012) which includes a divisive normalization step over a subset offeatures after ReLU at each pixel location, and the work by Jarrett et al. (2009) who demonstrated thata combination of nonlinearities, normalization and pooling improves object recognition in two-stagenetworks.Recently Ioffe & Szegedy (2015) demonstrated that standardizing the activations of the summedinputs of neurons over training batches can substantially decrease training time in deep neuralnetworks. To avoid covariate shift, where the weight gradients in one layer are highly dependenton previous layer outputs, Batch Normalization (BN) rescales the summed inputs according to theirvariances under the distribution of the mini-batch data. Specifically, if zj;ndenotes the activation ofa neuronjon example n, andB(n)denotes the mini-batch of examples that contains n, then BNcomputes an affine function of the activations standardized over each mini-batch:~zn;j=zn;jE[zj]q1jB(n)j(zn;jE[zj])2+E[zj] =1jB(n)jXm2B(n)zm;jHowever, training performance in Batch Normalization strongly depends on the quality of theaquired statistics and, therefore, the size of the mini-batch. 
Hence, Batch Normalization is harderto apply in cases for which the batch sizes are small, such as online learning or data parallelism.While classification networks can usually employ relatively larger mini-batches, other applicationssuch as image segmentation with convolutional nets use smaller batches and suffer from degradedperformance. Moreover, application to recurrent neural networks (RNNs) is not straightforward andleads to poor performance (Laurent et al., 2015).Several approaches have been proposed to make Batch Normalization applicable to RNNs. Cooijmanset al. (2016) and Liao & Poggio (2016) collect separate batch statistics for each time step. However,neither of this techniques address the problem of small batch sizes and it is unclear how to generalizethem to unseen time steps.More recently, Ba et al. (2016) proposed Layer Normalization (LN), where the activations arenormalized across all summed inputs within a layer instead of within a batch:~zn;j=zn;jE[zn]q1jL(j)j(zn;jE[zn])2+E[zn] =1jL(j)jXk2L(j)zn;kwhereL(j)contains all of the units in the same layer as j. While promising results have been shownon RNN benchmarks, direct application of layer normalization to convolutional layers often leads to2Published as a conference paper at ICLR 2017a degradation of performance. The authors hypothesize that since the statistics in convolutional layerscan vary quite a bit spatially, normalization with statistics from an entire layer might be suboptimal.Ulyanov et al. (2016) proposed to normalize each example on spatial dimensions but not on channeldimension, and was shown to be effective on image style transfer applications (Gatys et al., 2016).Liao et al. (2016a) proposed to accumulate the normalization statistics over the entire training phase,and showed that this can speed up training in recurrent and online learning without a deterioratingeffect on the performance. Since gradients cannot be backpropagated through this normalizationoperation, the authors use running statistics of the gradients instead.Exploring the normalization of weights instead of activations, Salimans & Kingma (2016) proposed areparametrization of the weights into a scale independent representation and demonstrated that thiscan speed up training time.Divisive Normalization (DN) on the other hand modulates the neural activity by the activity of a poolof neighboring neurons (Heeger, 1992; Bonds, 1989). DN is one of the most well studied and widelyfound transformations in real neural systems, and thus has been called a canonical computation ofthe brain (Carandini & Heeger, 2012). 
While the exact form of the transformation can differ, allformulations model the response of a neuron ~zjas a ratio between the acitivity in a summation fieldAj, and a norm-like function of the suppression field Bj~zj=Pzi2Ajuizi2+Pzk2Bjwkzpk1p; (1)wherefuigare the summation weights and fwkgthe suppression weights.Previous theoretical studies have outlined several potential computational roles for divisive normal-ization such as sensitivity maximization (Carandini & Heeger, 2012), invariant coding (Olsen et al.,2010), density modelling (Ball ́e et al., 2016), image compression (Malo et al., 2006), distributedneural representations (Simoncelli & Heeger, 1998), stimulus decoding (Ringach, 2009; Froudarakiset al., 2014), winner-take-all mechanisms (Busse et al., 2009), attention (Reynolds & Heeger, 2009),redundancy reduction (Schwartz & Simoncelli, 2001; Sinz & Bethge, 2008; Lyu & Simoncelli, 2008;Sinz & Bethge, 2013), marginalization in neural probabilistic population codes (Beck et al., 2011),and contextual modulations in neural populations and perception (Coen-Cagli et al., 2015; Schwartzet al., 2009).2.2 R EGULARIZATIONVarious regularization techniques have been applied to neural networks for the purpose of improvinggeneralization and reduce overfitting. They can be roughly divided into two categories, depending onwhether they regularize the weights or the activations.Regularization on Weights: The most common regularizer on weights is weight decay which justamounts to using the L2 norm squared of the weight vector. An L1 regularizer (Goodfellow et al.,2016) on the weights can also be adopted to push the learned weights to become sparse. Scardapaneet al. (2016) investigated mixed norms in order to promote group sparsity.Regularization on Activations: Sparsity or group sparsity regularizers on the activations haveshown to be effective in the past (Roz, 2008; Kavukcuoglu et al., 2009) and several regularizers havebeen proposed that act directly on the neural activations. Glorot et al. (2011) add a sparse regularizeron the activations after ReLU to encourage sparse representations. Dropout developed by Srivastavaet al. (2014) applies random masks to the activations in order to discourage them to co-adapt. DeCovproposed by Cogswell et al. (2015) tries to minimize the off-diagonal terms of the sample covariancematrix of activations, thus encouraging the activations to be as decorrelated as possible. Liao et al.(2016b) utilize a clustering-based regularizer to encourage the representations to be compact.3Published as a conference paper at ICLR 2017(a) Batch-Norm(b) Layer-Norm(c) Div-NormFigure 1: Illustration of different normalization schemes, in a CNN. Each HW-sized feature map is depictedas a rectangle; overlays depict instances in the set of Cfilters; and two examples from a mini-batch of size Nare shown, one above the other. The colors show the summation/suppression fields of each scheme.3 A U NIFIED FRAMEWORK FOR NORMALIZING NEURAL NETSWe first compare the three existing forms of normalization, and show that we can modify batchnormalization (BN) and layer normalization (LN) in small ways to make them have a form thatmatches divisive normalization (DN). We present a general formulation of normalization, whereexisting normalizations involve alternative schemes of accumulating information. 
Finally, we proposea regularization term that can be optimized jointly with these normalization schemes to encouragedecorrelation and/or improve generalization performance.3.1 G ENERAL FORM OF NORMALIZATIONWithout loss of generality, we denote the hidden input activation of one arbitrary layer in a deepneural network as z2RNL. HereNis the mini-batch size. In the case of a CNN, L=HWC,whereH;W are the height and width of the convolutional feature map and Cis the number of filters.For an RNN or fully-connected layers of a neural net, Lis the number of hidden units.Different normalization methods gather statistics from different ranges of the tensor and then performnormalization. Consider the following general form:zn;j=Xiwi;jxn;i+bj (2)vn;j=zn;jEAn;j[z] (3)~zn;j=vn;jp2+EBn;j[v2](4)whereAjandBjare subsets of zandvrespectively.AandBin standard divisive normalizationare referred to as summation and suppression fields (Carandini & Heeger, 2012). One can cast eachnormalization scheme into this general formulation, where the schemes vary based on how theydefine these two fields. These definitions are specified in Table 1. Optional parameters andcanbe added in the form of j~zn;j+jto increase the degree of freedom.Fig. 1 shows a visualization of the normalization field in a 4-D ConvNet tensor setting. Divisivenormalization happens within a local spatial window of neurons across filter channels. Here we setd(;)to be the spatial L1distance.3.2 N EWMODEL COMPONENTSSmoothing the Normalizers: One obvious way in which the normalization schemes differ is interms of the information that they combine for normalizing the activations. A second more subtlebut important difference between standard BN and LN as opposed to DN is the smoothing term ,in the denominator of Eq. (1). This term allows some control of the bias of the variance estimation,effectively smoothing the estimate. This is beneficial because divisive normalization does not utilizeinformation from the mini-batch as in BN, and combines information from a smaller field than LN. A4Published as a conference paper at ICLR 2017Model Range Normalizer BiasBNAn;j=fzm;j:m2[1;N];j2[1;H][1;W]gBn;j=fvm;j:m2[1;N];j2[1;H][1;W]g= 0LNAn;j=fzn;i:i2[1;L]g B n;j=fvn;i:i2[1;L]g = 0DNAn;j=fzn;i:d(i;j)RAg B n;j=fvn;i:d(i;j)RBg0Table 1: Different choices of the summation and suppression fields AandB, as well as the constant inthe normalizer lead to known normalization schemes in neural networks. d(i;j)denotes an arbitrary distancebetween two hidden units iandj, andRdenotes the neighbourhood radius.3 2 1 0 1 2 3Input0123OuputReLUDN+ReLU/uni00A0sigma=4.0DN+ReLU/uni00A0sigma=2.0DN+ReLU/uni00A0sigma=1.0DN+ReLU/uni00A0sigma=0.50 1 2 3 4Input 101234Input 2ReLU0 1 2 3 4Input 101234Input 2DN+ReLU0.00.51.01.52.02.53.03.54.00.00.20.40.60.81.01.2Figure 2: Divisive normalization followed by ReLU can be viewed as a new activation function. Left: Effectof varyingin this activation function. Right: Two units affect each other’s activation in the DN+ReLUformulation.similar but different denominator bias term max(;c)appears in (Jarrett et al., 2009), which is activewhen the activation variance is small. However, the clipping function makes the transformation notinvertible, losing scale information.Moreover, if we take the nonlinear activation function after normalization into consideration, we findthatwill change the overall properties of the non-linearity. 
To illustrate this effect, we use a simple1-layer network which consists of: two input units, one divisive normalization operator, followed bya ReLU activation function. If we fix one input unit to be 0.5, varying the other one with differentvalues ofproduces different output curves (Fig. 2, left). These curves exhibit different non-linearproperties compared to the standard ReLU. Allowing the other input unit to vary as well results indifferent activation functions of the first unit depending on the activity of the second (Fig. 2, right).This illustrates potential benefits of including this smoothing term , as it effectively modulates therectified response to vary from a linear to a highly saturated response.In this paper we propose modifications of the standard BN and LN which borrow this additive term in the denominator from DN. We study the effect of incorporating this smoother in the respectivenormalization schemes below.L1 regularizer: Filter responses on lower layers in deep neural networks can be quite correlatedwhich might impair the estimate of the variance in the normalizer. More independent representationshelp disentangle latent factors and boost the networks performance (Higgins et al., 2016). Empirically,we found that putting a sparse (L1) regularizerLL1=1NLXn;jjvn;jj (5)on the centered activations vn;jhelps decorrelate the filter responses (Fig. 5). Here, Nis the batchsize andLis the number of hidden units, and LL1is the regularization loss which is added to thetraining loss.A possible explanation for this effect is that the L1 regularizer might have a similar effect as maximumlikelihood estimation of an independent Laplace distribution. To see that, let pv(v)/exp (kvk1)andx=W1v, withWa full rank invertible matrix. Under this model px(x) =pv(Wx)jdetWj.5Published as a conference paper at ICLR 2017Then, minimization of the L1 norm of the activations under the volume-conserving constraint detA=const. corresponds to maximum likelihood on that model, which would encourage decorrelatedresponses. We do not enforce such a constraint, and the filter matrix might even not be invertible.However, the supervised loss function of the network benefits from having diverse non-zero filters.This encourages the network to not collapse filters along the same direction or put them to zero, andmight act as a relaxation of the volume-conserving constraint.3.3 S UMMARY OF NEW MODELSDN and DN*: We propose DN as a new local normalization scheme in neural networks. Inconvolutional layers, it operates on a local spatial window across filter channels, and in fully connectedlayers it operates on a slice of a hidden state vector. Additionally, DN* has a L1 regularizer on thepre-normalization centered activation ( vn;j).BN-s and BN*: To compare with DN and DN*, we also propose modifications to original BN: wedenote BN-s with 2in the denominator’s square root, and BN* with the L1 regularizer on top ofBN-s.LN-s and LN*: We apply the same changes as from BN to BN-s and BN*. In order to narrow thedifferences in the normalization schemes down to a few parameter choices, we additionally removethe affine transformation parameters andfrom LN such that the difference between LN* andDN* is only the size of the normalization field. 
andcan really be seen as a separate layer and inpractice we find that they do not improve the performance in the presence of 2.4 E XPERIMENTSWe evaluate the normalization schemes on three different tasks:CNN image classification: We apply different normalizations on CNNs trained on theCIFAR-10/100 datasets for image recognition, each of which contains 50,000 trainingimages and 10,000 test images. Each image is of size 32 323 and has been labeled anobject class out of 10 or 100 total number of classes.RNN language modeling: We apply different normalizations on RNNs trained on thePenn Treebank dataset for language modeling, containing 42,068 training sentences, 3,370validation sentences, and 3,761 test sentences.CNN image super-resolution: We train a CNN on low resolution images and learn cascadesof non-linear filters to smooth the upsampled images. We report performance of trainedCNN on the standard Set 14 and Berkeley 200 dataset.For each model, we perform a grid search of three or four choices of each hyperparameter includingthe smoothing constant , and L1 regularization constant , and learning rate on the validation set.4.1 CIFAR E XPERIMENTSWe used the standard CNN model provided in the Caffe library. The architecture is summarized inTable 2. We apply normalization before each ReLU function. We implement DN as a convolutionaloperator, fixing the local window size to 55,33,33for the three convolutional layers in allthe CIFAR experiments.We set the learning rate to 1e-3 and momentum 0.9 for all experiments. The learning rate schedule isset tof5K, 30K, 50Kgfor the baseline model and to f30K, 50K, 80Kgfor all other models. At everystage we multiply the learning rate by 0.1. Weights are randomly initialized from a zero-mean normaldistribution with standard deviation f1e-4, 1e-2, 1e-2gfor the convolutional layers, and f1e-1, 1e-1gfor fully connected layers. Input images are centered on the dataset image mean.Table 3 summarizes the test performances of BN*, LN* and DN*, compared to the performanceof a few baseline models and the standard batch and layer normalizations. We also add standardregularizers to the baseline model: L2 weight decay (WD) and dropout. Adding the smoothingconstant and L1 regularization consistently improves the classification performance, especially for6Published as a conference paper at ICLR 2017Table 2: CIFAR CNN specificationType Size Kernel Strideinput 32323 - -conv +relu 323232 55332 1max pool 161632 33 2conv +relu 161632 553232 1avg pool 8832 33 2conv +relu 8864 553264 1avg pool 4464 33 2fully conn. linear 64 - -fully conn. linear 10or100 - -Table 3: CIFAR-10/100 experimentsModel CIFAR-10 Acc. CIFAR-100 Acc.Baseline 0.7565 0.4409Baseline +WD +Dropout 0.7795 0.4179BN 0.7807 0.4814LN 0.7211 0.4249BN* 0.8179 0.5156LN* 0.8091 0.4957DN* 0.8122 0.5066the original LN. The modification of LN makes it now better than the original BN, and only slightlyworse than BN*. DN* achieves comparable performance to BN* on both datasets, but only relyingon a local neighborhood of hidden units.0 10 20Sigma0.00.20.40.60.81.0|x|CIFAR-10051015202530Layer Number0 10 20Sigma0.00.10.20.30.4|x|CIFAR-100051015202530Layer NumberFigure 3: Input scale (jxj) vs. learnedat each layer, color coded by thelayer number in ResNet-32, trainedon CIFAR-10 (left), and CIFAR-100(right).ResNet Experiments. Residual networks (ResNet) (Heet al., 2016), a type of CNN with residual connections be-tween layers, achieve impressive performance on many imageclassification benchmarks. 
The original architecture uses BNby default. If we remove BN, the architecture is very difficultto train or converges to a poor solution. We first reproduced theoriginal BN ResNet-32, obtaining 92.6% accuracy on CIFAR-10, and 69.8% on CIFAR-100. Our best DN model achieves91.3% and 66.6%, respectively. While this performance islower than the original BN-ResNet, there is certainly room toimprove as we have not performed any hyperparameter opti-mization. Importantly, the beneficial effects of sigma (2.5%gain on CIFAR-100) and the L1 regularizer (0.5%) are stillfound, even in the presence of other regularization techniquessuch as data augmentation and weight decay in the training.Since the number of sigma hyperparameters scales with thenumber of layers, we found that setting sigma as a learnableparameter for each layer helps the performance (1.3% gain onCIFAR-100). Note that training this parameter is not possiblein the formulation by Jarrett et al. (2009). The learned sigmashows a clear trend: it tends to decrease with depth, and in thelast convolution layer it approaches 0 (see Fig. 3).4.2 RNN EXPERIMENTSTo apply divisive normalization in fully connected layers ofRNNs, we consider a local neighborhood in the hidden state vector hjR:j+R, whereRis the radius7Published as a conference paper at ICLR 2017Table 4: PTB Word-level language modeling experimentsModel LSTM TanH RNN ReLU RNNBaseline 115.720 149.357 147.630BN 123.245 148.052 164.977LN 119.247 154.324 149.128BN* 116.920 129.155 138.947LN* 101.725 129.823 116.609DN* 102.238 123.652 117.868of the neighborhood. Although the hidden states are randomly initialized, this structure will imposelocal competition among the neighbors.vj=zj12R+ 1RXr=Rzj+r (6)~zj=vjq2+12R+1PRr=Rv2j+r(7)We follow Cooijmans et al. (2016)’s batch normalization implementation for RNNs: normalizersare separate for input transformation and hidden transformation. Let BN(),LN(),DN()beBatchNorm, LayerNorm and DivNorm, and gbe either tanh or ReLU.ht+1=g(Wxxt+Whht1+b) (8)h(BN)t+1=g(BN(Wxxt+bx) +BN(Whh(BN)t1+bh)) (9)h(LN)t+1=g(LN(Wxxt+Whh(LN)t1+b)) (10)h(DN )t+1=g(DN(Wxxt+Whh(DN )t1+b)) (11)Note that in recurrent BN, the additional parameters andare shared across timesteps whereas themoving averages of batch statistics are not shared. For the LSTM version, we followed the releasedimplementation from the authors of layer normalization1, and apply LN at the same places as BN andBN*, which is after the linear transformation of WxxandWhhindividually. For LN* and DN, wemodified the places of normalization to be at each non-linearity, instead of jointly with a concatenatedvector for different non-linearity. We found that this modification improves the performance andmakes the formulation clearer since normalization is always a combined operation with the activationfunction. We include details of the LSTM implementation in the Appendix.The RNN model is provided by the Tensorflow library (Abadi et al., 2016) and the LSTM version wasoriginally proposed in Zaremba et al. (2014). We used a two-layer stack-RNN of size 400 (vanillaRNN) or 200 (LSTM). Ris set to 60 (vanilla RNN) and 30 (LSTM). We tried both tanh and ReLU asthe activation function for the vanilla RNN. For unnormalized baselines and BN+ReLU, the initiallearning rate is set to 0.1 and decays by half every epoch, starting at the 5th epoch for a maximum of13 epochs. For the other normalized models, the initial learning rate is set to 1.0 while the schedule iskept the same. 
Standard stochastic gradient descent is used in all RNN experiments, with gradientclipping at 5.0.Table 4 shows the test set perplexity for LSTM models and vanilla models. Perplexity is defined asppl= exp(Pxlogp(x)). We find that BN and LN alone do not improve the final performancerelative to the baseline, but similar to what we see in the CNN experiments, our modified versionsBN* and LN* show significant improvements. BN* on RNN is outperformed by both LN* and DN.By applying our normalization, we can improve the vanilla RNN perplexity by 20%, comparable toan LSTM baseline with the same hidden dimension.1https://github.com/ryankiros/layer-norm8Published as a conference paper at ICLR 2017Table 5: Average test results of PSNR and SSIM on Set14 Dataset.Model PSNR (x3) SSIM (x3) PSNR (x4) SSIM (x4)Bicubic 27.54 0.7733 26.01 0.7018A+ 29.13 0.8188 27.32 0.7491SRCNN 29.35 0.8212 27.53 0.7512BN 22.31 0.7530 21.40 0.6851DN* 29.38 0.8229 27.64 0.7562Table 6: Average test results of PSNR and SSIM on BSD200 Dataset.Model PSNR (x3) SSIM (x3) PSNR (x4) SSIM (x4)Bicubic 27.19 0.7636 25.92 0.6952A+ 27.05 0.7945 25.51 0.7171SRCNN 28.42 0.8100 26.87 0.7378BN 21.89 0.7553 21.53 0.6741DN* 28.44 0.8110 26.96 0.74284.3 S UPER RESOLUTION EXPERIMENTSWe also evaluate DN on the low-level computer vision problem of single image super-resolution.We adopt the SRCNN model of Dong et al. (2016) as the baseline which consists of 3 convolutionallayers and 2 ReLUs. From bottom to top layers, the sizes of the filters are 9, 5, and 52. The numberof filters are 64, 32, and 1, respectively. All the filters are initialized with zero-mean Gaussian andstandard deviation 1e-3. Then we respectively apply batch normalization (BN) and our divisivenormalization with L1 regularization (DN*) to the convolutional feature maps before ReLUs. Weconstruct the training set in a similar manner as Dong et al. (2016) by randomly cropping 5 millionpatches (size 3333) from a subset of the ImageNet dataset of Deng et al. (2009). We only train ourmodel for 4 million iterations which is less than the one adopted by SRCNN, i.e., 15 million, as thegain of PSNR and SSIM by spending that long time is marginal.We report the average test results, utilizing the standard metrics PSNR and SSIM (Wang et al., 2004),on two standard test datasets Set14 (Zeyde et al., 2010) and BSD200 (Martin et al., 2001). Wecompare with two state-of-the-art single image super-resolution methods, A+ (Timofte et al., 2013)and SRCNN (Dong et al., 2016). All measures are computed on the Y channel of YCbCr color space.We also provide a visual comparison in Fig. 4.As show in Tables 5 and 6 DN* outperforms the strong competitor SRCNN, while BN does notperform well on this task. The reason may be that BN applies the same statistics to all patches ofone image which causes some overall intensity shift (see Figs. 4). From the visual comparisons, wecan see that our method not only enhances the resolution but also removes artifacts, e.g., the ringingeffect in Fig. 4.4.4 A BLATION STUDIES AND DISCUSSIONFinally, we investigated the differential effects of the 2term and the L1 regularizer on the perfor-mance. We ran ablation studies on CIFAR-10/100 as well as PTB experiments. The results are listedin Table 7.We find that adding the smoothing term 2and the L1 regularization consistently increases theperformance of the models. In the convolutional networks, we find that L1 and both have similareffects on the performance. L1 seems to be slightly more important. 
In recurrent networks, 2has amuch more dramatic effect on the performance than the L1 regularizer.Fig. 5 plots randomly sampled pairwise pre-normalization responses (after the linear transform)in the first layer at the same spatial location of the feature map, along with the average pair-wise2We use the setting of the best model out of all three SRCNN candidates.9Published as a conference paper at ICLR 2017PSNR 29.84dB PSNR 31.33dB PSNR 23.94dB PSNR 31.46dBPSNR 29.41dB PSNR 33.14dB PSNR 21.88dB PSNR 33.43dBPSNR 27.46dB(a) BicubicPSNR 30.12dB(b) SRCNNPSNR 23.91dB(c) BNPSNR 30.19dB(d) DN*Figure 4: Comparisons at a magnification factor of 4.correlation coefficient (Corr) and mutual information (MI). It is evident that both and L1 encouragesindependence of the learned linear filters.There are several factors that could explain the improvement in performance. As mentioned above,adding the L1 regularizer on the activations encourages the filter responses to be less correlated.This can increase the robustness of the variance estimate in the normalizer and lead to an improvedscaling of the responses to a good regime. Furthermore, adding the smoother to the denominatorin the normalizer can be seen as implicitly injecting zero mean noise on the activations. Whilenoise injection would not change the mean, it does add a term to the variance of the data, which isrepresented by 2. This term also makes the normalization equation invertible. While dividing bythe standard deviation decreases the degrees of freedom in the data, the smoothed normalizationequation is fully information preserving. Finally, DN type operations have been shown to decreasethe redundancy of filter responses to natural images and sound (Schwartz & Simoncelli, 2001; Sinz &Bethge, 2008; Lyu & Simoncelli, 2008). In combination with the L1 regularizer this could lead to amore independent representation of the data and thereby increase the performance of the network.5 C ONCLUSIONSWe have proposed a unified view of normalization techniques which contains batch and layernormalization as special cases. We have shown that when combined with a sparse regularizer onthe activations, our framework has significant benefits over standard normalization techniques. Wehave demonstrated this in the context of both convolutional neural nets as well as recurrent neuralnetworks. In the future we plan to explore other regularization techniques such as group sparsity. Wealso plan to conduct a more in-depth analysis of the effects of normalization on the correlations ofthe learned representations.10Published as a conference paper at ICLR 2017Table 7: Comparison of standard batch and layer normalation (BN and LN) models, to those with only L1regularizer (+L1), only the smoothing term (-s), and with both (*). We also compare divisive normalizationwith both (DN*), versus with only the smoothing term (DN).Model CIFAR-10 CIFAR-100 LSTM Tanh RNN ReLU RNNBaseline 0.7565 0.4409 115.720 149.357 147.630Baseline +L1 0.7839 0.4517 111.885 143.965 148.572BN 0.7807 0.4814 123.245 148.052 164.977BN +L1 0.8067 0.5100 123.736 152.777 166.658BN-s 0.8017 0.5005 123.243 131.719 139.159BN* 0.8179 0.5156 116.920 129.155 138.947LN 0.7211 0.4249 119.247 154.324 149.128LN +L1 0.7994 0.4990 116.964 152.100 147.937LN-s 0.8083 0.4863 102.492 133.812 118.786LN* 0.8091 0.4957 101.725 129.823 116.609DN 0.8058 0.4892 103.714 132.143 118.789DN* 0.8122 0.5066 102.238 123.652 117.868BaselineCorr. 0.19MI 0.37BNCorr. 0.43MI 1.20BN +L1Corr. 0.17MI 0.66BN-SCorr. 0.23MI 0.80BN*Corr. 
0.17MI 0.66LNCorr. 0.55MI 1.41LN +L1Corr. 0.17MI 0.67LN-SCorr. 0.20MI 0.74LN*Corr. 0.16MI 0.64DNCorr. 0.21MI 0.81DN*Corr. 0.20MI 0.73Figure 5: First layer CNN pre-normalized activation joint histogramAcknowledgements RL is supported by Connaught International Scholarships. FS would like tothank Edgar Y . Walker, Shuang Li, Andreas Tolias and Alex Ecker for helpful discussions. Supportedby the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/InteriorBusiness Center (DoI/IBC) contract number D16PC00003. The U.S. Government is authorized toreproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotationthereon. Disclaimer: The views and conclusions contained herein are those of the authors and shouldnot be interpreted as necessarily representing the official policies or endorsements, either expressedor implied, of IARPA, DoI/IBC, or the U.S. Government.11Published as a conference paper at ICLR 2017REFERENCESSparse coding via thresholding and local competition in neural circuits. Neural Computation , 20(10):2526–63, 2008. ISSN 08997667. doi: 10.1162/neco.2008.03-07-486.Abadi, Mart ́ın, Barham, Paul, Chen, Jianmin, Chen, Zhifeng, Davis, Andy, Dean, Jeffrey, Devin,Matthieu, Ghemawat, Sanjay, Irving, Geoffrey, Isard, Michael, Kudlur, Manjunath, Levenberg,Josh, Monga, Rajat, Moore, Sherry, Murray, Derek Gordon, Steiner, Benoit, Tucker, Paul A.,Vasudevan, Vijay, Warden, Pete, Wicke, Martin, Yu, Yuan, and Zhang, Xiaoqiang. Tensorflow: Asystem for large-scale machine learning. CoRR , abs/1605.08695, 2016.Ba, Jimmy Lei, Kiros, Jamie Ryan, and Hinton, Geoffrey E. Layer normalization. CoRR ,abs/1607.06450, 2016.Ball ́e, Johannes, Laparra, Valero, and Simoncelli, Eero P. Density modeling of images using ageneralized normalization transformation. ICLR , 2016.Beck, J. M., Latham, P. E., and Pouget, A. Marginalization in Neural Circuits with DivisiveNormalization. The Journal of neuroscience : the official journal of the Society for Neuroscience ,31(43):15310–9, oct 2011. ISSN 1529-2401. doi: 10.1523/JNEUROSCI.1706-11.2011.Bevilacqua, Marco, Roumy, Aline, Guillemot, Christine, and Morel, Marie-Line Alberi. Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In BMVC ,2012.Bonds, A. B. Role of Inhibition in the Specification of Orientation Selectivity of Cells in the CatStriate Cortex. Visual Neuroscience , 2(01):41–55, 1989.Busse, L., Wade, A. R., and Carandini, M. Representation of Concurrent Stimuli by PopulationActivity in Visual Cortex. Neuron , 64(6):931–942, dec 2009. ISSN 0896-6273. doi: 10.1016/j.neuron.2009.11.004.Carandini, M. and Heeger, D. J. Normalization as a canonical neural computation. Nature reviews.Neuroscience , 13(1):51–62, nov 2012. ISSN 1471-0048. doi: 10.1038/nrn3136.Coen-Cagli, R., Kohn, A., and Schwartz, O. Flexible gating of contextual influences in natural vision.Nature Neuroscience , 18(11):1648–1655, 2015. ISSN 1097-6256. doi: 10.1038/nn.4128.Cogswell, Michael, Ahmed, Faruk, Girshick, Ross, Zitnick, Larry, and Batra, Dhruv. Reducingoverfitting in deep networks by decorrelating representations. ICLR , 2015.Cooijmans, Tim, Ballas, Nicolas, Laurent, C ́esar, and Courville, Aaron. Recurrent batch normaliza-tion. CoRR , abs/1603.09025, 2016.Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, and Fei-Fei, Li. Imagenet: A large-scalehierarchical image database. In CVPR , 2009.Dong, Chao, Loy, Chen Change, He, Kaiming, and Tang, Xiaoou. 
Image super-resolution using deepconvolutional networks. TPAMI , 38(2):295–307, 2016.Froudarakis, Emmanouil, Berens, Philipp, Ecker, Alexander S, Cotton, R James, Sinz, Fabian H,Yatsenko, Dimitri, Saggau, Peter, Bethge, Matthias, and Tolias, Andreas S. Population code inmouse V1 facilitates readout of natural scenes through increased sparseness. Nature neuroscience ,17(6):851–7, apr 2014. ISSN 1546-1726. doi: 10.1038/nn.3707.Gatys, Leon A., Ecker, Alexander S., and Bethge, Matthias. Image style transfer using convolutionalneural networks. In CVPR , 2016.Glorot, Xavier, Bordes, Antoine, and Bengio, Yoshua. Deep sparse rectifier neural networks. InAISTATS , 2011.Goodfellow, Ian, Bengio, Yoshua, and Courville, Aaron. Deep learning. Book in preparation for MITPress, 2016.12Published as a conference paper at ICLR 2017He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for imagerecognition. In CVPR , 2016.Heeger, D. J. Normalization of cell responses in cat striate cortex. Vis Neurosci , 9(2):181–197, 1992.ISSN 09525238.Higgins, I., Matthey, L., Glorot, X., Pal, A., Uria, B., Blundell, C., Mohamed, S., and Lerchner, A.Early Visual Concept Learning with Unsupervised Deep Learning. CoRR , abs/1606.05579, 2016.Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training byreducing internal covariate shift. In ICML , 2015.Jarrett, K., Kavukcuoglu, K., Ranzato, M. A., and LeCun, Y . What is the best multi-stage architecturefor object recognition? ICCV , 2009.Kavukcuoglu, K., Ranzato, M.’A., Fergus, R., and LeCun, Y . Learning invariant features throughtopographic filter maps. In CVPR Workshops , 2009.Krizhevsky, A., Sutskever, I., and Hinton, G. E. ImageNet Classification with Deep ConvolutionalNeural Networks. NIPS , 2012.Laurent, C ́esar, Pereyra, Gabriel, Brakel, Phil ́emon, Zhang, Ying, and Bengio, Yoshua. Batchnormalized recurrent neural networks. arXiv preprint arXiv:1510.01378 , 2015.Le, Quoc V . Building high-level features using large scale unsupervised learning. In 2013 IEEEinternational conference on acoustics, speech and signal processing , pp. 8595–8598. IEEE, 2013.Liao, Q. and Poggio, T. Bridging the Gaps Between Residual Learning, Recurrent Neural Networksand Visual Cortex. CoRR , abs/1604.03640, 2016.Liao, Qianli, Kawaguchi, Kenji, and Poggio, Tomaso. Streaming Normalization: Towards Simplerand More Biologically-plausible Normalizations for Online and Recurrent Learning. CoRR ,abs/1610.06160, 2016a.Liao, Renjie, Schwing, Alexander, Zemel, Richard, and Urtasun, Raquel. Learning deep parsimoniousrepresentations. NIPS , 2016b.Lyu, Siwei and Simoncelli, Eero P. Reducing statistical dependencies in natural signals using radialGaussianization. NIPS , 2008.Malo, J., Epifanio, I., Navarro, R., and Simoncelli, E. P. Nonlinear image representation for efficientperceptual coding. TIP, 15(1):68–80, 2006.Martin, David, Fowlkes, Charless, Tal, Doron, and Malik, Jitendra. A database of human segmentednatural images and its application to evaluating segmentation algorithms and measuring ecologicalstatistics. In ICCV , 2001.Olsen, S. R, Bhandawat, V ., and Wilson, R. I. Divisive Normalization in Olfactory Population Codes.Neuron , 66(2):287–299, 2010. ISSN 10974199. doi: 10.1016/j.neuron.2010.04.009.Pinto, N., Cox, D. D., and DiCarlo, J. J. Why is Real-World Visual Object Recognition Hard? PLoSComput Biol , 4(1):e27, jan 2008. doi: 10.1371/journal.pcbi.0040027.Reynolds, J. H. and Heeger, D. J. The normalization model of attention. 
Neuron , 61(2):168–85, jan2009. ISSN 1097-4199. doi: 10.1016/j.neuron.2009.01.002.Ringach, D. L. Population coding under normalization. Vision Research , 50(22):2223–2232, 2009.ISSN 18785646. doi: 10.1016/j.visres.2009.12.007.Salimans, Tim and Kingma, Diederik P. Weight normalization: A simple reparameterization toaccelerate training of deep neural networks. In NIPS , 2016.Scardapane, S., Comminiello, D., Hussain, A., and Uncin, A. Group sparse regularization for deepneural networks. CoRR , abs/1607.00485, 2016.13Published as a conference paper at ICLR 2017Schwartz, O. and Simoncelli, E. P. Natural signal statistics and sensory gain control. Nat Neurosci , 4(8):819–825, 2001. ISSN 1097-6256. doi: 10.1038/90526.Schwartz, O., J., Sejnowski T., and P., Dayan. Perceptual organization in the tilt illusion. Journal ofVision , 9(4):1–20, apr 2009. ISSN 1534-7362.Sermanet, P., Chintala, S., and LeCun, Y . Convolutional neural networks applied to house numbersdigit classification. Proceedings of International Conference on Pattern Recognition ICPR12 ,(Icpr):10–13, 2012. ISSN 1051-4651. doi: 10.0/Linux-x86 64.Simoncelli, E. P. and Heeger, D. J. A model of neuronal responses in visual area MT. Vision Research ,38(5):743–761, 1998.Sinz, Fabian and Bethge, Matthias. Temporal Adaptation Enhances Efficient Contrast Gain Controlon Natural Images. PLoS Computational Biology , 9(1):e1002889, jan 2013. ISSN 1553734X.Sinz, Fabian H and Bethge, Matthias. The Conjoint Effect of Divisive Normalization and OrientationSelectivity on Redundancy Reduction. In NIPS , 2008.Srivastava, Nitish, Hinton, Geoffrey E, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan.Dropout: a simple way to prevent neural networks from overfitting. JMLR , 15(1):1929–1958,2014.Timofte, Radu, De Smet, Vincent, and Van Gool, Luc. Anchored neighborhood regression for fastexample-based super-resolution. In ICCV , 2013.Ulyanov, Dmitry, Vedaldi, Andrea, and Lempitsky, Victor S. Instance normalization: The missingingredient for fast stylization. CoRR , abs/1607.08022, 2016.Wang, Zhou, Bovik, Alan C, Sheikh, Hamid R, and Simoncelli, Eero P. Image quality assessment:from error visibility to structural similarity. TIP, 13(4):600–612, 2004.Zaremba, Wojciech, Sutskever, Ilya, and Vinyals, Oriol. Recurrent neural network regularization.CoRR , abs/1409.2329, 2014.Zeyde, Roman, Elad, Michael, and Protter, Matan. On single image scale-up using sparse-representations. In International conference on curves and surfaces , pp. 711–730. Springer,2010.14Published as a conference paper at ICLR 2017A E FFECT OF SIGMA AND L1ONCIFAR-10/100 VALIDATION SETWe plot the effect of and L1 regularization on the validation performance in Figure 6. While sigmamakes the most contributions to the improvement, L1 also provides much gain for the original versionof LN and BN.101100Sigma0.680.700.720.740.760.780.800.82CIFAR-10BaselineBNBN_sLNLN_sDN(a)101100Sigma0.380.400.420.440.460.480.50CIFAR-100BaselineBNBN_sLNLN_sDN (b)104103102L10.700.720.740.760.780.800.82CIFAR-10Baseline +L1BN +L1BN*LN +L1LN*DN* (c)104103102L10.400.420.440.460.480.50CIFAR-100Baseline +L1BN +L1BN*LN +L1LN*DN* (d)Figure 6: Validation accuracy on CIFAR-10/100 showing effect of sigma constant (a, b) and L1 regularization(c, d) on BN, LN, and DNB LSTM I MPLEMENTATION DETAILSIn LSTM experiments, we found that have an individual normalizer for each non-linearity (sigmoidand tanh) helps the performance for both LN and DN. Eq. 
12-14 are the standard LSTM equations,and letNbe the normalizer function, our new normalizer is replacing the nonlinearity with Eq. 15-16.This modification can also be thought as combining normalization and activation as a single activationfunction.This is different from the released implementation of LN and BN in LSTM, which separatelynormalized the concatenated vector Whht1andWxxt. For all LN* and DN experiments we choosethis new formulation, whereas LN experiments are consistent with the released version.0B@ftitotgt1CA=Whht1+Wxxt+b (12)ct=(ft)ct1+(it)tanh( gt) (13)ht=(ot)tanh( ct) (14)(x) =(N(x)) (15)tanh(x) = tanh( N(x)) (16)C M ORE RESULTS ON IMAGE SUPER -RESOLUTIONWe include results on another standard dataset Set5 Bevilacqua et al. (2012) in Table 8 and showmore visual results in Fig. 7.15Published as a conference paper at ICLR 2017Table 8: Average test results of PSNR and SSIM on Set5 Dataset.Model PSNR (x3) SSIM (x3) PSNR (x4) SSIM (x4)Bicubic 30.41 0.8678 28.44 0.8097A+ 32.59 0.9088 30.28 0.8603SRCNN 32.83 0.9087 30.52 0.8621BN 22.85 0.8027 20.71 0.7623DN* 32.83 0.9106 30.62 0.8665PSNR 21.69dB PSNR 22.62dB PSNR 20.06dB PSNR 22.69dBPSNR 31.55dB(a) BicubicPSNR 32.29dB(b) SRCNNPSNR 19.39dB(c) BNPSNR 32.31dB(d) DN*Figure 7: Comparisons at a magnification factor of 4.16
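To make the divisive normalization recipe in the OCR'd paper above more concrete, here is a minimal NumPy sketch of the local normalizer for a hidden-state vector, following my reading of Eqs. (6)-(7), together with the sparse penalty on the centered activations from Eq. (5). It is an illustration, not the authors' implementation: the function names are mine, and clipping the window at the vector boundaries is an assumption the text does not spell out.

import numpy as np

def divisive_normalize(z, radius=2, sigma=1.0):
    # Smoothed divisive normalization over a local 1-D window of hidden units,
    # following my reading of Eqs. (6)-(7): center by the window mean, then
    # divide by sqrt(sigma^2 + mean of squared centered responses in the window).
    #   z      : (batch, units) pre-activations
    #   radius : neighborhood radius R
    #   sigma  : smoothing constant in the denominator
    _, L = z.shape
    v = np.empty_like(z, dtype=float)
    for j in range(L):
        lo, hi = max(0, j - radius), min(L, j + radius + 1)   # window clipped at the edges (assumption)
        v[:, j] = z[:, j] - z[:, lo:hi].mean(axis=1)          # Eq. (6): subtract the summation-field mean
    z_hat = np.empty_like(v)
    for j in range(L):
        lo, hi = max(0, j - radius), min(L, j + radius + 1)
        z_hat[:, j] = v[:, j] / np.sqrt(sigma ** 2 + (v[:, lo:hi] ** 2).mean(axis=1))  # Eq. (7)
    return z_hat, v

def l1_activity_penalty(v, lam=1e-3):
    # Sparse (L1) regularizer on the centered pre-normalization activations,
    # cf. Eq. (5): lam times the mean absolute centered response.
    return lam * np.abs(v).mean()

In use, the normalized output would feed the ReLU or recurrent nonlinearity (DN followed by ReLU, as in the paper), and the penalty would be added to the task loss, e.g. z_hat, v = divisive_normalize(pre); loss = task_loss + l1_activity_penalty(v).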
ryxy7d84e
ByZvfijeg
ICLR.cc/2017/conference/-/paper568/official/review
{"title": "Interesting idea, but not ready yet", "rating": "4: Ok but not good enough - rejection", "review": "The authors of the paper explore the idea of incorporating skip connections *over time* for RNNs. Even though the basic idea is not particularly innovative, a few proposals on how to merge that information into the current hidden state with different pooling functions are evaluated. The different models are compared on two popular text benchmarks.\n\nSome points.\n\n1) The experiments feature only NLP and only prediction tasks. It would have been nice to see the models in other domains, i.e. modelling a conditional distribution p(y|x), not only p(x). Further, sensory input data such as audio or video would have given further insight.\n\n2) As pointed out by other reviewers, it does not feel as if the comparisons to other models are fair. SOTA on NLP changes quickly and it is hard to place the experiments in the complete picture.\n\n3) It is claimed that this helps long-term prediction. I think the paper lacks a corresponding analysis, as pointed out in an earlier question of mine.\n\n4) It is claimed that LSTM trains slow and is hard to scale. For one does this not match my personal experience. Then, the prevalence of LSTM systems in production systems (e.g. Google, Baidu, Microsoft, \u2026) clearly speaks against this.\n\n\nI like the basic idea of the paper, but the points above make me think it is not ready for publication.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Higher Order Recurrent Neural Networks
["Rohollah Soltani", "Hui Jiang"]
In this paper, we study novel neural network structures to better model long term dependency in sequential data. We propose to use more memory units to keep track of more preceding states in recurrent neural networks (RNNs), which are all recurrently fed to the hidden layers as feedback through different weighted paths. By extending the popular recurrent structure in RNNs, we provide the models with better short-term memory mechanism to learn long term dependency in sequences. Analogous to digital filters in signal processing, we call these structures as higher order RNNs (HORNNs). Similar to RNNs, HORNNs can also be learned using the back-propagation through time method. HORNNs are generally applicable to a variety of sequence modelling tasks. In this work, we have examined HORNNs for the language modeling task using two popular data sets, namely the Penn Treebank (PTB) and English text8. Experimental results have shown that the proposed HORNNs yield the state-of-the-art performance on both data sets, significantly outperforming the regular RNNs as well as the popular LSTMs.
["Deep learning", "Natural language processing"]
https://openreview.net/forum?id=ByZvfijeg
https://openreview.net/pdf?id=ByZvfijeg
https://openreview.net/forum?id=ByZvfijeg&noteId=ryxy7d84e
Under review as a conference paper at ICLR 2017HIGHER ORDER RECURRENT NEURAL NETWORKSRohollah Soltani & Hui JiangDepartment of Computer Science and EngineeringYork UniversityToronto, CAfrsoltani,hjg@cse.yorku.caABSTRACTIn this paper, we study novel neural network structures to better model long termdependency in sequential data. We propose to use more memory units to keeptrack of more preceding states in recurrent neural networks (RNNs), which are allrecurrently fed to the hidden layers as feedback through different weighted paths.By extending the popular recurrent structure in RNNs, we provide the models withbetter short-term memory mechanism to learn long term dependency in sequences.Analogous to digital filters in signal processing, we call these structures as higherorder RNNs (HORNNs). Similar to RNNs, HORNNs can also be learned usingthe back-propagation through time method. HORNNs are generally applicable toa variety of sequence modelling tasks. In this work, we have examined HORNNsfor the language modeling task using two popular data sets, namely the Penn Tree-bank (PTB) and English text8. Experimental results have shown that the proposedHORNNs yield the state-of-the-art performance on both data sets, significantlyoutperforming the regular RNNs as well as the popular LSTMs.1 I NTRODUCTIONIn the recent resurgence of neural networks in deep learning, deep neural networks have achievedsuccesses in various real-world applications, such as speech recognition, computer vision and naturallanguage processing. Deep neural networks (DNNs) with a deep architecture of multiple nonlinearlayers are an expressive model that can learn complex features and patterns in data. Each layer ofDNNs learns a representation and transfers them to the next layer and the next layer may continueto extract more complicated features, and finally the last layer generates the desirable output. Fromearly theoretical work, it is well known that neural networks may be used as the universal approx-imators to map from any fixed-size input to another fixed-size output. Recently, more and moreempirical results have demonstrated that the deep structure in DNNs is not just powerful in theorybut also can be reliably learned in practice from a large amount of training data.Sequential modeling is a challenging problem in machine learning, which has been extensively stud-ied in the past. Recently, many deep neural network based models have been successful in this area,as shown in various tasks such as language modeling Mikolov (2012), sequence generation Graves(2013); Sutskever et al. (2011), machine translation Sutskever et al. (2014) and speech recognitionGraves et al. (2013). Among various neural network models, recurrent neural networks (RNNs) areappealing for modeling sequential data because they can capture long term dependency in sequentialdata using a simple mechanism of recurrent feedback. RNNs can learn to model sequential data overan extended period of time, then carry out rather complicated transformations on the sequential data.RNNs have been theoretically proved to be a turing complete machine Siegelmann & Sontag (1995).RNNs in principle can learn to map from one variable-length sequence to another. When unfoldedin time, RNNs are equivalent to very deep neural networks that share model parameters and receivethe input at each time step. The recursion in the hidden layer of RNNs can act as an excellent mem-ory mechanism for the networks. 
In each time step, the learned recursion weights may decide whatinformation to discard and what information to keep in order to relay onwards along time. WhileRNNs are theoretically powerful, the learning of RNNs needs to use the back-propagation throughtime (BPTT) method Werbos (1990) due to the internal recurrent cycles. Unfortunately, in practice,it turns out to be rather difficult to train RNNs to capture long-term dependency due to the fact that1Under review as a conference paper at ICLR 2017the gradients in BPTT tend to either vanish or explode Bengio et al. (1994). Many heuristic meth-ods have been proposed to solve these problems. For example, a simple method, called gradientclipping , is used to avoid gradient explosion Mikolov (2012). However, RNNs still suffer from thevanishing gradient problem since the gradients decay gradually as they are back-propagated throughtime. As a result, some new recurrent structures are proposed, such as long short-term memory(LSTM) Hochreiter & Schmidhuber (1997) and gated recurrent unit (GRU) Cho et al. (2014). Thesemodels use some learnable gates to implement rather complicated feedback structures, which en-sure that some feedback paths can allow the gradients to flow back in time effectively. These modelshave given promising results in many practical applications, such as sequence modeling Graves(2013), language modeling Sundermeyer et al. (2012), hand-written character recognition Liwickiet al. (2012), machine translation Cho et al. (2014), speech recognition Graves et al. (2013).In this paper, we explore an alternative method to learn recurrent neural networks (RNNs) to modellong term dependency in sequential data. We propose to use more memory units to keep track ofmore preceding RNN states, which are all recurrently fed to the hidden layers as feedback throughdifferent weighted paths. Analogous to digital filters in signal processing, we call these new re-current structures as higher order recurrent neural networks (HORNNs). At each time step, theproposed HORNNs directly combine multiple preceding hidden states from various history timesteps, weighted by different matrices, to generate the feedback signal to each hidden layer. By ag-gregating more history information of the RNN states, HORNNs are provided with better short-termmemory mechanism than the regular RNNs. Moreover, those direct connections to more previousRNN states allow the gradients to flow back smoothly in the BPTT learning stage. All of theseensure that HORNNs can be more effectively learned to capture long term dependency. Similar toRNNs and LSTMs, the proposed HORNNs are general enough for variety of sequential modelingtasks. In this work, we have evaluated HORNNs for the language modeling task on two popular datasets, namely the Penn Treebank (PTB) and English text8 sets. Experimental results have shown thatHORNNs yield the state-of-the-art performance on both data sets, significantly outperforming theregular RNNs as well as the popular LSTMs.2 R ELATED WORKHierarchical recurrent neural network proposed in Hihi & Bengio (1996) is one of the earliest papersthat attempt to improve RNNs to capture long term dependency in a better way. It proposes to addlinear time delayed connections to RNNs to improve the gradient descent learning algorithm to finda better solution, eventually solving the gradient vanishing problem. However, in this early work,the idea of multi-resolution recurrent architectures has only been preliminarily examined for somesimple small-scale tasks. 
This work is somehow relevant to our work in this paper but the higherorder RNNs proposed here differs in several aspects. Firstly, we propose to use weighted connectionsin the structure, instead of simple multi-resolution short-cut paths. This makes our models fall intothe category of higher order models. Secondly, we have proposed to use various pooling functionsin generating the feedback signals, which is critical in normalizing the dynamic ranges of gradientsflowing from various paths. Our experiments have shown that the success of our models is largelyattributed to this technique.The most successful approach to deal with vanishing gradients so far is to use long short termmemory (LSTM) model Hochreiter & Schmidhuber (1997). LSTM relies on a fairly sophisticatedstructure made of gates to control flow of information to the hidden neurons. The drawback of theLSTM is that it is complicated and slow to learn. The complexity of this model makes the learningvery time consuming, and hard to scale for larger tasks. Another approach to address this issue isto add a hidden layer to RNNs Mikolov et al. (2014). This layer is responsible for capturing longerterm dependencies in input data by making its weight matrix close to identity. Recently, clock-work RNNs Koutnik et al. (2014) are proposed to address this problem as well, which splits eachhidden layer into several modules running at different clocks. Each module receives signals frominput and computes its output at a predefined clock rate. Gated feedback recurrent neural networksChung et al. (2015) attempt to implement a generalized version using the gated feedback connectionbetween layers of stacked RNNs, allowing the model to adaptively adjust the connection betweenconsecutive hidden layers.2Under review as a conference paper at ICLR 2017Besides, short-cut skipping connections were considered earlier in Wermter (1992), and more re-cently have been found useful in learning very deep feed-forward neural networks as well, such asLee et al. (2014); He et al. (2015). These skipping connections between various layers of neuralnetworks can improve the flow of information in both forward and backward passes. Among them,highway networks Srivastava et al. (2015) introduce rather sophisticated skipping connections be-tween layers, controlled by some gated functions.3 H IGHER ORDER RECURRENT NEURAL NETWORKSA recurrent neural network (RNN) is a type of neural network suitable for modeling a sequence ofarbitrary length. At each time step t, an RNN receives an input xt, the state of the RNN is updatedrecursively as follows (as shown in the left part of Figure 1):ht=f(Winxt+Whht1) (1)wheref()is an nonlinear activation function, such as sigmoid or rectified linear (ReLU), and Winis the weight matrix in the input layer and Whis the state to state recurrent weight matrix. Due tothe recursion, this hidden layer may act as a short-term memory of all previous input data.Given the state of the RNN, i.e., the current activation signals in the hidden layer ht, the RNNgenerates the output according to the following equation:yt=g(Woutht) (2)whereg()denotes the softmax function and Woutis the weight matrix in the output layer. In prin-ciple, this model can be trained using the back-propagation through time (BPTT) algorithm Wer-bos (1990). This model has been used widely in sequence modeling tasks like language modelingMikolov (2012).Figure 1: Comparison of model structures between an RNN (1st order) and a higher order RNN (3rdorder). 
The symbol z1denotes a time-delay unit (equivalent to a memory unit).3.1 H IGHER ORDER RNN S(HORNN S)RNNs are very deep in time and the hidden layer at each time step represents the entire input history,which acts as a short-term memory mechanism. However, due to the gradient vanishing problem inback-propagation, it turns out to be very difficult to learn RNNs to model long-term dependency insequential data.In this paper, we extend the standard RNN structure to better model long-term dependency in se-quential data. As shown in the right part of Figure 1, instead of using only the previous RNN state asthe feedback signal, we propose to employ multiple memory units to generate the feedback signal ateach time step by directly combining multiple preceding RNN states in the past, where these time-delayed RNN states go through separate feedback paths with different weight matrices. Analogousto the filter structures used in signal processing, we call this new recurrent structure as higher orderRNNs , HORNNs in short. The order of HORNNs depends on the number of memory units used forfeedback. For example, the model used in the right of Figure 1 is a 3rd-order HORNN. On the otherhand, regular RNNs may be viewed as 1st-order HORNNs.3Under review as a conference paper at ICLR 2017In HORNNs, the feedback signal is generated by combining multiple preceding RNN states. There-fore, the state of an N-th order HORNN is recursively updated as follows:ht=f Winxt+NXn=1Whnhtn!(3)wherefWhnjn= 1;Ngdenotes the weight matrices used for various feedback paths. Similar toFigure 2: Unfolding a 3rd-order HORNN Figure 3: Illustration of all back-propagationpaths in BPTT for a 3rd-order HORNN.RNNs, HORNNs can also be unfolded in time to get rid of the recurrent cycles. As shown in Figure2, we unfold a 3rd-order HORNN in time, which clearly shows that each HORNN state is explicitlydecided by the current input xtand all previous 3 states in the past. This structure looks similar tothe skipping short-cut paths in deep neural networks but each path in HORNNs maintains a learnableweight matrix. The new structure in HORNNs can significantly improve the model capacity to cap-ture long-term dependency in sequential data. At each time step, by explicitly aggregating multiplepreceding hidden activities, HORNNs may derive a good representation of the history informationin sequences, leading to a significantly enhanced short-term memory mechanism.During the backprop learning procedure, these skipping paths directly connected to more previoushidden states of HORNNs may allow the gradients to flow more easily back in time, which even-tually leads to a more effective learning of models to capture long term dependency in sequences.Therefore, this structure may help to largely alleviate the notorious problem of vanishing gradientsin the RNN learning.Obviously, HORNNs can be learned using the same BPTT algorithm as regular RNNs, except thatthe error signals at each time step need to be back-propagated to multiple feedback paths in thenetwork. As shown in Figure 3, for a 3rd-order HORNN, at each time step t, the error signal fromthe hidden layer htwill have to be back-propagated into four different paths: i) the first one back tothe input layer, xt; ii) three more feedback paths leading to three different histories in time scales,namely ht1,ht2andht3.Interestingly enough, if we use a fully-unfolded implementation for HORNNs as in Figure 2, theoverall computation complexity is comparable with regular RNNs. 
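The typeset recurrences above do not survive text extraction well, so as a reference: a regular RNN computes h_t = f(W_in x_t + W_h h_{t-1}) (Eq. 1) with output y_t = g(W_out h_t) (Eq. 2), and an N-th order HORNN replaces the single feedback term with a weighted sum over the N preceding states, h_t = f(W_in x_t + \sum_{n=1}^{N} W_{h,n} h_{t-n}) (Eq. 3). The NumPy sketch below illustrates this forward pass; it is our own minimal rendering, not the authors' code, and the function names are hypothetical.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def hornn_step(x_t, prev_states, W_in, W_h_list):
    """One step of an N-th order HORNN (Eq. 3).

    prev_states: list [h_{t-1}, ..., h_{t-N}] of the N most recent hidden states;
    W_h_list:    one recurrent weight matrix per feedback path.
    With N = 1 this reduces to the regular RNN update of Eq. 1.
    """
    pre = W_in @ x_t
    for W_hn, h_prev in zip(W_h_list, prev_states):
        pre += W_hn @ h_prev
    return np.tanh(pre)          # f(.) could equally be a sigmoid or ReLU

def hornn_forward(xs, W_in, W_h_list, W_out, h_dim):
    """Unfolded forward pass over a whole sequence; outputs follow Eq. 2."""
    N = len(W_h_list)
    states = [np.zeros(h_dim) for _ in range(N)]   # zero-padded history
    outputs = []
    for x_t in xs:
        h_t = hornn_step(x_t, states, W_in, W_h_list)
        states = [h_t] + states[:-1]               # shift the z^-1 delay line
        outputs.append(softmax(W_out @ h_t))
    return outputs

# Toy usage: a 3rd-order HORNN with hidden size 8 on a random length-5 sequence.
rng = np.random.default_rng(0)
H, D, N = 8, 4, 3
W_in, W_out = rng.normal(size=(H, D)), rng.normal(size=(16, H))
W_h_list = [rng.normal(size=(H, H)) for _ in range(N)]
ys = hornn_forward([rng.normal(size=D) for _ in range(5)], W_in, W_h_list, W_out, H)
```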
Given a whole sequence, we mayfirst simultaneously compute all hidden activities (from xttohtfor allt). Secondly, we recursivelyupdate htfor alltusing eq.(3). Finally, we use GPUs to compute all outputs together from theupdated hidden states (from httoytfor allt) based on eq.(2). The backward pass in learningcan also be implemented in the same three-step procedure. Except the recursive updates in thesecond step (this issue remains the same in regular RNNs), all remaining computation steps canbe formulated as large matrix multiplications. As a result, the computation of HORNNs can beimplemented fairly efficiently using GPUs.3.2 P OOLING FUNCTIONS FOR HORNN SAs discussed above, the shortcut paths in HORNNs may help the models to capture long-term de-pendency in sequential data. On the other hand, they may also complicate the learning in a differentway. Due to different numbers of hidden layers along various paths, the signals flowing from differ-ent paths may vary dramatically in the dynamic range. For example, in the forward pass in Figure2, three different feedback signals from different time scales, e.g. ht1,ht2andht3, flow into4Under review as a conference paper at ICLR 2017the hidden layer to compute the new hidden state ht. The dynamic range of these signals may varydramatically from case to case. The situation may get even worse in the backward pass during theBPTT learning. For example, in a 3rd-order HORNN in Figure 2, the node ht3may directly re-ceive an error signal from the node ht. In some cases, it may get so strong as to overshadow othererror signals coming from closer neighbours of ht1andht2. This may impede the learning ofHORNNs, yielding slow convergence or even poor performance.Here, we have proposed to use some pooling functions to calibrate the signals from different feed-back paths before they are used to recursively generate a new hidden state, as shown in Figure 4.In the following, we will investigate three different choices for the pooling function in Figure 4,including max-based pooling, FOFE-based pooling and gated pooling.3.2.1 M AX-BASED POOLINGMax-based pooling is a simple strategy that chooses the most responsive unit (exhibiting the largestactivation value) among various paths to transfer to the hidden layer to generate the new hiddenstate. Many biological experiments have shown that biological neuron networks tend to use a similarstrategy in learning and firing.In this case, instead of using eq.(3), we use the following formula to update the hidden state ofHORNNs:ht=fWinxt+ maxNn=1(Whnhtn)(4)where maximization is performed element-wisely to choose the maximum value in each dimensionto feed to the hidden layer to generate the new hidden state. The aim here is to capture the mostrelevant feature and map it to a fixed predefined size.The max pooling function is simple and biologically inspired. However, the max pooling strategyalso has some serious disadvantages. For example, it has no forgetting mechanism and the signalsmay get stronger and stronger. Furthermore, it loses the order information of the preceding historiessince it only choose the maximum values but it does not know where the maximum comes from.Figure 4: A pooling function is used to calibratevarious feedback paths in HORNNs.Figure 5: Gated HORNNs use learnable gates tocombine various feedback signals.3.2.2 FOFE- BASED POOLINGThe fixed-size ordinally-forgetting encoding (FOFE) method was proposed in Zhang et al. (2015)to encode any variable-length sequence of data into a fixed-size representation. 
In FOFE, a singleforgetting factor (0< < 1) is used to encode the position information in sequences basedon the idea of exponential forgetting to derive invertible fixed-size representations. In this work,we borrow this simple idea of exponential forgetting to calibrate all preceding histories using apre-selected forgetting factor as follows:ht=f Winxt+NXn=1nWhnhtn!(5)where the forgetting factor is manually pre-selected between 0< < 1. The above constantcoefficients related to play an important role in calibrating signals from different paths in both5Under review as a conference paper at ICLR 2017forward and backward passes of HORNNs since they slightly underweight the older history over therecent one in an explicit way.3.2.3 G ATED HORNN SIn this section, we follow the ideas of the learnable gates in LSTMs Hochreiter & Schmidhuber(1997) and GRUs Cho et al. (2014) as well as the recent soft-attention in Bahdanau et al. (2014).Instead of using constant coefficients derived from a forgetting factor, we may let the network auto-matically determine the combination weights based on the current state and input. In this case, wemay use sigmoid gates to compute combination weights to regulate the information flowing fromvarious feedback paths. The sigmoid gates take the current data and previous hidden state as inputto decide how to weight all of the precede hidden states. The gate function weights how the currenthidden state is generated based on all the previous time-steps of the hidden layer. This allows thenetwork to potentially remember information for a longer period of time. In a gated HORNN, thehidden state is recursively computed as follows:ht=f Winxt+NXn=1rnWhnhtn!(6)wheredenotes element-wise multiplication of two equally-sized vectors, and the gate signal rniscalculated asrn=(Wg1nxt+Wg2nhtn) (7)where()is the sigmoid function, and Wg1nandWg2ndenote two weight matrices introduced foreach gate.Note that the computation complexity of gated HORNNs is comparable with LSTMs and GRUs,significantly exceeding the other HORNN structures because of the overhead from the gate functionsin eq. (7).4 E XPERIMENTSIn this section, we evaluate the proposed higher order RNNs (HORNNs) on several language model-ing tasks. A statistical language model (LM) is a probability distribution over sequences of words innatural languages. Recently, neural networks have been successfully applied to language modelingBengio et al. (2003); Mikolov et al. (2011), yielding the state-of-the-art performance. In languagemodeling tasks, it is quite important to take advantage of the long-term dependency of natural lan-guages. Therefore, it is widely reported that RNN based LMs can outperform feedforward neuralnetworks in language modeling tasks. We have chosen two popular LM data sets, namely the PennTreebank (PTB) and English text8 sets, to compare our proposed HORNNs with traditional n-gramLMs, RNN-based LMs and the state-of-the-art performance obtained by LSTMs Graves (2013);Mikolov et al. (2014), FOFE based feedforward NNs Zhang et al. (2015) and memory networksSukhbaatar et al. (2015).In our experiments, we use the mini-batch stochastic gradient decent (SGD) algorithm to train allneural networks. The number of back-propagation through time (BPTT) steps is set to 30 for allrecurrent models. Each model update is conducted using a mini-batch of 20 subsequences, eachof which is of 30 in length. 
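The three pooling variants of Section 3.2 also come through garbled: max pooling takes an element-wise maximum over the weighted delayed states (Eq. 4); FOFE pooling scales the n-th feedback path by \lambda^n with a pre-selected forgetting factor 0 < \lambda < 1 (Eq. 5, with \lambda = 0.6 in the experiments); and the gated variant modulates each path element-wise by a sigmoid gate r_n = \sigma(W_{g1,n} x_t + W_{g2,n} h_{t-n}) (Eq. 6–7). Below is a minimal sketch of the three update rules, again our own illustration with hypothetical helper names rather than the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feedback_max(x_t, prevs, W_in, W_h):
    # Eq. 4: element-wise max over the weighted preceding states.
    paths = np.stack([W_hn @ h for W_hn, h in zip(W_h, prevs)])
    return np.tanh(W_in @ x_t + paths.max(axis=0))

def feedback_fofe(x_t, prevs, W_in, W_h, lam=0.6):
    # Eq. 5: exponential forgetting; path n (state h_{t-n}) is scaled by lam**n.
    pre = W_in @ x_t
    for n, (W_hn, h) in enumerate(zip(W_h, prevs), start=1):
        pre += (lam ** n) * (W_hn @ h)
    return np.tanh(pre)

def feedback_gated(x_t, prevs, W_in, W_h, W_g1, W_g2):
    # Eq. 6-7: each path is modulated element-wise by a learned sigmoid gate.
    pre = W_in @ x_t
    for W_hn, h, Wg1, Wg2 in zip(W_h, prevs, W_g1, W_g2):
        r_n = sigmoid(Wg1 @ x_t + Wg2 @ h)
        pre += r_n * (W_hn @ h)
    return np.tanh(pre)
```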
All model parameters (weight matrices in all layers) are randomlyinitialized based on a Gaussian distribution with zero mean and standard deviation of 0.1. A hardclipping is set to 5.0 to avoid gradient explosion during the BPTT learning. The initial learning rateis set to 0.5 and we halve the learning rate at the end of each epoch if the cross entropy functionon the validation set does not decrease. We have used the weight decay, momentum and columnnormalization Pachitariu & Sahani (2013) in our experiments to improve model generalization. Inthe FOFE-based pooling function for HORNNs, we set the forgetting factor, , to 0.6. We haveused 400 nodes in each hidden layer for the PTB data set and 500 nodes per hidden layer for theEnglish text8 set. In our experiments, we do not use the dropout regularization Zaremba et al. (2014)in all experiments since it significantly slows down the training speed, not applicable to any largercorpora.11We will soon release the code for readers to reproduce all results reported in this paper.6Under review as a conference paper at ICLR 2017Table 1: Perplexities on the PTB test set for various HORNNs are shown as a function of order (2,3, 4). Note the perplexity of a regular RNN (1st order) is 123, as reported in Mikolov et al. (2011).Models 2ndorder 3rdorder 4thorderHORNN 111 108 109Max HORNN 110 109 108FOFE HORNN 103 101 100Gated HORNN 102 100 1004.1 L ANGUAGE MODELING ON PTBThe standard Penn Treebank (PTB) corpus consists of about 1M words. The vocabulary size islimited to 10k. The preprocessing method and the way to split data into training/validation/testsets are the same as Mikolov et al. (2011). PTB is a relatively small text corpus. We first investigatevarious model configurations for the HORNNs based on PTB and then compare the best performancewith other results reported on this task.4.1.1 E FFECT OF ORDERS IN HORNN SIn the first experiment, we first investigate how the used orders in HORNNs may affect the per-formance of language models (as measured by perplexity). We have examined all different higherorder model structures proposed in this paper, including HORNNs and various pooling functionsin HORNNs. The orders of these examined models varies among 2, 3 and 4. We have listed theperformance of different models on PTB in Table 1. As we may see, we are able to achieve a sig-nificant improvement in perplexity when using higher order RNNs for language models on PTB,roughly 10-20 reduction in PPL over regular RNNs. We can see that performance may improveslightly when the order is increased from 2 to 3 but no significant gain is observed when the orderis further increased to 4. As a result, we choose the 3rd-order HORNN structure for the followingexperiments. Among all different HORNN structures, we can see that FOFE-based pooling andgated structures yield the best performance on PTB.In language modeling, both input and output layers account for the major portion of model parame-ters. Therefore, we do not significantly increase model size when we go to higher order structures.For example, in Table 1, a regular RNN contains about 8.3 millions of weights while a 3rd-orderHORNN (the same for max or FOFE pooling structures) has about 8.6 millions of weights. In com-parison, an LSTM model has about 9.3 millions of weights and a 3rd-order gated HORNN has about9.6 millions of weights.As for the training speed, most HORNN models are only slightly slower than regular RNNs. 
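The optimization recipe described above (truncated BPTT over 30 steps, mini-batches of 20 subsequences of length 30, hard gradient clipping at 5.0, and an initial learning rate of 0.5 halved once validation cross-entropy stops decreasing) can be outlined as follows. This is a schematic sketch under our own assumptions: element-wise clipping is assumed, and the gradients and validation scores are stand-ins, not the authors' training code.

```python
import numpy as np

def clip_gradients(grads, threshold=5.0):
    # Hard clipping at 5.0 to avoid exploding gradients during BPTT
    # (element-wise here; norm-based clipping is another common choice).
    return [np.clip(g, -threshold, threshold) for g in grads]

def update_learning_rate(lr, val_ce, best_ce):
    # Halve the learning rate at the end of an epoch if validation
    # cross-entropy did not decrease; otherwise keep it and record the new best.
    return (lr * 0.5, best_ce) if val_ce >= best_ce else (lr, val_ce)

lr, best_ce = 0.5, float("inf")
rng = np.random.default_rng(0)
for epoch in range(3):                                  # toy number of epochs
    fake_grads = [rng.normal(scale=10.0, size=(4, 4))]  # stand-in for BPTT gradients
    grads = clip_gradients(fake_grads)
    # ... SGD update with momentum and weight decay would be applied here ...
    fake_val_ce = 5.0 - 0.1 * epoch                     # stand-in validation score
    lr, best_ce = update_learning_rate(lr, fake_val_ce, best_ce)
```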
Forexample, one epoch of training on PTB running in one NVIDIA’s TITAN X GPU takes about 80seconds for an RNN, about 120 seconds for a 3rd-order HORNN (the same for max or FOFE poolingstructures). Similarly, training of gated HORNNs is also slightly slower than LSTMs. For example,one epoch on PTB takes about 200 seconds for an LSTM, and about 225 seconds for a 3rd-ordergates HORNN.4.1.2 M ODEL COMPARISON ON PENN TREEBANKAt last, we report the best performance of various HORNNs on the PTB test set in Table 2. We com-pare our 3rd-order HORNNs with all other models reported on this task, including RNN Mikolovet al. (2011), stack RNN Pascanu et al. (2014), deep RNN Pascanu et al. (2014), FOFE-FNN Zhanget al. (2015) and LSTM Graves (2013).2From the results in Table 2, we can see that our proposedhigher order RNN architectures significantly outperform all other baseline models reported on thistask. Both FOFE-based pooling and gated HORNNs have achieved the state-of-the-art performance,2All models in Table 2 do not use the dropout regularization, which is somehow equivalent to data augmen-tation. In Zaremba et al. (2014); Kim et al. (2015), the proposed LSTM-LMs (word level or character level)achieve lower perplexity but they both use the dropout regularization and much bigger models and it takes daysto train the models, which is not applicable to other larger tasks.7Under review as a conference paper at ICLR 2017Table 2: Perplexities on the PTB test set forvarious examined models.Models TestKN 5-gram Mikolov et al. (2011) 141RNN Mikolov et al. (2011) 123CSLM5Aransa et al. (2015) 118.08LSTM Graves (2013) 117genCNN Wang et al. (2015) 116.4Gated word&charMiyamoto & Cho (2016) 113.52E2E Mem Net Sukhbaatar et al. (2015) 111Stack RNN Pascanu et al. (2014) 110Deep RNN Pascanu et al. (2014) 107FOFE-FNN Zhang et al. (2015) 108HORNN ( 3rdorder) 108Max HORNN ( 3rdorder) 109FOFE HORNN ( 3rdorder) 101Gated HORNN ( 3rdorder) 100Table 3: Perplexities on the text8 test set forvarious models.Models TestRNN Mikolov et al. (2014) 184LSTM Mikolov et al. (2014) 156SCRNN Mikolov et al. (2014) 161E2E Mem Net Sukhbaatar et al. (2015) 147HORNN ( 3rdorder) 172Max HORNN ( 3rdorder) 163FOFE HORNN ( 3rdorder) 154Gated HORNN ( 3rdorder) 144i.e., 100 in perplexity on this task. To the best of our knowledge, this is the best reported performanceon PTB under the same training condition.4.2 L ANGUAGE MODELING ON ENGLISH TEXT8In this experiment, we will evaluate our proposed HORNNs on a much larger text corpus, namelythe English text8 data set. The text8 data set contains a preprocessed version of the first 100 millioncharacters downloaded from the Wikipedia website. We have used the same preprocessing methodas Mikolov et al. (2014) to process the data set to generate the training and test sets. We havelimited the vocabulary size to about 44k by replacing all words occurring less than 10 times in thetraining set with an <UNK>token. The text8 set is about 20 times larger than PTB in corpussize. The model training on text8 takes longer to finish. We have not tuned hyperparameters in thisdata set. We simply follow the best setting used in PTB to train all HORNNs for the text8 dataset. Meanwhile, we also follow the same learning schedule used in Mikolov et al. (2014): We firstinitialize the learning rate to 0.5 and run 5 epochs using this learning rate; After that, the learningrate is halved at the end of every epoch.Because the training is time-consuming, we have only evaluated 3rd-order HORNNs on the text8data set. 
The perplexities of various HORNNs are summarized in Table 3. We have compared ourHORNNs with all other baseline models reported on this task, including RNN Mikolov et al. (2014),LSTM Mikolov et al. (2014), SCRNN Mikolov et al. (2014) and end-to-end memory networksSukhbaatar et al. (2015). Results have shown that all HORNN models work pretty well in this dataset except the normal HORNN significantly underperforms the other three models. Among them,the gated HORNN model has achieved the best performance, i.e., 144 in perplexity on this task,which is slightly better than the recent result obtained by end-to-end memory networks (using arather complicated structure). To the best of our knowledge, this is the best performance reportedon this task.5 C ONCLUSIONSIn this paper, we have proposed some new structures for recurrent neural networks, called as higherorder RNNs (HORNNs) . In these structures, we use more memory units to keep track of more pre-ceding RNN states, which are all fed along various feedback paths to the hidden layer to generatethe feedback signals. In this way, we may enhance the model to capture long term dependency insequential data. Moreover, we have proposed to use several types of pooling functions to calibratemultiple feedback paths. Experiments have shown that the pooling technique plays a critical rolein learning higher order RNNs effectively. In this work, we have examined HORNNs for the lan-guage modeling task using two popular data sets, namely the Penn Treebank (PTB) and text8 sets.Experimental results have shown that the proposed higher order RNNs yield the state-of-the-art per-8Under review as a conference paper at ICLR 2017formance on both data sets, significantly outperforming the regular RNNs as well as the popularLSTMs. As the future work, we are going to continue to explore HORNNs for other sequentialmodeling tasks, such as speech recognition, sequence-to-sequence modelling and so on.REFERENCESWalid Aransa, Holger Schwenk, and Lo ̈ıc Barrault. Improving continuous space language modelsusing auxiliary features. In Proceedings of the 12th International Workshop on Spoken LanguageTranslation , pp. 151–158, 2015.D. Bahdanau, K. Cho, and Y . Bengio. Neural machine translation by jointly learning to align andtranslate. In arXiv:1409.0473 , 2014.Y . Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent isdifficult. IEEE Transactions on Neural Networks , 5(2):157–166, 1994.Y . Bengio, R. Ducharme, P. Vincent, and C. Janvin. A neural probabilistic language model. Journalof Machine Learning Research , 3:1137–1155, 2003.K. Cho, B. Van Merri ̈enboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y . Bengio.Learning phrase representations using RNN encoder-decoder for statistical machine translation.InProceedings of EMNLP , 2014.J. Chung, C. Gulcehre, K. Cho, and Y . Bengio. Gated feedback recurrent neural networks. InProceedings of International Conference on Machine Learning (ICML) , 2015.A. Graves. Generating sequences with recurrent neural networks. In arXiv:1308.0850 , 2013.A. Graves, A. Mohamed, and G Hinton. Speech recognition with deep recurrent neural. In Proceed-ings of ICASSP , 2013.K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. InarXiv:1512.03385 , 2015.Salah Hihi and Yoshua Bengio. Hierarchical recurrent neural networks for long-term dependencies.InProceedings of Neural Information Processing Systems (NIPS) , 1996.S. Hochreiter and J. Schmidhuber. Long short-term memory. 
Neural computation , 9(8):1735–1780,1997.Y . Kim, Y . Jernite, D. Sontag, and A. M. Rush. Character-aware neural language models. InarXiv:1508.06615 , 2015.J. Koutnik, K. Greff, F. Gomez, and J. Schmidhuber. A clockwork rnn. In Proceedings of Interna-tional Conference on Machine Learning (ICML) , 2014.C. Y . Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply supervised nets. In arXiv:1409.5185 ,2014.M. Liwicki, A. Graves, and H. Bunke. Neural networks for handwriting recognition, Book Chap-ter, Computational intelligence paradigms in advanced pattern classification. Springer BerlinHeidelberg, 2012.T. Mikolov. Statistical Language Models based on Neural Networks . PhD thesis, Brno Universityof Technology, 2012.T. Mikolov, S. Kombrink, L. Burget, J.H. ˇCernock `y, and S. Khudanpur. Extensions of recurrentneural network language model. In Proceedings ICASSP , pp. 5528–5531, 2011.T. Mikolov, A. Joulin, S. Chopra, M. Mathieu, and M. Ranzato. Learning longer memory in recurrentneural networks. In arXiv 1412.7753 , 2014.Yasumasa Miyamoto and Kyunghyun Cho. Gated word-character recurrent language model. arXivpreprint arXiv:1606.01700 , 2016.9Under review as a conference paper at ICLR 2017M. Pachitariu and M. Sahani. Regularization and nonlinearities for neural language models: whenare they needed? In arXiv:1301.5650 , 2013.R. Pascanu, C. Gulcehre, K. Cho, and Y . Bengio. How to construct deep recurrent neural networks.InProceedings of ICLR , 2014.H. T. Siegelmann and E. D. Sontag. On the computational power of neural nets. Journal of computerand system sciences , 50.(1):132–150, 1995.R. K. Srivastava, K. Greff, and J. Schmidhuber. Highway networks. In Proceedings of NeuralInformation Processing Systems (NIPS) , 2015.S. Sukhbaatar, A. Szlam, J. Weston, and R. Fergus. End-to-end memory networks. In Proceedingsof Neural Information Processing Systems (NIPS) , 2015.M. Sundermeyer, R. Schlter, and H. Ne. LSTM neural networks for language modeling. In Pro-ceedings of Interspeech , 2012.I. Sutskever, J. Martens, and G Hinton. Generating text with recurrent neural networks. In Proceed-ings of International Conference on Machine Learning (ICML) , 2011.I. Sutskever, O. Vinyals, and Q. Le. Sequence to sequence learning with neural networks. InProceedings of Neural Information Processing Systems (NIPS) , 2014.Mingxuan Wang, Zhengdong Lu, Hang Li, Wenbin Jiang, and Qun Liu. gencnn: A convolutionalarchitecture for word sequence prediction. arXiv preprint arXiv:1503.05034 , 2015.P. J. Werbos. Backpropagation through time: what it does and how to do it. Proceedings of theIEEE , 78(10):1550–1560, 1990.Stefan Wermter. A hybrid and connectionist architecture for a scanning understanding. In Proceed-ings of the 10th European conference on Artificial intelligence , 1992.W. Zaremba, I. Sutskever, and O.l Vinyals. Recurrent neural network regularization. InarXiv:1409.2329 , 2014.S. Zhang, H. Jiang, M. Xu, J. Hou, and L. Dai. The fixed-size ordinally-forgetting encoding methodfor neural network language models. In Proceedings of ACL , pp. 495–500, 2015.10
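The perplexities reported in Tables 1–3 are the usual exponential of the average per-word negative log-likelihood; a short reference implementation (our own illustration, not taken from the paper):

```python
import numpy as np

def perplexity(log_probs):
    # log_probs: natural-log probabilities the model assigns to each target word.
    return float(np.exp(-np.mean(log_probs)))

# A model assigning probability 0.01 to every target word has perplexity ~100,
# the level the 3rd-order FOFE/gated HORNNs reach on the PTB test set.
print(perplexity(np.log(np.full(1000, 0.01))))  # ~100.0
```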
ByFU_yNEx
ByZvfijeg
ICLR.cc/2017/conference/-/paper568/official/review
{"title": "can be improved", "rating": "6: Marginally above acceptance threshold", "review": "I think the backbone of the paper is interesting and could lead to something potentially quite useful. I like the idea of connecting signal processing with recurrent network and then using tools from one setting in the other. However, while the work has nuggets of very interesting observations, I feel they can be put together in a better way. \nI think the writeup and everything can be improved and I urge the authors to strive for this if the paper doesn't go through. I think some of the ideas of how to connect to the past are interesting, it would be nice to have more experiments or to try to understand better why this connections help and how.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Higher Order Recurrent Neural Networks
["Rohollah Soltani", "Hui Jiang"]
In this paper, we study novel neural network structures to better model long term dependency in sequential data. We propose to use more memory units to keep track of more preceding states in recurrent neural networks (RNNs), which are all recurrently fed to the hidden layers as feedback through different weighted paths. By extending the popular recurrent structure in RNNs, we provide the models with better short-term memory mechanism to learn long term dependency in sequences. Analogous to digital filters in signal processing, we call these structures as higher order RNNs (HORNNs). Similar to RNNs, HORNNs can also be learned using the back-propagation through time method. HORNNs are generally applicable to a variety of sequence modelling tasks. In this work, we have examined HORNNs for the language modeling task using two popular data sets, namely the Penn Treebank (PTB) and English text8. Experimental results have shown that the proposed HORNNs yield the state-of-the-art performance on both data sets, significantly outperforming the regular RNNs as well as the popular LSTMs.
["Deep learning", "Natural language processing"]
https://openreview.net/forum?id=ByZvfijeg
https://openreview.net/pdf?id=ByZvfijeg
https://openreview.net/forum?id=ByZvfijeg&noteId=ByFU_yNEx
Under review as a conference paper at ICLR 2017HIGHER ORDER RECURRENT NEURAL NETWORKSRohollah Soltani & Hui JiangDepartment of Computer Science and EngineeringYork UniversityToronto, CAfrsoltani,hjg@cse.yorku.caABSTRACTIn this paper, we study novel neural network structures to better model long termdependency in sequential data. We propose to use more memory units to keeptrack of more preceding states in recurrent neural networks (RNNs), which are allrecurrently fed to the hidden layers as feedback through different weighted paths.By extending the popular recurrent structure in RNNs, we provide the models withbetter short-term memory mechanism to learn long term dependency in sequences.Analogous to digital filters in signal processing, we call these structures as higherorder RNNs (HORNNs). Similar to RNNs, HORNNs can also be learned usingthe back-propagation through time method. HORNNs are generally applicable toa variety of sequence modelling tasks. In this work, we have examined HORNNsfor the language modeling task using two popular data sets, namely the Penn Tree-bank (PTB) and English text8. Experimental results have shown that the proposedHORNNs yield the state-of-the-art performance on both data sets, significantlyoutperforming the regular RNNs as well as the popular LSTMs.1 I NTRODUCTIONIn the recent resurgence of neural networks in deep learning, deep neural networks have achievedsuccesses in various real-world applications, such as speech recognition, computer vision and naturallanguage processing. Deep neural networks (DNNs) with a deep architecture of multiple nonlinearlayers are an expressive model that can learn complex features and patterns in data. Each layer ofDNNs learns a representation and transfers them to the next layer and the next layer may continueto extract more complicated features, and finally the last layer generates the desirable output. Fromearly theoretical work, it is well known that neural networks may be used as the universal approx-imators to map from any fixed-size input to another fixed-size output. Recently, more and moreempirical results have demonstrated that the deep structure in DNNs is not just powerful in theorybut also can be reliably learned in practice from a large amount of training data.Sequential modeling is a challenging problem in machine learning, which has been extensively stud-ied in the past. Recently, many deep neural network based models have been successful in this area,as shown in various tasks such as language modeling Mikolov (2012), sequence generation Graves(2013); Sutskever et al. (2011), machine translation Sutskever et al. (2014) and speech recognitionGraves et al. (2013). Among various neural network models, recurrent neural networks (RNNs) areappealing for modeling sequential data because they can capture long term dependency in sequentialdata using a simple mechanism of recurrent feedback. RNNs can learn to model sequential data overan extended period of time, then carry out rather complicated transformations on the sequential data.RNNs have been theoretically proved to be a turing complete machine Siegelmann & Sontag (1995).RNNs in principle can learn to map from one variable-length sequence to another. When unfoldedin time, RNNs are equivalent to very deep neural networks that share model parameters and receivethe input at each time step. The recursion in the hidden layer of RNNs can act as an excellent mem-ory mechanism for the networks. 
In each time step, the learned recursion weights may decide whatinformation to discard and what information to keep in order to relay onwards along time. WhileRNNs are theoretically powerful, the learning of RNNs needs to use the back-propagation throughtime (BPTT) method Werbos (1990) due to the internal recurrent cycles. Unfortunately, in practice,it turns out to be rather difficult to train RNNs to capture long-term dependency due to the fact that1Under review as a conference paper at ICLR 2017the gradients in BPTT tend to either vanish or explode Bengio et al. (1994). Many heuristic meth-ods have been proposed to solve these problems. For example, a simple method, called gradientclipping , is used to avoid gradient explosion Mikolov (2012). However, RNNs still suffer from thevanishing gradient problem since the gradients decay gradually as they are back-propagated throughtime. As a result, some new recurrent structures are proposed, such as long short-term memory(LSTM) Hochreiter & Schmidhuber (1997) and gated recurrent unit (GRU) Cho et al. (2014). Thesemodels use some learnable gates to implement rather complicated feedback structures, which en-sure that some feedback paths can allow the gradients to flow back in time effectively. These modelshave given promising results in many practical applications, such as sequence modeling Graves(2013), language modeling Sundermeyer et al. (2012), hand-written character recognition Liwickiet al. (2012), machine translation Cho et al. (2014), speech recognition Graves et al. (2013).In this paper, we explore an alternative method to learn recurrent neural networks (RNNs) to modellong term dependency in sequential data. We propose to use more memory units to keep track ofmore preceding RNN states, which are all recurrently fed to the hidden layers as feedback throughdifferent weighted paths. Analogous to digital filters in signal processing, we call these new re-current structures as higher order recurrent neural networks (HORNNs). At each time step, theproposed HORNNs directly combine multiple preceding hidden states from various history timesteps, weighted by different matrices, to generate the feedback signal to each hidden layer. By ag-gregating more history information of the RNN states, HORNNs are provided with better short-termmemory mechanism than the regular RNNs. Moreover, those direct connections to more previousRNN states allow the gradients to flow back smoothly in the BPTT learning stage. All of theseensure that HORNNs can be more effectively learned to capture long term dependency. Similar toRNNs and LSTMs, the proposed HORNNs are general enough for variety of sequential modelingtasks. In this work, we have evaluated HORNNs for the language modeling task on two popular datasets, namely the Penn Treebank (PTB) and English text8 sets. Experimental results have shown thatHORNNs yield the state-of-the-art performance on both data sets, significantly outperforming theregular RNNs as well as the popular LSTMs.2 R ELATED WORKHierarchical recurrent neural network proposed in Hihi & Bengio (1996) is one of the earliest papersthat attempt to improve RNNs to capture long term dependency in a better way. It proposes to addlinear time delayed connections to RNNs to improve the gradient descent learning algorithm to finda better solution, eventually solving the gradient vanishing problem. However, in this early work,the idea of multi-resolution recurrent architectures has only been preliminarily examined for somesimple small-scale tasks. 
This work is somehow relevant to our work in this paper but the higherorder RNNs proposed here differs in several aspects. Firstly, we propose to use weighted connectionsin the structure, instead of simple multi-resolution short-cut paths. This makes our models fall intothe category of higher order models. Secondly, we have proposed to use various pooling functionsin generating the feedback signals, which is critical in normalizing the dynamic ranges of gradientsflowing from various paths. Our experiments have shown that the success of our models is largelyattributed to this technique.The most successful approach to deal with vanishing gradients so far is to use long short termmemory (LSTM) model Hochreiter & Schmidhuber (1997). LSTM relies on a fairly sophisticatedstructure made of gates to control flow of information to the hidden neurons. The drawback of theLSTM is that it is complicated and slow to learn. The complexity of this model makes the learningvery time consuming, and hard to scale for larger tasks. Another approach to address this issue isto add a hidden layer to RNNs Mikolov et al. (2014). This layer is responsible for capturing longerterm dependencies in input data by making its weight matrix close to identity. Recently, clock-work RNNs Koutnik et al. (2014) are proposed to address this problem as well, which splits eachhidden layer into several modules running at different clocks. Each module receives signals frominput and computes its output at a predefined clock rate. Gated feedback recurrent neural networksChung et al. (2015) attempt to implement a generalized version using the gated feedback connectionbetween layers of stacked RNNs, allowing the model to adaptively adjust the connection betweenconsecutive hidden layers.2Under review as a conference paper at ICLR 2017Besides, short-cut skipping connections were considered earlier in Wermter (1992), and more re-cently have been found useful in learning very deep feed-forward neural networks as well, such asLee et al. (2014); He et al. (2015). These skipping connections between various layers of neuralnetworks can improve the flow of information in both forward and backward passes. Among them,highway networks Srivastava et al. (2015) introduce rather sophisticated skipping connections be-tween layers, controlled by some gated functions.3 H IGHER ORDER RECURRENT NEURAL NETWORKSA recurrent neural network (RNN) is a type of neural network suitable for modeling a sequence ofarbitrary length. At each time step t, an RNN receives an input xt, the state of the RNN is updatedrecursively as follows (as shown in the left part of Figure 1):ht=f(Winxt+Whht1) (1)wheref()is an nonlinear activation function, such as sigmoid or rectified linear (ReLU), and Winis the weight matrix in the input layer and Whis the state to state recurrent weight matrix. Due tothe recursion, this hidden layer may act as a short-term memory of all previous input data.Given the state of the RNN, i.e., the current activation signals in the hidden layer ht, the RNNgenerates the output according to the following equation:yt=g(Woutht) (2)whereg()denotes the softmax function and Woutis the weight matrix in the output layer. In prin-ciple, this model can be trained using the back-propagation through time (BPTT) algorithm Wer-bos (1990). This model has been used widely in sequence modeling tasks like language modelingMikolov (2012).Figure 1: Comparison of model structures between an RNN (1st order) and a higher order RNN (3rdorder). 
The symbol z1denotes a time-delay unit (equivalent to a memory unit).3.1 H IGHER ORDER RNN S(HORNN S)RNNs are very deep in time and the hidden layer at each time step represents the entire input history,which acts as a short-term memory mechanism. However, due to the gradient vanishing problem inback-propagation, it turns out to be very difficult to learn RNNs to model long-term dependency insequential data.In this paper, we extend the standard RNN structure to better model long-term dependency in se-quential data. As shown in the right part of Figure 1, instead of using only the previous RNN state asthe feedback signal, we propose to employ multiple memory units to generate the feedback signal ateach time step by directly combining multiple preceding RNN states in the past, where these time-delayed RNN states go through separate feedback paths with different weight matrices. Analogousto the filter structures used in signal processing, we call this new recurrent structure as higher orderRNNs , HORNNs in short. The order of HORNNs depends on the number of memory units used forfeedback. For example, the model used in the right of Figure 1 is a 3rd-order HORNN. On the otherhand, regular RNNs may be viewed as 1st-order HORNNs.3Under review as a conference paper at ICLR 2017In HORNNs, the feedback signal is generated by combining multiple preceding RNN states. There-fore, the state of an N-th order HORNN is recursively updated as follows:ht=f Winxt+NXn=1Whnhtn!(3)wherefWhnjn= 1;Ngdenotes the weight matrices used for various feedback paths. Similar toFigure 2: Unfolding a 3rd-order HORNN Figure 3: Illustration of all back-propagationpaths in BPTT for a 3rd-order HORNN.RNNs, HORNNs can also be unfolded in time to get rid of the recurrent cycles. As shown in Figure2, we unfold a 3rd-order HORNN in time, which clearly shows that each HORNN state is explicitlydecided by the current input xtand all previous 3 states in the past. This structure looks similar tothe skipping short-cut paths in deep neural networks but each path in HORNNs maintains a learnableweight matrix. The new structure in HORNNs can significantly improve the model capacity to cap-ture long-term dependency in sequential data. At each time step, by explicitly aggregating multiplepreceding hidden activities, HORNNs may derive a good representation of the history informationin sequences, leading to a significantly enhanced short-term memory mechanism.During the backprop learning procedure, these skipping paths directly connected to more previoushidden states of HORNNs may allow the gradients to flow more easily back in time, which even-tually leads to a more effective learning of models to capture long term dependency in sequences.Therefore, this structure may help to largely alleviate the notorious problem of vanishing gradientsin the RNN learning.Obviously, HORNNs can be learned using the same BPTT algorithm as regular RNNs, except thatthe error signals at each time step need to be back-propagated to multiple feedback paths in thenetwork. As shown in Figure 3, for a 3rd-order HORNN, at each time step t, the error signal fromthe hidden layer htwill have to be back-propagated into four different paths: i) the first one back tothe input layer, xt; ii) three more feedback paths leading to three different histories in time scales,namely ht1,ht2andht3.Interestingly enough, if we use a fully-unfolded implementation for HORNNs as in Figure 2, theoverall computation complexity is comparable with regular RNNs. 
Given a whole sequence, we mayfirst simultaneously compute all hidden activities (from xttohtfor allt). Secondly, we recursivelyupdate htfor alltusing eq.(3). Finally, we use GPUs to compute all outputs together from theupdated hidden states (from httoytfor allt) based on eq.(2). The backward pass in learningcan also be implemented in the same three-step procedure. Except the recursive updates in thesecond step (this issue remains the same in regular RNNs), all remaining computation steps canbe formulated as large matrix multiplications. As a result, the computation of HORNNs can beimplemented fairly efficiently using GPUs.3.2 P OOLING FUNCTIONS FOR HORNN SAs discussed above, the shortcut paths in HORNNs may help the models to capture long-term de-pendency in sequential data. On the other hand, they may also complicate the learning in a differentway. Due to different numbers of hidden layers along various paths, the signals flowing from differ-ent paths may vary dramatically in the dynamic range. For example, in the forward pass in Figure2, three different feedback signals from different time scales, e.g. ht1,ht2andht3, flow into4Under review as a conference paper at ICLR 2017the hidden layer to compute the new hidden state ht. The dynamic range of these signals may varydramatically from case to case. The situation may get even worse in the backward pass during theBPTT learning. For example, in a 3rd-order HORNN in Figure 2, the node ht3may directly re-ceive an error signal from the node ht. In some cases, it may get so strong as to overshadow othererror signals coming from closer neighbours of ht1andht2. This may impede the learning ofHORNNs, yielding slow convergence or even poor performance.Here, we have proposed to use some pooling functions to calibrate the signals from different feed-back paths before they are used to recursively generate a new hidden state, as shown in Figure 4.In the following, we will investigate three different choices for the pooling function in Figure 4,including max-based pooling, FOFE-based pooling and gated pooling.3.2.1 M AX-BASED POOLINGMax-based pooling is a simple strategy that chooses the most responsive unit (exhibiting the largestactivation value) among various paths to transfer to the hidden layer to generate the new hiddenstate. Many biological experiments have shown that biological neuron networks tend to use a similarstrategy in learning and firing.In this case, instead of using eq.(3), we use the following formula to update the hidden state ofHORNNs:ht=fWinxt+ maxNn=1(Whnhtn)(4)where maximization is performed element-wisely to choose the maximum value in each dimensionto feed to the hidden layer to generate the new hidden state. The aim here is to capture the mostrelevant feature and map it to a fixed predefined size.The max pooling function is simple and biologically inspired. However, the max pooling strategyalso has some serious disadvantages. For example, it has no forgetting mechanism and the signalsmay get stronger and stronger. Furthermore, it loses the order information of the preceding historiessince it only choose the maximum values but it does not know where the maximum comes from.Figure 4: A pooling function is used to calibratevarious feedback paths in HORNNs.Figure 5: Gated HORNNs use learnable gates tocombine various feedback signals.3.2.2 FOFE- BASED POOLINGThe fixed-size ordinally-forgetting encoding (FOFE) method was proposed in Zhang et al. (2015)to encode any variable-length sequence of data into a fixed-size representation. 
In FOFE, a single forgetting factor α (0 < α < 1) is used to encode the position information in sequences based on the idea of exponential forgetting to derive invertible fixed-size representations. In this work, we borrow this simple idea of exponential forgetting to calibrate all preceding histories using a pre-selected forgetting factor α as follows:

h_t = f\left( W_{in} x_t + \sum_{n=1}^{N} \alpha^{n} W_{hn} h_{t-n} \right)   (5)

where the forgetting factor α is manually pre-selected between 0 < α < 1. The above constant coefficients related to α play an important role in calibrating signals from different paths in both forward and backward passes of HORNNs since they slightly underweight the older history over the recent one in an explicit way.

3.2.3 GATED HORNNS

In this section, we follow the ideas of the learnable gates in LSTMs Hochreiter & Schmidhuber (1997) and GRUs Cho et al. (2014) as well as the recent soft-attention in Bahdanau et al. (2014). Instead of using constant coefficients derived from a forgetting factor, we may let the network automatically determine the combination weights based on the current state and input. In this case, we may use sigmoid gates to compute combination weights to regulate the information flowing from various feedback paths. The sigmoid gates take the current data and previous hidden state as input to decide how to weight all of the preceding hidden states. The gate function weights how the current hidden state is generated based on all the previous time-steps of the hidden layer. This allows the network to potentially remember information for a longer period of time. In a gated HORNN, the hidden state is recursively computed as follows:

h_t = f\left( W_{in} x_t + \sum_{n=1}^{N} r_n \odot \left( W_{hn} h_{t-n} \right) \right)   (6)

where \odot denotes element-wise multiplication of two equally-sized vectors, and the gate signal r_n is calculated as

r_n = \sigma\left( W_{g1n} x_t + W_{g2n} h_{t-n} \right)   (7)

where \sigma(\cdot) is the sigmoid function, and W_{g1n} and W_{g2n} denote two weight matrices introduced for each gate.

Note that the computation complexity of gated HORNNs is comparable with LSTMs and GRUs, significantly exceeding the other HORNN structures because of the overhead from the gate functions in eq. (7).

4 EXPERIMENTS

In this section, we evaluate the proposed higher order RNNs (HORNNs) on several language modeling tasks. A statistical language model (LM) is a probability distribution over sequences of words in natural languages. Recently, neural networks have been successfully applied to language modeling Bengio et al. (2003); Mikolov et al. (2011), yielding the state-of-the-art performance. In language modeling tasks, it is quite important to take advantage of the long-term dependency of natural languages. Therefore, it is widely reported that RNN based LMs can outperform feedforward neural networks in language modeling tasks. We have chosen two popular LM data sets, namely the Penn Treebank (PTB) and English text8 sets, to compare our proposed HORNNs with traditional n-gram LMs, RNN-based LMs and the state-of-the-art performance obtained by LSTMs Graves (2013); Mikolov et al. (2014), FOFE based feedforward NNs Zhang et al. (2015) and memory networks Sukhbaatar et al. (2015).

In our experiments, we use the mini-batch stochastic gradient descent (SGD) algorithm to train all neural networks. The number of back-propagation through time (BPTT) steps is set to 30 for all recurrent models. Each model update is conducted using a mini-batch of 20 subsequences, each of which is 30 in length.
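The two remaining pooling variants, eqs. (5) to (7), can be written in the same NumPy style as single-step updates. The α = 0.6 default mirrors the value reported in the experimental setup below, while the sigmoid helper and parameter shapes are assumptions made for illustration, not the authors' code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hornn_fofe_step(x_t, h_prev, W_in, W_h, alpha=0.6, f=np.tanh):
    """One step of the FOFE-pooled HORNN, eq. (5); h_prev = [h_{t-1}, ..., h_{t-N}]."""
    pre = W_in @ x_t
    for n, h in enumerate(h_prev, start=1):
        pre += (alpha ** n) * (W_h[n - 1] @ h)   # exponentially down-weight older states
    return f(pre)

def hornn_gated_step(x_t, h_prev, W_in, W_h, W_g1, W_g2, f=np.tanh):
    """One step of the gated HORNN, eqs. (6)-(7); each path n has its own gate r_n."""
    pre = W_in @ x_t
    for n, h in enumerate(h_prev, start=1):
        r_n = sigmoid(W_g1[n - 1] @ x_t + W_g2[n - 1] @ h)   # eq. (7)
        pre += r_n * (W_h[n - 1] @ h)                        # element-wise gating, eq. (6)
    return f(pre)
```

The extra matrix-vector products inside the gate are what make the gated variant's cost closer to an LSTM or GRU than to the plain HORNN.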
All model parameters (weight matrices in all layers) are randomly initialized based on a Gaussian distribution with zero mean and standard deviation of 0.1. A hard clipping is set to 5.0 to avoid gradient explosion during the BPTT learning. The initial learning rate is set to 0.5 and we halve the learning rate at the end of each epoch if the cross entropy function on the validation set does not decrease. We have used weight decay, momentum and column normalization Pachitariu & Sahani (2013) in our experiments to improve model generalization. In the FOFE-based pooling function for HORNNs, we set the forgetting factor, α, to 0.6. We have used 400 nodes in each hidden layer for the PTB data set and 500 nodes per hidden layer for the English text8 set. In our experiments, we do not use the dropout regularization Zaremba et al. (2014) since it significantly slows down the training speed and is not applicable to any larger corpora.[1]

[1] We will soon release the code for readers to reproduce all results reported in this paper.

Table 1: Perplexities on the PTB test set for various HORNNs are shown as a function of order (2, 3, 4). Note the perplexity of a regular RNN (1st order) is 123, as reported in Mikolov et al. (2011).

Models         2nd order   3rd order   4th order
HORNN          111         108         109
Max HORNN      110         109         108
FOFE HORNN     103         101         100
Gated HORNN    102         100         100

4.1 LANGUAGE MODELING ON PTB

The standard Penn Treebank (PTB) corpus consists of about 1M words. The vocabulary size is limited to 10k. The preprocessing method and the way to split data into training/validation/test sets are the same as Mikolov et al. (2011). PTB is a relatively small text corpus. We first investigate various model configurations for the HORNNs based on PTB and then compare the best performance with other results reported on this task.

4.1.1 EFFECT OF ORDERS IN HORNNS

In the first experiment, we investigate how the orders used in HORNNs may affect the performance of language models (as measured by perplexity). We have examined all different higher order model structures proposed in this paper, including HORNNs and various pooling functions in HORNNs. The orders of these examined models vary among 2, 3 and 4. We have listed the performance of different models on PTB in Table 1. As we may see, we are able to achieve a significant improvement in perplexity when using higher order RNNs for language models on PTB, roughly a 10-20 reduction in PPL over regular RNNs. We can see that performance may improve slightly when the order is increased from 2 to 3 but no significant gain is observed when the order is further increased to 4. As a result, we choose the 3rd-order HORNN structure for the following experiments. Among all different HORNN structures, we can see that FOFE-based pooling and gated structures yield the best performance on PTB.

In language modeling, both input and output layers account for the major portion of model parameters. Therefore, we do not significantly increase model size when we go to higher order structures. For example, in Table 1, a regular RNN contains about 8.3 million weights while a 3rd-order HORNN (the same for max or FOFE pooling structures) has about 8.6 million weights. In comparison, an LSTM model has about 9.3 million weights and a 3rd-order gated HORNN has about 9.6 million weights.

As for the training speed, most HORNN models are only slightly slower than regular RNNs.
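The optimization recipe described above (hard gradient clipping at 5.0, initial learning rate 0.5, halved whenever the validation cross-entropy fails to decrease) can be summarized with a small framework-agnostic sketch. The toy validation losses and helper names below are placeholders for illustration, not the authors' implementation.

```python
import numpy as np

def clip_gradients(grads, threshold=5.0):
    """Hard element-wise clipping used to avoid gradient explosion during BPTT."""
    return [np.clip(g, -threshold, threshold) for g in grads]

def next_learning_rate(lr, val_loss, best_val_loss):
    """Halve the learning rate whenever validation cross-entropy does not decrease."""
    return lr * 0.5 if val_loss >= best_val_loss else lr

# toy demonstration of the schedule over a few made-up validation losses
lr, best = 0.5, float("inf")
for val_loss in [5.2, 4.8, 4.9, 4.9, 4.7]:
    lr = next_learning_rate(lr, val_loss, best)
    best = min(best, val_loss)
    print(f"val={val_loss:.1f}  lr={lr:.4f}")
```

In a real training loop, `clip_gradients` would be applied to the BPTT gradients of every weight matrix before the SGD update with weight decay and momentum.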
For example, one epoch of training on PTB running on one NVIDIA TITAN X GPU takes about 80 seconds for an RNN and about 120 seconds for a 3rd-order HORNN (the same for max or FOFE pooling structures). Similarly, training of gated HORNNs is also slightly slower than LSTMs. For example, one epoch on PTB takes about 200 seconds for an LSTM, and about 225 seconds for a 3rd-order gated HORNN.

4.1.2 MODEL COMPARISON ON PENN TREEBANK

Finally, we report the best performance of various HORNNs on the PTB test set in Table 2. We compare our 3rd-order HORNNs with all other models reported on this task, including RNN Mikolov et al. (2011), stack RNN Pascanu et al. (2014), deep RNN Pascanu et al. (2014), FOFE-FNN Zhang et al. (2015) and LSTM Graves (2013).[2] From the results in Table 2, we can see that our proposed higher order RNN architectures significantly outperform all other baseline models reported on this task. Both FOFE-based pooling and gated HORNNs have achieved the state-of-the-art performance, i.e., 100 in perplexity on this task. To the best of our knowledge, this is the best reported performance on PTB under the same training condition.

[2] All models in Table 2 do not use the dropout regularization, which is somehow equivalent to data augmentation. In Zaremba et al. (2014); Kim et al. (2015), the proposed LSTM-LMs (word level or character level) achieve lower perplexity but they both use the dropout regularization and much bigger models, and it takes days to train the models, which is not applicable to other larger tasks.

Table 2: Perplexities on the PTB test set for various examined models.

Models                                        Test
KN 5-gram Mikolov et al. (2011)               141
RNN Mikolov et al. (2011)                     123
CSLM5 Aransa et al. (2015)                    118.08
LSTM Graves (2013)                            117
genCNN Wang et al. (2015)                     116.4
Gated word & char Miyamoto & Cho (2016)       113.52
E2E Mem Net Sukhbaatar et al. (2015)          111
Stack RNN Pascanu et al. (2014)               110
Deep RNN Pascanu et al. (2014)                107
FOFE-FNN Zhang et al. (2015)                  108
HORNN (3rd order)                             108
Max HORNN (3rd order)                         109
FOFE HORNN (3rd order)                        101
Gated HORNN (3rd order)                       100

Table 3: Perplexities on the text8 test set for various models.

Models                                        Test
RNN Mikolov et al. (2014)                     184
LSTM Mikolov et al. (2014)                    156
SCRNN Mikolov et al. (2014)                   161
E2E Mem Net Sukhbaatar et al. (2015)          147
HORNN (3rd order)                             172
Max HORNN (3rd order)                         163
FOFE HORNN (3rd order)                        154
Gated HORNN (3rd order)                       144

4.2 LANGUAGE MODELING ON ENGLISH TEXT8

In this experiment, we evaluate our proposed HORNNs on a much larger text corpus, namely the English text8 data set. The text8 data set contains a preprocessed version of the first 100 million characters downloaded from the Wikipedia website. We have used the same preprocessing method as Mikolov et al. (2014) to process the data set to generate the training and test sets. We have limited the vocabulary size to about 44k by replacing all words occurring less than 10 times in the training set with an <UNK> token. The text8 set is about 20 times larger than PTB in corpus size. The model training on text8 takes longer to finish. We have not tuned hyperparameters on this data set. We simply follow the best setting used in PTB to train all HORNNs for the text8 data set. Meanwhile, we also follow the same learning schedule used in Mikolov et al. (2014): we first initialize the learning rate to 0.5 and run 5 epochs using this learning rate; after that, the learning rate is halved at the end of every epoch.

Because the training is time-consuming, we have only evaluated 3rd-order HORNNs on the text8 data set.
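For reference, the perplexities reported in Tables 2 and 3 follow the standard definition for language models: the exponentiated average per-word negative log-likelihood on the test text. A minimal sketch of that computation, assuming the model provides a probability for each target word, is:

```python
import math

def perplexity(word_probs):
    """Perplexity = exp(mean negative log-likelihood) over the evaluated words.

    word_probs: iterable of model probabilities p(w_t | history), one per test word.
    """
    nll = [-math.log(p) for p in word_probs]
    return math.exp(sum(nll) / len(nll))

# toy example: a model that assigns probability 0.01 to every word has perplexity 100
print(perplexity([0.01] * 1000))   # -> 100.0
```

So the drop from 123 (regular RNN) to 100 (gated HORNN) on PTB corresponds to the model assigning, on average, noticeably higher probability to each test word.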
The perplexities of various HORNNs are summarized in Table 3. We have compared ourHORNNs with all other baseline models reported on this task, including RNN Mikolov et al. (2014),LSTM Mikolov et al. (2014), SCRNN Mikolov et al. (2014) and end-to-end memory networksSukhbaatar et al. (2015). Results have shown that all HORNN models work pretty well in this dataset except the normal HORNN significantly underperforms the other three models. Among them,the gated HORNN model has achieved the best performance, i.e., 144 in perplexity on this task,which is slightly better than the recent result obtained by end-to-end memory networks (using arather complicated structure). To the best of our knowledge, this is the best performance reportedon this task.5 C ONCLUSIONSIn this paper, we have proposed some new structures for recurrent neural networks, called as higherorder RNNs (HORNNs) . In these structures, we use more memory units to keep track of more pre-ceding RNN states, which are all fed along various feedback paths to the hidden layer to generatethe feedback signals. In this way, we may enhance the model to capture long term dependency insequential data. Moreover, we have proposed to use several types of pooling functions to calibratemultiple feedback paths. Experiments have shown that the pooling technique plays a critical rolein learning higher order RNNs effectively. In this work, we have examined HORNNs for the lan-guage modeling task using two popular data sets, namely the Penn Treebank (PTB) and text8 sets.Experimental results have shown that the proposed higher order RNNs yield the state-of-the-art per-8Under review as a conference paper at ICLR 2017formance on both data sets, significantly outperforming the regular RNNs as well as the popularLSTMs. As the future work, we are going to continue to explore HORNNs for other sequentialmodeling tasks, such as speech recognition, sequence-to-sequence modelling and so on.REFERENCESWalid Aransa, Holger Schwenk, and Lo ̈ıc Barrault. Improving continuous space language modelsusing auxiliary features. In Proceedings of the 12th International Workshop on Spoken LanguageTranslation , pp. 151–158, 2015.D. Bahdanau, K. Cho, and Y . Bengio. Neural machine translation by jointly learning to align andtranslate. In arXiv:1409.0473 , 2014.Y . Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent isdifficult. IEEE Transactions on Neural Networks , 5(2):157–166, 1994.Y . Bengio, R. Ducharme, P. Vincent, and C. Janvin. A neural probabilistic language model. Journalof Machine Learning Research , 3:1137–1155, 2003.K. Cho, B. Van Merri ̈enboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y . Bengio.Learning phrase representations using RNN encoder-decoder for statistical machine translation.InProceedings of EMNLP , 2014.J. Chung, C. Gulcehre, K. Cho, and Y . Bengio. Gated feedback recurrent neural networks. InProceedings of International Conference on Machine Learning (ICML) , 2015.A. Graves. Generating sequences with recurrent neural networks. In arXiv:1308.0850 , 2013.A. Graves, A. Mohamed, and G Hinton. Speech recognition with deep recurrent neural. In Proceed-ings of ICASSP , 2013.K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. InarXiv:1512.03385 , 2015.Salah Hihi and Yoshua Bengio. Hierarchical recurrent neural networks for long-term dependencies.InProceedings of Neural Information Processing Systems (NIPS) , 1996.S. Hochreiter and J. Schmidhuber. Long short-term memory. 
Neural computation , 9(8):1735–1780,1997.Y . Kim, Y . Jernite, D. Sontag, and A. M. Rush. Character-aware neural language models. InarXiv:1508.06615 , 2015.J. Koutnik, K. Greff, F. Gomez, and J. Schmidhuber. A clockwork rnn. In Proceedings of Interna-tional Conference on Machine Learning (ICML) , 2014.C. Y . Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply supervised nets. In arXiv:1409.5185 ,2014.M. Liwicki, A. Graves, and H. Bunke. Neural networks for handwriting recognition, Book Chap-ter, Computational intelligence paradigms in advanced pattern classification. Springer BerlinHeidelberg, 2012.T. Mikolov. Statistical Language Models based on Neural Networks . PhD thesis, Brno Universityof Technology, 2012.T. Mikolov, S. Kombrink, L. Burget, J.H. ˇCernock `y, and S. Khudanpur. Extensions of recurrentneural network language model. In Proceedings ICASSP , pp. 5528–5531, 2011.T. Mikolov, A. Joulin, S. Chopra, M. Mathieu, and M. Ranzato. Learning longer memory in recurrentneural networks. In arXiv 1412.7753 , 2014.Yasumasa Miyamoto and Kyunghyun Cho. Gated word-character recurrent language model. arXivpreprint arXiv:1606.01700 , 2016.9Under review as a conference paper at ICLR 2017M. Pachitariu and M. Sahani. Regularization and nonlinearities for neural language models: whenare they needed? In arXiv:1301.5650 , 2013.R. Pascanu, C. Gulcehre, K. Cho, and Y . Bengio. How to construct deep recurrent neural networks.InProceedings of ICLR , 2014.H. T. Siegelmann and E. D. Sontag. On the computational power of neural nets. Journal of computerand system sciences , 50.(1):132–150, 1995.R. K. Srivastava, K. Greff, and J. Schmidhuber. Highway networks. In Proceedings of NeuralInformation Processing Systems (NIPS) , 2015.S. Sukhbaatar, A. Szlam, J. Weston, and R. Fergus. End-to-end memory networks. In Proceedingsof Neural Information Processing Systems (NIPS) , 2015.M. Sundermeyer, R. Schlter, and H. Ne. LSTM neural networks for language modeling. In Pro-ceedings of Interspeech , 2012.I. Sutskever, J. Martens, and G Hinton. Generating text with recurrent neural networks. In Proceed-ings of International Conference on Machine Learning (ICML) , 2011.I. Sutskever, O. Vinyals, and Q. Le. Sequence to sequence learning with neural networks. InProceedings of Neural Information Processing Systems (NIPS) , 2014.Mingxuan Wang, Zhengdong Lu, Hang Li, Wenbin Jiang, and Qun Liu. gencnn: A convolutionalarchitecture for word sequence prediction. arXiv preprint arXiv:1503.05034 , 2015.P. J. Werbos. Backpropagation through time: what it does and how to do it. Proceedings of theIEEE , 78(10):1550–1560, 1990.Stefan Wermter. A hybrid and connectionist architecture for a scanning understanding. In Proceed-ings of the 10th European conference on Artificial intelligence , 1992.W. Zaremba, I. Sutskever, and O.l Vinyals. Recurrent neural network regularization. InarXiv:1409.2329 , 2014.S. Zhang, H. Jiang, M. Xu, J. Hou, and L. Dai. The fixed-size ordinally-forgetting encoding methodfor neural network language models. In Proceedings of ACL , pp. 495–500, 2015.10
B1uDD-zVx
ByZvfijeg
ICLR.cc/2017/conference/-/paper568/official/review
{"title": "Incremental work", "rating": "3: Clear rejection", "review": "This paper proposes an idea of looking n-steps backward when modelling sequences with RNNs. The proposed RNN does not only use the previous hidden state (t-1) but also looks further back ( (t - k) steps, where k=1,2,3,4 ). The paper also proposes a few different ways to aggregate multiple hidden states from the past.\n\n\nThe reviewer can see few issues with this paper.\n\nFirstly, the writing of this paper requires improvement. The introduction and abstract are wasting too much space just to explain unrelated facts or to describe already well-known things in the literature. Some of the statements written in the paper are misleading. For instance, it explains, \u201cAmong various neural network models, recurrent neural networks (RNNs) are appealing for modeling sequential data because they can capture long term dependency in sequential data using a simple mechanism of recurrent feedback\u201d and then it says RNNs cannot actually capture long-term dependencies that well. RNNs are appealing in the first place because they can handle variable length sequences and can model temporal relationships between each symbol in a sequence. The criticism against LSTMs is hard to accept when it says: LSTMs are slow and because of the slowness, they are hard to scale at larger tasks. But we all know that some companies are already using gigantic seq2seq models for their production (LSTMs are used as building blocks in their systems). This indicates that the LSTMs can be practically used in a very large-scale setting.\n\n\nSecondly, the idea proposed in the paper is incremental and not new to the field. There are other previous works that propose to use direct connections to the previous hidden states [1]. However, the previous works do not use aggregation of multiple number of previous hidden states. Most importantly, the paper fails to deliver a proper analysis on whether its main contribution is actually helpful to improve the problem posed in the paper. The new architecture is said that it handles the long-term dependencies better, however, there is no rigorous proof or intuitive design in the architecture that help us to understand why it should work better. By the design of the architecture, and speaking in very high-level, it seems like the model maybe helpful to mitigate the vanishing gradients issue by a linear factor. It is always a good practice to have at least one page to analyze the empirical findings in the paper.\n\n\nThirdly, the baseline models used in this paper are very weak. Their are plenty of other models that are trained and tested on word-level language modelling task using Penn Treebank corpus, but the paper only contains a few of outdated models. I cannot fully agree on the statement \u201cTo the best of our knowledge, this is the best performance on PTB under the same training condition\u201d, these days, RNN-based methods usually score below 80 in terms of the test perplexity, which are far lower than 100 achieved in this paper.\n\n\n[1] Zhang et al., \u201cArchitectural Complexity Measures of Recurrent Neural Networks\u201d, NIPS\u201916\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Higher Order Recurrent Neural Networks
["Rohollah Soltani", "Hui Jiang"]
In this paper, we study novel neural network structures to better model long term dependency in sequential data. We propose to use more memory units to keep track of more preceding states in recurrent neural networks (RNNs), which are all recurrently fed to the hidden layers as feedback through different weighted paths. By extending the popular recurrent structure in RNNs, we provide the models with better short-term memory mechanism to learn long term dependency in sequences. Analogous to digital filters in signal processing, we call these structures as higher order RNNs (HORNNs). Similar to RNNs, HORNNs can also be learned using the back-propagation through time method. HORNNs are generally applicable to a variety of sequence modelling tasks. In this work, we have examined HORNNs for the language modeling task using two popular data sets, namely the Penn Treebank (PTB) and English text8. Experimental results have shown that the proposed HORNNs yield the state-of-the-art performance on both data sets, significantly outperforming the regular RNNs as well as the popular LSTMs.
["Deep learning", "Natural language processing"]
https://openreview.net/forum?id=ByZvfijeg
https://openreview.net/pdf?id=ByZvfijeg
https://openreview.net/forum?id=ByZvfijeg&noteId=B1uDD-zVx
Under review as a conference paper at ICLR 2017HIGHER ORDER RECURRENT NEURAL NETWORKSRohollah Soltani & Hui JiangDepartment of Computer Science and EngineeringYork UniversityToronto, CAfrsoltani,hjg@cse.yorku.caABSTRACTIn this paper, we study novel neural network structures to better model long termdependency in sequential data. We propose to use more memory units to keeptrack of more preceding states in recurrent neural networks (RNNs), which are allrecurrently fed to the hidden layers as feedback through different weighted paths.By extending the popular recurrent structure in RNNs, we provide the models withbetter short-term memory mechanism to learn long term dependency in sequences.Analogous to digital filters in signal processing, we call these structures as higherorder RNNs (HORNNs). Similar to RNNs, HORNNs can also be learned usingthe back-propagation through time method. HORNNs are generally applicable toa variety of sequence modelling tasks. In this work, we have examined HORNNsfor the language modeling task using two popular data sets, namely the Penn Tree-bank (PTB) and English text8. Experimental results have shown that the proposedHORNNs yield the state-of-the-art performance on both data sets, significantlyoutperforming the regular RNNs as well as the popular LSTMs.1 I NTRODUCTIONIn the recent resurgence of neural networks in deep learning, deep neural networks have achievedsuccesses in various real-world applications, such as speech recognition, computer vision and naturallanguage processing. Deep neural networks (DNNs) with a deep architecture of multiple nonlinearlayers are an expressive model that can learn complex features and patterns in data. Each layer ofDNNs learns a representation and transfers them to the next layer and the next layer may continueto extract more complicated features, and finally the last layer generates the desirable output. Fromearly theoretical work, it is well known that neural networks may be used as the universal approx-imators to map from any fixed-size input to another fixed-size output. Recently, more and moreempirical results have demonstrated that the deep structure in DNNs is not just powerful in theorybut also can be reliably learned in practice from a large amount of training data.Sequential modeling is a challenging problem in machine learning, which has been extensively stud-ied in the past. Recently, many deep neural network based models have been successful in this area,as shown in various tasks such as language modeling Mikolov (2012), sequence generation Graves(2013); Sutskever et al. (2011), machine translation Sutskever et al. (2014) and speech recognitionGraves et al. (2013). Among various neural network models, recurrent neural networks (RNNs) areappealing for modeling sequential data because they can capture long term dependency in sequentialdata using a simple mechanism of recurrent feedback. RNNs can learn to model sequential data overan extended period of time, then carry out rather complicated transformations on the sequential data.RNNs have been theoretically proved to be a turing complete machine Siegelmann & Sontag (1995).RNNs in principle can learn to map from one variable-length sequence to another. When unfoldedin time, RNNs are equivalent to very deep neural networks that share model parameters and receivethe input at each time step. The recursion in the hidden layer of RNNs can act as an excellent mem-ory mechanism for the networks. 
In each time step, the learned recursion weights may decide whatinformation to discard and what information to keep in order to relay onwards along time. WhileRNNs are theoretically powerful, the learning of RNNs needs to use the back-propagation throughtime (BPTT) method Werbos (1990) due to the internal recurrent cycles. Unfortunately, in practice,it turns out to be rather difficult to train RNNs to capture long-term dependency due to the fact that1Under review as a conference paper at ICLR 2017the gradients in BPTT tend to either vanish or explode Bengio et al. (1994). Many heuristic meth-ods have been proposed to solve these problems. For example, a simple method, called gradientclipping , is used to avoid gradient explosion Mikolov (2012). However, RNNs still suffer from thevanishing gradient problem since the gradients decay gradually as they are back-propagated throughtime. As a result, some new recurrent structures are proposed, such as long short-term memory(LSTM) Hochreiter & Schmidhuber (1997) and gated recurrent unit (GRU) Cho et al. (2014). Thesemodels use some learnable gates to implement rather complicated feedback structures, which en-sure that some feedback paths can allow the gradients to flow back in time effectively. These modelshave given promising results in many practical applications, such as sequence modeling Graves(2013), language modeling Sundermeyer et al. (2012), hand-written character recognition Liwickiet al. (2012), machine translation Cho et al. (2014), speech recognition Graves et al. (2013).In this paper, we explore an alternative method to learn recurrent neural networks (RNNs) to modellong term dependency in sequential data. We propose to use more memory units to keep track ofmore preceding RNN states, which are all recurrently fed to the hidden layers as feedback throughdifferent weighted paths. Analogous to digital filters in signal processing, we call these new re-current structures as higher order recurrent neural networks (HORNNs). At each time step, theproposed HORNNs directly combine multiple preceding hidden states from various history timesteps, weighted by different matrices, to generate the feedback signal to each hidden layer. By ag-gregating more history information of the RNN states, HORNNs are provided with better short-termmemory mechanism than the regular RNNs. Moreover, those direct connections to more previousRNN states allow the gradients to flow back smoothly in the BPTT learning stage. All of theseensure that HORNNs can be more effectively learned to capture long term dependency. Similar toRNNs and LSTMs, the proposed HORNNs are general enough for variety of sequential modelingtasks. In this work, we have evaluated HORNNs for the language modeling task on two popular datasets, namely the Penn Treebank (PTB) and English text8 sets. Experimental results have shown thatHORNNs yield the state-of-the-art performance on both data sets, significantly outperforming theregular RNNs as well as the popular LSTMs.2 R ELATED WORKHierarchical recurrent neural network proposed in Hihi & Bengio (1996) is one of the earliest papersthat attempt to improve RNNs to capture long term dependency in a better way. It proposes to addlinear time delayed connections to RNNs to improve the gradient descent learning algorithm to finda better solution, eventually solving the gradient vanishing problem. However, in this early work,the idea of multi-resolution recurrent architectures has only been preliminarily examined for somesimple small-scale tasks. 
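A tiny numeric illustration of the vanishing-gradient effect described above: back-propagating through a simple tanh RNN multiplies the error vector by the transposed recurrent Jacobian at every step, so its norm typically decays geometrically over long sequences. The weight scale, dimensionality, and number of steps below are arbitrary choices made only for this demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 50
W_h = 0.05 * rng.standard_normal((d, d))   # small recurrent weights give a contractive Jacobian
grad = rng.standard_normal(d)              # error signal at the final time step

norms = []
for step in range(30):                     # push the gradient back through 30 time steps
    a = rng.standard_normal(d)             # stand-in pre-activation at this step
    # transpose-Jacobian of one tanh RNN step: W_h^T diag(1 - tanh(a)^2)
    grad = (W_h.T * (1.0 - np.tanh(a) ** 2)) @ grad
    norms.append(np.linalg.norm(grad))

print(norms[0], norms[-1])   # the norm typically shrinks by several orders of magnitude
```

Gradient clipping, mentioned above, only bounds the opposite failure mode (explosion); it does nothing against this kind of decay, which is why gated architectures and the shortcut paths discussed in this paper are needed.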
This work is somehow relevant to our work in this paper but the higherorder RNNs proposed here differs in several aspects. Firstly, we propose to use weighted connectionsin the structure, instead of simple multi-resolution short-cut paths. This makes our models fall intothe category of higher order models. Secondly, we have proposed to use various pooling functionsin generating the feedback signals, which is critical in normalizing the dynamic ranges of gradientsflowing from various paths. Our experiments have shown that the success of our models is largelyattributed to this technique.The most successful approach to deal with vanishing gradients so far is to use long short termmemory (LSTM) model Hochreiter & Schmidhuber (1997). LSTM relies on a fairly sophisticatedstructure made of gates to control flow of information to the hidden neurons. The drawback of theLSTM is that it is complicated and slow to learn. The complexity of this model makes the learningvery time consuming, and hard to scale for larger tasks. Another approach to address this issue isto add a hidden layer to RNNs Mikolov et al. (2014). This layer is responsible for capturing longerterm dependencies in input data by making its weight matrix close to identity. Recently, clock-work RNNs Koutnik et al. (2014) are proposed to address this problem as well, which splits eachhidden layer into several modules running at different clocks. Each module receives signals frominput and computes its output at a predefined clock rate. Gated feedback recurrent neural networksChung et al. (2015) attempt to implement a generalized version using the gated feedback connectionbetween layers of stacked RNNs, allowing the model to adaptively adjust the connection betweenconsecutive hidden layers.2Under review as a conference paper at ICLR 2017Besides, short-cut skipping connections were considered earlier in Wermter (1992), and more re-cently have been found useful in learning very deep feed-forward neural networks as well, such asLee et al. (2014); He et al. (2015). These skipping connections between various layers of neuralnetworks can improve the flow of information in both forward and backward passes. Among them,highway networks Srivastava et al. (2015) introduce rather sophisticated skipping connections be-tween layers, controlled by some gated functions.3 H IGHER ORDER RECURRENT NEURAL NETWORKSA recurrent neural network (RNN) is a type of neural network suitable for modeling a sequence ofarbitrary length. At each time step t, an RNN receives an input xt, the state of the RNN is updatedrecursively as follows (as shown in the left part of Figure 1):ht=f(Winxt+Whht1) (1)wheref()is an nonlinear activation function, such as sigmoid or rectified linear (ReLU), and Winis the weight matrix in the input layer and Whis the state to state recurrent weight matrix. Due tothe recursion, this hidden layer may act as a short-term memory of all previous input data.Given the state of the RNN, i.e., the current activation signals in the hidden layer ht, the RNNgenerates the output according to the following equation:yt=g(Woutht) (2)whereg()denotes the softmax function and Woutis the weight matrix in the output layer. In prin-ciple, this model can be trained using the back-propagation through time (BPTT) algorithm Wer-bos (1990). This model has been used widely in sequence modeling tasks like language modelingMikolov (2012).Figure 1: Comparison of model structures between an RNN (1st order) and a higher order RNN (3rdorder). 
The symbol z1denotes a time-delay unit (equivalent to a memory unit).3.1 H IGHER ORDER RNN S(HORNN S)RNNs are very deep in time and the hidden layer at each time step represents the entire input history,which acts as a short-term memory mechanism. However, due to the gradient vanishing problem inback-propagation, it turns out to be very difficult to learn RNNs to model long-term dependency insequential data.In this paper, we extend the standard RNN structure to better model long-term dependency in se-quential data. As shown in the right part of Figure 1, instead of using only the previous RNN state asthe feedback signal, we propose to employ multiple memory units to generate the feedback signal ateach time step by directly combining multiple preceding RNN states in the past, where these time-delayed RNN states go through separate feedback paths with different weight matrices. Analogousto the filter structures used in signal processing, we call this new recurrent structure as higher orderRNNs , HORNNs in short. The order of HORNNs depends on the number of memory units used forfeedback. For example, the model used in the right of Figure 1 is a 3rd-order HORNN. On the otherhand, regular RNNs may be viewed as 1st-order HORNNs.3Under review as a conference paper at ICLR 2017In HORNNs, the feedback signal is generated by combining multiple preceding RNN states. There-fore, the state of an N-th order HORNN is recursively updated as follows:ht=f Winxt+NXn=1Whnhtn!(3)wherefWhnjn= 1;Ngdenotes the weight matrices used for various feedback paths. Similar toFigure 2: Unfolding a 3rd-order HORNN Figure 3: Illustration of all back-propagationpaths in BPTT for a 3rd-order HORNN.RNNs, HORNNs can also be unfolded in time to get rid of the recurrent cycles. As shown in Figure2, we unfold a 3rd-order HORNN in time, which clearly shows that each HORNN state is explicitlydecided by the current input xtand all previous 3 states in the past. This structure looks similar tothe skipping short-cut paths in deep neural networks but each path in HORNNs maintains a learnableweight matrix. The new structure in HORNNs can significantly improve the model capacity to cap-ture long-term dependency in sequential data. At each time step, by explicitly aggregating multiplepreceding hidden activities, HORNNs may derive a good representation of the history informationin sequences, leading to a significantly enhanced short-term memory mechanism.During the backprop learning procedure, these skipping paths directly connected to more previoushidden states of HORNNs may allow the gradients to flow more easily back in time, which even-tually leads to a more effective learning of models to capture long term dependency in sequences.Therefore, this structure may help to largely alleviate the notorious problem of vanishing gradientsin the RNN learning.Obviously, HORNNs can be learned using the same BPTT algorithm as regular RNNs, except thatthe error signals at each time step need to be back-propagated to multiple feedback paths in thenetwork. As shown in Figure 3, for a 3rd-order HORNN, at each time step t, the error signal fromthe hidden layer htwill have to be back-propagated into four different paths: i) the first one back tothe input layer, xt; ii) three more feedback paths leading to three different histories in time scales,namely ht1,ht2andht3.Interestingly enough, if we use a fully-unfolded implementation for HORNNs as in Figure 2, theoverall computation complexity is comparable with regular RNNs. 
Given a whole sequence, we mayfirst simultaneously compute all hidden activities (from xttohtfor allt). Secondly, we recursivelyupdate htfor alltusing eq.(3). Finally, we use GPUs to compute all outputs together from theupdated hidden states (from httoytfor allt) based on eq.(2). The backward pass in learningcan also be implemented in the same three-step procedure. Except the recursive updates in thesecond step (this issue remains the same in regular RNNs), all remaining computation steps canbe formulated as large matrix multiplications. As a result, the computation of HORNNs can beimplemented fairly efficiently using GPUs.3.2 P OOLING FUNCTIONS FOR HORNN SAs discussed above, the shortcut paths in HORNNs may help the models to capture long-term de-pendency in sequential data. On the other hand, they may also complicate the learning in a differentway. Due to different numbers of hidden layers along various paths, the signals flowing from differ-ent paths may vary dramatically in the dynamic range. For example, in the forward pass in Figure2, three different feedback signals from different time scales, e.g. ht1,ht2andht3, flow into4Under review as a conference paper at ICLR 2017the hidden layer to compute the new hidden state ht. The dynamic range of these signals may varydramatically from case to case. The situation may get even worse in the backward pass during theBPTT learning. For example, in a 3rd-order HORNN in Figure 2, the node ht3may directly re-ceive an error signal from the node ht. In some cases, it may get so strong as to overshadow othererror signals coming from closer neighbours of ht1andht2. This may impede the learning ofHORNNs, yielding slow convergence or even poor performance.Here, we have proposed to use some pooling functions to calibrate the signals from different feed-back paths before they are used to recursively generate a new hidden state, as shown in Figure 4.In the following, we will investigate three different choices for the pooling function in Figure 4,including max-based pooling, FOFE-based pooling and gated pooling.3.2.1 M AX-BASED POOLINGMax-based pooling is a simple strategy that chooses the most responsive unit (exhibiting the largestactivation value) among various paths to transfer to the hidden layer to generate the new hiddenstate. Many biological experiments have shown that biological neuron networks tend to use a similarstrategy in learning and firing.In this case, instead of using eq.(3), we use the following formula to update the hidden state ofHORNNs:ht=fWinxt+ maxNn=1(Whnhtn)(4)where maximization is performed element-wisely to choose the maximum value in each dimensionto feed to the hidden layer to generate the new hidden state. The aim here is to capture the mostrelevant feature and map it to a fixed predefined size.The max pooling function is simple and biologically inspired. However, the max pooling strategyalso has some serious disadvantages. For example, it has no forgetting mechanism and the signalsmay get stronger and stronger. Furthermore, it loses the order information of the preceding historiessince it only choose the maximum values but it does not know where the maximum comes from.Figure 4: A pooling function is used to calibratevarious feedback paths in HORNNs.Figure 5: Gated HORNNs use learnable gates tocombine various feedback signals.3.2.2 FOFE- BASED POOLINGThe fixed-size ordinally-forgetting encoding (FOFE) method was proposed in Zhang et al. (2015)to encode any variable-length sequence of data into a fixed-size representation. 
In FOFE, a singleforgetting factor (0< < 1) is used to encode the position information in sequences basedon the idea of exponential forgetting to derive invertible fixed-size representations. In this work,we borrow this simple idea of exponential forgetting to calibrate all preceding histories using apre-selected forgetting factor as follows:ht=f Winxt+NXn=1nWhnhtn!(5)where the forgetting factor is manually pre-selected between 0< < 1. The above constantcoefficients related to play an important role in calibrating signals from different paths in both5Under review as a conference paper at ICLR 2017forward and backward passes of HORNNs since they slightly underweight the older history over therecent one in an explicit way.3.2.3 G ATED HORNN SIn this section, we follow the ideas of the learnable gates in LSTMs Hochreiter & Schmidhuber(1997) and GRUs Cho et al. (2014) as well as the recent soft-attention in Bahdanau et al. (2014).Instead of using constant coefficients derived from a forgetting factor, we may let the network auto-matically determine the combination weights based on the current state and input. In this case, wemay use sigmoid gates to compute combination weights to regulate the information flowing fromvarious feedback paths. The sigmoid gates take the current data and previous hidden state as inputto decide how to weight all of the precede hidden states. The gate function weights how the currenthidden state is generated based on all the previous time-steps of the hidden layer. This allows thenetwork to potentially remember information for a longer period of time. In a gated HORNN, thehidden state is recursively computed as follows:ht=f Winxt+NXn=1rnWhnhtn!(6)wheredenotes element-wise multiplication of two equally-sized vectors, and the gate signal rniscalculated asrn=(Wg1nxt+Wg2nhtn) (7)where()is the sigmoid function, and Wg1nandWg2ndenote two weight matrices introduced foreach gate.Note that the computation complexity of gated HORNNs is comparable with LSTMs and GRUs,significantly exceeding the other HORNN structures because of the overhead from the gate functionsin eq. (7).4 E XPERIMENTSIn this section, we evaluate the proposed higher order RNNs (HORNNs) on several language model-ing tasks. A statistical language model (LM) is a probability distribution over sequences of words innatural languages. Recently, neural networks have been successfully applied to language modelingBengio et al. (2003); Mikolov et al. (2011), yielding the state-of-the-art performance. In languagemodeling tasks, it is quite important to take advantage of the long-term dependency of natural lan-guages. Therefore, it is widely reported that RNN based LMs can outperform feedforward neuralnetworks in language modeling tasks. We have chosen two popular LM data sets, namely the PennTreebank (PTB) and English text8 sets, to compare our proposed HORNNs with traditional n-gramLMs, RNN-based LMs and the state-of-the-art performance obtained by LSTMs Graves (2013);Mikolov et al. (2014), FOFE based feedforward NNs Zhang et al. (2015) and memory networksSukhbaatar et al. (2015).In our experiments, we use the mini-batch stochastic gradient decent (SGD) algorithm to train allneural networks. The number of back-propagation through time (BPTT) steps is set to 30 for allrecurrent models. Each model update is conducted using a mini-batch of 20 subsequences, eachof which is of 30 in length. 
All model parameters (weight matrices in all layers) are randomlyinitialized based on a Gaussian distribution with zero mean and standard deviation of 0.1. A hardclipping is set to 5.0 to avoid gradient explosion during the BPTT learning. The initial learning rateis set to 0.5 and we halve the learning rate at the end of each epoch if the cross entropy functionon the validation set does not decrease. We have used the weight decay, momentum and columnnormalization Pachitariu & Sahani (2013) in our experiments to improve model generalization. Inthe FOFE-based pooling function for HORNNs, we set the forgetting factor, , to 0.6. We haveused 400 nodes in each hidden layer for the PTB data set and 500 nodes per hidden layer for theEnglish text8 set. In our experiments, we do not use the dropout regularization Zaremba et al. (2014)in all experiments since it significantly slows down the training speed, not applicable to any largercorpora.11We will soon release the code for readers to reproduce all results reported in this paper.6Under review as a conference paper at ICLR 2017Table 1: Perplexities on the PTB test set for various HORNNs are shown as a function of order (2,3, 4). Note the perplexity of a regular RNN (1st order) is 123, as reported in Mikolov et al. (2011).Models 2ndorder 3rdorder 4thorderHORNN 111 108 109Max HORNN 110 109 108FOFE HORNN 103 101 100Gated HORNN 102 100 1004.1 L ANGUAGE MODELING ON PTBThe standard Penn Treebank (PTB) corpus consists of about 1M words. The vocabulary size islimited to 10k. The preprocessing method and the way to split data into training/validation/testsets are the same as Mikolov et al. (2011). PTB is a relatively small text corpus. We first investigatevarious model configurations for the HORNNs based on PTB and then compare the best performancewith other results reported on this task.4.1.1 E FFECT OF ORDERS IN HORNN SIn the first experiment, we first investigate how the used orders in HORNNs may affect the per-formance of language models (as measured by perplexity). We have examined all different higherorder model structures proposed in this paper, including HORNNs and various pooling functionsin HORNNs. The orders of these examined models varies among 2, 3 and 4. We have listed theperformance of different models on PTB in Table 1. As we may see, we are able to achieve a sig-nificant improvement in perplexity when using higher order RNNs for language models on PTB,roughly 10-20 reduction in PPL over regular RNNs. We can see that performance may improveslightly when the order is increased from 2 to 3 but no significant gain is observed when the orderis further increased to 4. As a result, we choose the 3rd-order HORNN structure for the followingexperiments. Among all different HORNN structures, we can see that FOFE-based pooling andgated structures yield the best performance on PTB.In language modeling, both input and output layers account for the major portion of model parame-ters. Therefore, we do not significantly increase model size when we go to higher order structures.For example, in Table 1, a regular RNN contains about 8.3 millions of weights while a 3rd-orderHORNN (the same for max or FOFE pooling structures) has about 8.6 millions of weights. In com-parison, an LSTM model has about 9.3 millions of weights and a 3rd-order gated HORNN has about9.6 millions of weights.As for the training speed, most HORNN models are only slightly slower than regular RNNs. 
Forexample, one epoch of training on PTB running in one NVIDIA’s TITAN X GPU takes about 80seconds for an RNN, about 120 seconds for a 3rd-order HORNN (the same for max or FOFE poolingstructures). Similarly, training of gated HORNNs is also slightly slower than LSTMs. For example,one epoch on PTB takes about 200 seconds for an LSTM, and about 225 seconds for a 3rd-ordergates HORNN.4.1.2 M ODEL COMPARISON ON PENN TREEBANKAt last, we report the best performance of various HORNNs on the PTB test set in Table 2. We com-pare our 3rd-order HORNNs with all other models reported on this task, including RNN Mikolovet al. (2011), stack RNN Pascanu et al. (2014), deep RNN Pascanu et al. (2014), FOFE-FNN Zhanget al. (2015) and LSTM Graves (2013).2From the results in Table 2, we can see that our proposedhigher order RNN architectures significantly outperform all other baseline models reported on thistask. Both FOFE-based pooling and gated HORNNs have achieved the state-of-the-art performance,2All models in Table 2 do not use the dropout regularization, which is somehow equivalent to data augmen-tation. In Zaremba et al. (2014); Kim et al. (2015), the proposed LSTM-LMs (word level or character level)achieve lower perplexity but they both use the dropout regularization and much bigger models and it takes daysto train the models, which is not applicable to other larger tasks.7Under review as a conference paper at ICLR 2017Table 2: Perplexities on the PTB test set forvarious examined models.Models TestKN 5-gram Mikolov et al. (2011) 141RNN Mikolov et al. (2011) 123CSLM5Aransa et al. (2015) 118.08LSTM Graves (2013) 117genCNN Wang et al. (2015) 116.4Gated word&charMiyamoto & Cho (2016) 113.52E2E Mem Net Sukhbaatar et al. (2015) 111Stack RNN Pascanu et al. (2014) 110Deep RNN Pascanu et al. (2014) 107FOFE-FNN Zhang et al. (2015) 108HORNN ( 3rdorder) 108Max HORNN ( 3rdorder) 109FOFE HORNN ( 3rdorder) 101Gated HORNN ( 3rdorder) 100Table 3: Perplexities on the text8 test set forvarious models.Models TestRNN Mikolov et al. (2014) 184LSTM Mikolov et al. (2014) 156SCRNN Mikolov et al. (2014) 161E2E Mem Net Sukhbaatar et al. (2015) 147HORNN ( 3rdorder) 172Max HORNN ( 3rdorder) 163FOFE HORNN ( 3rdorder) 154Gated HORNN ( 3rdorder) 144i.e., 100 in perplexity on this task. To the best of our knowledge, this is the best reported performanceon PTB under the same training condition.4.2 L ANGUAGE MODELING ON ENGLISH TEXT8In this experiment, we will evaluate our proposed HORNNs on a much larger text corpus, namelythe English text8 data set. The text8 data set contains a preprocessed version of the first 100 millioncharacters downloaded from the Wikipedia website. We have used the same preprocessing methodas Mikolov et al. (2014) to process the data set to generate the training and test sets. We havelimited the vocabulary size to about 44k by replacing all words occurring less than 10 times in thetraining set with an <UNK>token. The text8 set is about 20 times larger than PTB in corpussize. The model training on text8 takes longer to finish. We have not tuned hyperparameters in thisdata set. We simply follow the best setting used in PTB to train all HORNNs for the text8 dataset. Meanwhile, we also follow the same learning schedule used in Mikolov et al. (2014): We firstinitialize the learning rate to 0.5 and run 5 epochs using this learning rate; After that, the learningrate is halved at the end of every epoch.Because the training is time-consuming, we have only evaluated 3rd-order HORNNs on the text8data set. 
The perplexities of various HORNNs are summarized in Table 3. We have compared ourHORNNs with all other baseline models reported on this task, including RNN Mikolov et al. (2014),LSTM Mikolov et al. (2014), SCRNN Mikolov et al. (2014) and end-to-end memory networksSukhbaatar et al. (2015). Results have shown that all HORNN models work pretty well in this dataset except the normal HORNN significantly underperforms the other three models. Among them,the gated HORNN model has achieved the best performance, i.e., 144 in perplexity on this task,which is slightly better than the recent result obtained by end-to-end memory networks (using arather complicated structure). To the best of our knowledge, this is the best performance reportedon this task.5 C ONCLUSIONSIn this paper, we have proposed some new structures for recurrent neural networks, called as higherorder RNNs (HORNNs) . In these structures, we use more memory units to keep track of more pre-ceding RNN states, which are all fed along various feedback paths to the hidden layer to generatethe feedback signals. In this way, we may enhance the model to capture long term dependency insequential data. Moreover, we have proposed to use several types of pooling functions to calibratemultiple feedback paths. Experiments have shown that the pooling technique plays a critical rolein learning higher order RNNs effectively. In this work, we have examined HORNNs for the lan-guage modeling task using two popular data sets, namely the Penn Treebank (PTB) and text8 sets.Experimental results have shown that the proposed higher order RNNs yield the state-of-the-art per-8Under review as a conference paper at ICLR 2017formance on both data sets, significantly outperforming the regular RNNs as well as the popularLSTMs. As the future work, we are going to continue to explore HORNNs for other sequentialmodeling tasks, such as speech recognition, sequence-to-sequence modelling and so on.REFERENCESWalid Aransa, Holger Schwenk, and Lo ̈ıc Barrault. Improving continuous space language modelsusing auxiliary features. In Proceedings of the 12th International Workshop on Spoken LanguageTranslation , pp. 151–158, 2015.D. Bahdanau, K. Cho, and Y . Bengio. Neural machine translation by jointly learning to align andtranslate. In arXiv:1409.0473 , 2014.Y . Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent isdifficult. IEEE Transactions on Neural Networks , 5(2):157–166, 1994.Y . Bengio, R. Ducharme, P. Vincent, and C. Janvin. A neural probabilistic language model. Journalof Machine Learning Research , 3:1137–1155, 2003.K. Cho, B. Van Merri ̈enboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y . Bengio.Learning phrase representations using RNN encoder-decoder for statistical machine translation.InProceedings of EMNLP , 2014.J. Chung, C. Gulcehre, K. Cho, and Y . Bengio. Gated feedback recurrent neural networks. InProceedings of International Conference on Machine Learning (ICML) , 2015.A. Graves. Generating sequences with recurrent neural networks. In arXiv:1308.0850 , 2013.A. Graves, A. Mohamed, and G Hinton. Speech recognition with deep recurrent neural. In Proceed-ings of ICASSP , 2013.K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. InarXiv:1512.03385 , 2015.Salah Hihi and Yoshua Bengio. Hierarchical recurrent neural networks for long-term dependencies.InProceedings of Neural Information Processing Systems (NIPS) , 1996.S. Hochreiter and J. Schmidhuber. Long short-term memory. 
Neural computation , 9(8):1735–1780,1997.Y . Kim, Y . Jernite, D. Sontag, and A. M. Rush. Character-aware neural language models. InarXiv:1508.06615 , 2015.J. Koutnik, K. Greff, F. Gomez, and J. Schmidhuber. A clockwork rnn. In Proceedings of Interna-tional Conference on Machine Learning (ICML) , 2014.C. Y . Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu. Deeply supervised nets. In arXiv:1409.5185 ,2014.M. Liwicki, A. Graves, and H. Bunke. Neural networks for handwriting recognition, Book Chap-ter, Computational intelligence paradigms in advanced pattern classification. Springer BerlinHeidelberg, 2012.T. Mikolov. Statistical Language Models based on Neural Networks . PhD thesis, Brno Universityof Technology, 2012.T. Mikolov, S. Kombrink, L. Burget, J.H. ˇCernock `y, and S. Khudanpur. Extensions of recurrentneural network language model. In Proceedings ICASSP , pp. 5528–5531, 2011.T. Mikolov, A. Joulin, S. Chopra, M. Mathieu, and M. Ranzato. Learning longer memory in recurrentneural networks. In arXiv 1412.7753 , 2014.Yasumasa Miyamoto and Kyunghyun Cho. Gated word-character recurrent language model. arXivpreprint arXiv:1606.01700 , 2016.9Under review as a conference paper at ICLR 2017M. Pachitariu and M. Sahani. Regularization and nonlinearities for neural language models: whenare they needed? In arXiv:1301.5650 , 2013.R. Pascanu, C. Gulcehre, K. Cho, and Y . Bengio. How to construct deep recurrent neural networks.InProceedings of ICLR , 2014.H. T. Siegelmann and E. D. Sontag. On the computational power of neural nets. Journal of computerand system sciences , 50.(1):132–150, 1995.R. K. Srivastava, K. Greff, and J. Schmidhuber. Highway networks. In Proceedings of NeuralInformation Processing Systems (NIPS) , 2015.S. Sukhbaatar, A. Szlam, J. Weston, and R. Fergus. End-to-end memory networks. In Proceedingsof Neural Information Processing Systems (NIPS) , 2015.M. Sundermeyer, R. Schlter, and H. Ne. LSTM neural networks for language modeling. In Pro-ceedings of Interspeech , 2012.I. Sutskever, J. Martens, and G Hinton. Generating text with recurrent neural networks. In Proceed-ings of International Conference on Machine Learning (ICML) , 2011.I. Sutskever, O. Vinyals, and Q. Le. Sequence to sequence learning with neural networks. InProceedings of Neural Information Processing Systems (NIPS) , 2014.Mingxuan Wang, Zhengdong Lu, Hang Li, Wenbin Jiang, and Qun Liu. gencnn: A convolutionalarchitecture for word sequence prediction. arXiv preprint arXiv:1503.05034 , 2015.P. J. Werbos. Backpropagation through time: what it does and how to do it. Proceedings of theIEEE , 78(10):1550–1560, 1990.Stefan Wermter. A hybrid and connectionist architecture for a scanning understanding. In Proceed-ings of the 10th European conference on Artificial intelligence , 1992.W. Zaremba, I. Sutskever, and O.l Vinyals. Recurrent neural network regularization. InarXiv:1409.2329 , 2014.S. Zhang, H. Jiang, M. Xu, J. Hou, and L. Dai. The fixed-size ordinally-forgetting encoding methodfor neural network language models. In Proceedings of ACL , pp. 495–500, 2015.10
S11_FWM4l
SkxKPDv5xl
ICLR.cc/2017/conference/-/paper393/official/review
{"title": "", "rating": "9: Top 15% of accepted papers, strong accept", "review": "The paper proposed a novel SampleRNN to directly model waveform signals and achieved better performance both in terms of objective test NLL and subjective A/B tests. \n\nAs mentioned in the discussions, the current status of the paper lack plenty of details in describing their model. Hopefully, this will be addressed in the final version.\n\nThe authors attempted to compare with wavenet model, but they didn't manage to get a model better than the baseline LSTM-RNN, which makes all the comparisons to wavenets less convincing. Hence, instead of wasting time and space comparing to wavenet, detailing the proposed model would be better. ", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
["Soroush Mehri", "Kundan Kumar", "Ishaan Gulrajani", "Rithesh Kumar", "Shubham Jain", "Jose Sotelo", "Aaron Courville", "Yoshua Bengio"]
In this paper we propose a novel model for unconditional audio generation task that generates one audio sample at a time. We show that our model which profits from combining memory-less modules, namely autoregressive multilayer perceptron, and stateful recurrent neural networks in a hierarchical structure is de facto powerful to capture the underlying sources of variations in temporal domain for very long time on three datasets of different nature. Human evaluation on the generated samples indicate that our model is preferred over competing models. We also show how each component of the model contributes to the exhibited performance.
["Speech", "Deep learning", "Unsupervised Learning", "Applications"]
https://openreview.net/forum?id=SkxKPDv5xl
https://openreview.net/pdf?id=SkxKPDv5xl
https://openreview.net/forum?id=SkxKPDv5xl&noteId=S11_FWM4l
Published as a conference paper at ICLR 2017SAMPLE RNN: A NUNCONDITIONAL END-TO-ENDNEURAL AUDIO GENERATION MODELSoroush MehriUniversity of MontrealKundan KumarIIT KanpurIshaan GulrajaniUniversity of MontrealRithesh KumarSSNCEShubham JainIIT KanpurJose SoteloUniversity of MontrealAaron CourvilleUniversity of MontrealCIFAR FellowYoshua BengioUniversity of MontrealCIFAR Senior FellowABSTRACTIn this paper we propose a novel model for unconditional audio generation basedon generating one audio sample at a time. We show that our model, which profitsfrom combining memory-less modules, namely autoregressive multilayer percep-trons, and stateful recurrent neural networks in a hierarchical structure is able tocapture underlying sources of variations in the temporal sequences over very longtime spans, on three datasets of different nature. Human evaluation on the gener-ated samples indicate that our model is preferred over competing models. We alsoshow how each component of the model contributes to the exhibited performance.1 I NTRODUCTIONAudio generation is a challenging task at the core of many problems of interest, such as text-to-speech synthesis, music synthesis and voice conversion. The particular difficulty of audio generationis that there is often a very large discrepancy between the dimensionality of the the raw audio signaland that of the effective semantic-level signal. Consider the task of speech synthesis, where we aretypically interested in generating utterances corresponding to full sentences. Even at a relatively lowsample rate of 16kHz, on average we will have 6,000 samples per word generated.1Traditionally, the high-dimensionality of raw audio signal is dealt with by first compressing it intospectral or hand-engineered features and defining the generative model over these features. However,when the generated signal is eventually decompressed into audio waveforms, the sample quality isoften degraded and requires extensive domain-expert corrective measures. This results in compli-cated signal processing pipelines that are to adapt to new tasks or domains. Here we propose a stepin the direction of replacing these handcrafted systems.In this work, we investigate the use of recurrent neural networks (RNNs) to model the dependenciesin audio data. We believe RNNs are well suited as they have been designed and are suited solutionsfor these tasks (see Graves (2013), Karpathy (2015), and Siegelmann (1999)). However, in practiceit is a known problem of these models to not scale well at such a high temporal resolution as is foundwhen generating acoustic signals one sample at a time, e.g., 16000 times per second. This is one ofthe reasons that Oord et al. (2016) profits from other neural modules such as one presented by Yu &Koltun (2015) to show extremely good performance.In this paper, an end-to-end unconditional audio synthesis model for raw waveforms is presentedwhile keeping all the computations tractable.2Since our model has different modules operatingat different clock-rates (which is in contrast to WaveNet), we have the flexibility in allocating theamount of computational resources in modeling different levels of abstraction. 
In particular, wecan potentially allocate very limited resource to the module responsible for sample level alignments1Statistics based on the average speaking rate of a set of TED talk speakers http://sixminutes.dlugan.com/speaking-rate/2Code https://github.com/soroushmehr/sampleRNN_ICLR2017 and samples https://soundcloud.com/samplernn/sets1Published as a conference paper at ICLR 2017operating at the clock-rate equivalent to sample-rate of the audio, while allocating more resourcesin modeling dependencies which vary very slowly in audio, for example identity of phoneme beingspoken. This advantage makes our model arbitrarily flexible in handling sequential dependencies atmultiple levels of abstraction.Hence, our contribution is threefold:1. We present a novel method that utilizes RNNs at different scales to model longer term de-pendencies in audio waveforms while training on short sequences which results in memoryefficiency during training.2. We extensively explore and compare variants of models achieving the above effect.3. We study and empirically evaluate the impact of different components of our model onthree audio datasets. Human evaluation also has been conducted to test these generativemodels.2 S AMPLE RNN M ODELIn this paper we propose SampleRNN (shown in Fig. 1), a density model for audio waveforms.SampleRNN models the probability of a sequence of waveform samples X=fx1;x2;:::;xTg(a random variable over input data sequences) as the product of the probabilities of each sampleconditioned on all previous samples:p(X) =T1Yi=0p(xi+1jx1;:::;xi) (1)RNNs are commonly used to model sequential data which can be formulated as:ht=H(ht1;xi=t) (2)p(xi+1jx1;:::;xi) =Softmax (MLP (ht)) (3)withHbeing one of the known memory cells, Gated Recurrent Units (GRUs) (Chung et al., 2014),Long Short Term Memory Units (LSTMs) (Hochreiter & Schmidhuber, 1997), or their deep varia-tions (Section 3). However, raw audio signals are challenging to model because they contain struc-ture at very different scales: correlations exist between neighboring samples as well as between onesthousands of samples apart.SampleRNN helps to address this challenge by using a hierarchy of modules, each operating at adifferent temporal resolution. The lowest module processes individual samples, and each highermodule operates on an increasingly longer timescale and a lower temporal resolution. Each moduleconditions the module below it, with the lowest module outputting sample-level predictions. Theentire hierarchy is trained jointly end-to-end by backpropagation.2.1 F RAME -LEVEL MODULESRather than operating on individual samples, the higher-level modules in SampleRNN operate onnon-overlapping frames ofFS(k)(“Frame Size”) samples at the kthlevel up in the hierarchy at atime (frames denoted by f(k)). Each frame-level module is a deep RNN which summarizes thehistory of its inputs into a conditioning vector for the next module downward.The variable number of frames we condition upon up to timestep t1is expressed by a fixed lengthhidden state or memory h(k)twheretis related to clock rate at that tier. The RNN makes a memoryupdate at timestep tas a function of the previous memory h(k)t1and an input inp(k)t. This input fortop tierk=Kis simply the input frame. For intermediate tiers ( 1<k <K ) this input is a linearcombination of conditioning vector from higher tier and current input frame. See Eqs. 
4–5.Because different modules operate at different temporal resolutions, we need to upsample eachvectorcat the output of a module into a series of r(k)vectors (where r(k)is the ratio between thetemporal resolutions of the modules) before feeding it into the input of the next module downward(Eq. 6). We do this with a set of r(k)separate linear projections.2Published as a conference paper at ICLR 2017Figure 1: Snapshot of the unrolled model at timestep iwithK= 3 tiers. As a simplification onlyone RNN and up-sampling ratio r= 4is used for all tiers.Here we are formalizing the frame-level module in tier k. Note that following equations are exclusiveto tierkand timestep tfor that specific tier. To increase the readability, unless necessary superscript(k)is not shown for t,inp(k),W(k)x,h(k),H(k),W(k)j, andr(k).inpt=(Wxf(k)t+c(k+1)t; 1<k<Kf(k=K)t ; k=K(4)ht=H(ht1;inpt) (5)c(k)(t1)r+j=Wjht; 1jr (6)Our approach of upsampling with r(k)linear projections is exactly equivalent to upsampling byadding zeros and then applying a linear convolution. This is sometimes called “perforated” upsam-pling in the context of convolutional neural networks (CNNs). It was first demonstrated to workwell in Dosovitskiy et al. (2016) and is a fairly common upsampling technique.2.2 S AMPLE -LEVEL MODULEThe lowest module (tier k= 1; Eqs. 7–9) in the SampleRNN hierarchy outputs a distribution overa samplexi+1, conditioned on the FS(1)preceding samples as well as a vector c(k=2)i from thenext higher module which encodes information about the sequence prior to that frame. As FS(1)isusually a small value and correlations in nearby samples are easy to model by a simple memorylessmodule, we implement it with a multilayer perceptron (MLP) rather than RNN which slightly speedsup the training. Assuming eirepresentsxiafter passing through embedding layer (section 2.2.1),conditional distribution in Eq. 1 can be achieved by following and for further clarity two consecutivesample-level frames are shown. In addition, Wxin Eq. 8 is simply used to linearly combine a frameand conditioning vector from above.f(1)i1=flatten ([eiFS(1);:::;ei1]) (7)f(1)i=flatten ([eiFS(1)+1;:::;ei])inp(1)i=W(1)xf(1)i+c(2)i (8)p(xi+1jx1;:::;xi) =Softmax (MLP (inp(1)i)) (9)We use a Softmax because we found that better results were obtained by discretizing the audiosignals (also see van den Oord et al. (2016)) and outputting a Multinoulli distribution rather thanusing a Gaussian or Gaussian mixture to represent the conditional density of the original real-valuedsignal. When processing an audio sequence, the MLP is convolved over the sequence, processing3Published as a conference paper at ICLR 2017each window of FS(1)samples and predicting the next sample. At generation time, the MLP is runrepeatedly to generate one sample at a time. Table 1 shows a considerable gap between the baselinemodel RNN and this model, suggesting that the proposed hierarchically structured architecture ofSampleRNN makes a big difference.2.2.1 O UTPUT QUANTIZATIONThe sample-level module models its output as a q-way discrete distribution over possible quantizedvalues ofxi(that is, the output layer of the MLP is a q-way Softmax).To demonstrate the importance of a discrete output distribution, we apply the same architecture onreal-valued data by replacing the q-way Softmax with a Gaussian Mixture Models (GMM) outputdistribution. Table 2 shows that our model outperforms an RNN baseline even when both modelsuse real-valued outputs. 
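A minimal sketch of the two lowest tiers defined by Eqs. 4–9 above may make the hierarchy concrete. This is illustrative PyTorch, not the authors' released Theano code; the frame size, hidden width, and MLP depth are placeholder values rather than the reported settings (FS(1) = FS(2) = 2, FS(3) = 8, 1024 hidden units), and a single frame size is reused for both tiers for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

Q, FS, H, E = 256, 16, 1024, 256          # quantization levels, frame size, hidden width, embedding size

frame_in   = nn.Linear(FS, H)             # W_x applied to one input frame of FS raw samples (Eq. 4, top tier)
frame_gru  = nn.GRUCell(H, H)             # h_t = H(h_{t-1}, inp_t)                          (Eq. 5)
upsample   = nn.ModuleList([nn.Linear(H, H) for _ in range(FS)])   # r separate projections W_j (Eq. 6)
embed      = nn.Embedding(Q, E)           # e_i: embedding of the quantized sample x_i
sample_in  = nn.Linear(FS * E, H)         # W_x^(1) on the flattened embedding window        (Eq. 8)
sample_mlp = nn.Sequential(nn.Linear(H, H), nn.ReLU(), nn.Linear(H, Q))                      # (Eq. 9)

def frame_step(frame, h_prev=None):
    # frame: (B, FS) real-valued samples; returns FS conditioning vectors plus the new GRU state
    h = frame_gru(frame_in(frame), h_prev)
    return [W_j(h) for W_j in upsample], h

def sample_step(prev_q, cond):
    # prev_q: (B, FS) previous quantized samples; cond: (B, H) conditioning vector for this position
    e = embed(prev_q).flatten(1)
    logits = sample_mlp(sample_in(e) + cond)
    return F.softmax(logits, dim=-1)      # 256-way distribution over the next sample

At generation time, frame_step would run once per frame while sample_step runs once per sample, consuming one conditioning vector per position; intermediate tiers would additionally add the conditioning vector coming from the tier above (the first case of Eq. 4).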
However, samples from the real-valued model are almost indistinguishablefrom random noise.In this work we use linear quantization with q= 256 , corresponding to a per-sample bit depth of 8.Unintuitively, we realized that even linearly decreasing the bit depth (resolution of each audio sam-ple) from 16 to 8 can ease the optimization procedure while generated samples still have reasonablequality and are artifact-free.In addition, early on we noticed that the model can achieve better performance and generation qualitywhen we embed the quantized input values before passing them through the sample-level MLP (seeTable 4). The embedding steps maps each of the qdiscrete values to a real-valued vector embedding.However, real-valued raw samples are still used as input to the higher modules.2.2.2 C ONDITIONALLY INDEPENDENT SAMPLE OUTPUTSTo demonstrate the importance of a sample-level autoregressive module, we try replacing it with“Multi-Softmax” (see Table 4), where the prediction of each sample xidepends only on the con-ditioning vector cfrom Eq. 9. In this configuration, the model outputs an entire frame ofFS(1)samples at a time, modeling all samples in a frame as conditionally independent of each other. Wefind that this Multi-Softmax model (which lacks a sample-level autoregressive module) scores sig-nificantly worse in terms of log-likelihood and fails to generate convincing samples. This suggeststhat modeling the joint distribution of the acoustic samples inside each frame is very important inorder to obtain good acoustic generation. We found this to be true even when the frame size is re-duced, with best results always with a frame size of 1, i.e., generating only one acoustic sample at atime.2.3 T RUNCATED BPTTTraining recurrent neural networks on long sequences can be very computationally expensive. Oordet al. (2016) avoid this problem by using a stack of dilated convolutions instead of any recurrent con-nections. However, when they can be trained efficiently, recurrent networks have been shown to bevery powerful and expressive sequence models. We enable efficient training of our recurrent modelusing truncated backpropagation through time , splitting each sequence into short subsequences andpropagating gradients only to the beginning of each subsequence. We experiment with differentsubsequence lengths and demonstrate that we are able to train our networks, which model verylong-term dependencies, despite backpropagating through relatively short subsequences.Table 3 shows that by increasing the subsequence length, performance substantially increases along-side with train-time memory usage and convergence time. Yet it is noteworthy that our best modelshave been trained on subsequences of length 512, which corresponds to 32 milliseconds, a smallfraction of the length of a single a phoneme of human speech while generated samples exhibitlonger word-like structures.Despite the aforementioned fact, this generative model can mimic the existing long-term structureof the data which results in more natural and coherent samples that is preferred by human listeners.(More on this in Sections 3.2–3.3.) 
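The truncated-BPTT scheme of Section 2.3 amounts to carrying the RNN state across consecutive subsequences while stopping gradients at each subsequence boundary. The sketch below is illustrative PyTorch; model, loss_fn, and the data layout are placeholders rather than the authors' code, and it uses the 512-sample chunks and hard gradient clipping to [-1, 1] reported in the paper.

import torch

SUBSEQ = 512                                   # roughly 32 ms of audio at 16 kHz

def tbptt_epoch(model, optimizer, loss_fn, sequences):
    for seq in sequences:                      # seq: (B, T) tensor of quantized samples
        hidden = None                          # model-defined initial state
        for start in range(0, seq.size(1) - SUBSEQ, SUBSEQ):
            chunk  = seq[:, start:start + SUBSEQ]
            target = seq[:, start + 1:start + SUBSEQ + 1]
            logits, hidden = model(chunk, hidden)
            loss = loss_fn(logits.reshape(-1, logits.size(-1)), target.reshape(-1))
            optimizer.zero_grad()
            loss.backward()
            torch.nn.utils.clip_grad_value_(model.parameters(), 1.0)   # hard clip to [-1, 1]
            optimizer.step()
            # keep the state across subsequences (stateful RNN), but cut the gradient path here
            hidden = hidden.detach() if torch.is_tensor(hidden) else tuple(h.detach() for h in hidden)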
This is due to the fast updates from TBPTT and specializedframe-level modules (Section 2.1) with top tiers designed to model a lower resolution of signalwhile leaving the process of filling the details to lower tiers.4Published as a conference paper at ICLR 20173 E XPERIMENTS AND RESULTSIn this section we are introducing three datasets which have been chosen to evaluate the proposedarchitecture for modeling raw acoustic sequences. The description of each dataset and their prepro-cessing is as follows:Blizzard which is a dataset presented by Prahallad et al. (2013) for speech synthesis task,contains 315 hours of a single female voice actor in English; however, for our experimentswe are using only 20.5 hours. The training/validation/test split is 86%-7%-7%.Onomatopoeia3, a relatively small dataset with 6,738 sequences adding up to 3.5 hours, ishuman vocal sounds like grunting, screaming, panting, heavy breathing, and coughing. Di-versity of sound type and the fact that these sounds were recorded from 51 actors and manycategories makes it a challenging task. To add to that, this data is extremely unbalanced.The training/validation/test split is 92%-4%-4%.Music dataset is the collection of all 32 Beethoven’s piano sonatas publicly available onhttps://archive.org/ amounting to 10 hours of non-vocal audio. The training/val-idation/test split is 88%-6%-6%.See Fig. 2 for a visual demonstration of examples from datasets and generated samples. For allthe datasets we are using a 16 kHz sample rate and 16 bit depth. For the Blizzard and Musicdatasets, preprocessing simply amounts to chunking the long audio files into 8 seconds long se-quences on which we will perform truncated backpropagation through time. Each sequence in theOnomatopoeia dataset is few seconds long, ranging from 1 to 11 seconds. To train the models onthis dataset, zero-padding has been applied to make all the sequences in a mini-batch have the samelength and corresponding cost values (for the predictions over the added 0s) would be ignored whencomputing the gradients.We particularly explored two gated variants of RNNs—GRUs and LSTMs. For the case of LSTMs,the forget gate bias is initialized with a large positive value of 3, as recommended by Zaremba (2015)and Gers (2001), which has been shown to be beneficial for learning long-term dependencies.As for models that take real-valued input, e.g. the RNN-GMM and SampleRNN-GMM (with 4components), normalization is applied per audio sample with the global mean and standard deviationobtained from the train split. For most of our experiments where the model demands discrete input,binning was applied per audio sample.All the models have been trained with teacher forcing and stochastic gradient decent (mini-batch size128) to minimize the Negative Log-Likelihood (NLL) in bits per dimension (per audio sample). Gra-dients were hard-clipped to remain in [-1, 1] range. Update rules from the Adam optimizer (Kingma& Ba, 2014) ( 1= 0:9,2= 0:999, and= 1e8) with an initial learning rate of 0.001 wasused to adjust the parameters. For training each model, random search over hyper-parameter val-ues (Bergstra & Bengio, 2012) was conducted. The initial RNN state of all the RNN-based modelswas always learnable. Weight Normalization (Salimans & Kingma, 2016) has been used for all thelinear layers in the model (except for the embedding layer) to accelerate the training procedure. Sizeof the embedding layer was 256 and initialized by standard normal distribution. 
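Two conventions used in the training setup above are easy to pin down in a few lines: linear quantization of a waveform in [-1, 1] into q = 256 bins (8 bits per sample), and the NLL metric reported in bits per audio sample, i.e. the cross-entropy in nats divided by ln 2. The snippet is illustrative NumPy; the exact bin edges and scaling used by the authors are assumptions.

import numpy as np

Q = 256

def linear_quantize(x, q=Q):
    # Map a waveform in [-1, 1] to integer bins {0, ..., q-1}.
    x = np.clip(x, -1.0, 1.0)
    return np.minimum(((x + 1.0) / 2.0 * q).astype(np.int64), q - 1)

def nll_bits_per_sample(probs, targets):
    # probs: (N, q) predicted distributions; targets: (N,) integer bin indices.
    nats = -np.log(probs[np.arange(len(targets)), targets] + 1e-12)
    return float(nats.mean() / np.log(2.0))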
Orthogonal weightmatrices used for hidden-to-hidden connections and other weight matrices initialized similar to Heet al. (2015). In final model, we found GRU to work best (slightly better than LSTM). 1024 was thethe number of hidden units for all GRUs (1 layer per tier for 3-tier and 3 layer for 2-tier model) andMLPs (3 fully connected layers with ReLU activation with output dimension being 1024 for firsttwo layers and 256 for the final layer before softmax). Also FS(1)=FS(2)= 2 andFS(3)= 8were found to result in lowest NLL.3.1 W AVENETRE-IMPLEMENTATIONWe implemented the WaveNet architecture as described in Oord et al. (2016). Ideally, we wouldhave liked to replicate their model exactly but owing to missing details of architecture and hyper-parameters, as well as limited compute power at our disposal, we made our own design choices sothat the model would fit on a single GPU while having a receptive field of around 250 milliseconds,3Courtesy of Ubisoft5Published as a conference paper at ICLR 2017Real dataBlizzard Onomatopoeia MusicSampleRNN(2-tier)SampleRNN(3-tier) Real dataSampleRNN(2-tier)SampleRNN(3-tier)Figure 2: Examples from the datasets compared to samples from our models. In the first 3 rows, 2seconds of audio are shown. In the bottom 3 rows, 100 milliseconds of audio are shown. Rows 1and 4 are ground truth from which one can see how the datasets look different and have complexstructure in low resolution which the frame-level component of the SampleRNN is designed tocapture. Samples also to some extent mimic the same global structure. At the same time, zoomed-insamples of our model shows that it can perfectly resemble the high resolution structure present inthe data as well.Table 1: Test NLL in bits for three presented datasets.Model Blizzard Onomatopoeia MusicRNN (Eq. 2) 1.434 2.034 1.410WaveNet (re-impl.) 1.480 2.285 1.464SampleRNN (2-tier) 1.392 2.026 1.076SampleRNN (3-tier) 1.387 1.990 1.159Table 2: Average NLL on Blizzard test set for real-valued models.Model Average Test NLLRNN-GMM -2.415SampleRNN-GMM (2-tier) -2.7826Published as a conference paper at ICLR 2017Table 3: Effect of subsequence length on NLL (bits per audio sample) computed on the Blizzardvalidation set.Subsequence Length 32 64 128 256 512NLL Validation 1.575 1.468 1.412 1.391 1.364Table 4: Test (validation) set NLL (bits per audio sample) for Blizzard. Variants of SampleRNN areprovided to compare the contribution of each component in performance.Model NLL Test (Validation)SampleRNN (2-tier) 1.392 (1.369)Without Embedding 1.566 (1.539)Multi-Softmax 1.685 (1.656)while having a reasonable number of updates per unit time. Although our model is very similar toWaveNet, the design choices, e.g. number of convolution filters in each dilated convolution layer,length of target sequence to train on simultaneously (one can train with a single target with all sam-ples in the receptive field as input or with target sequence length of size T with input of size receptivefield + T - 1), batch-size, etc. might make our implementation different from what the authors havedone in the original WaveNet model. Hence, we note here that although we did our best at exactlyreproducing their results, there would very likely be different choice of hyper-parameters betweenour implementation and the one of the authors.For our WaveNet implementation, we have used 4 dilated convolution blocks each having 10 dilatedconvolution layers with dilation 1, 2, 4, 8 up to 512. Hence, our network has a receptive fieldof 4092 acoustic samples i.e. 
the parameters of multinomial distribution of sample at time stept,p(xi) =f(xi1;xi2;:::xi4092)whereis model parameters. We train on target sequencelength of 1600 and use batch size of 8. Each dilated convolution filter has size 2 and the numberof output channels is 64 for each dilated convolutional layer (128 filters in total due to gated non-linearity). We trained this model using Adam optimizer with a fixed global learning rate of 0.001for Blizzard dataset and 0.0001 for Onomatopoeia and Music datasets. We trained these modelsfor about one week on a GeForce GTX TITAN X. We dropped the learning rate in the Blizzardexperiment to 0.0001 after around 3 days of training.3.2 H UMAN EVALUATIONApart from reporting NLL, we conducted AB preference tests for random samples from four modelstrained on the Blizzard dataset. For unconditional generation of speech which at best sounds likemumbling, this type of test is the one which is more suited. Competing models were the RNN,SampleRNN (2-tier), SampleRNN (3-tier), and our implementation of WaveNet. The rest of themodels were excluded as the quality of samples were definitely lower and also to keep the numberof pair comparison tests manageable. We will release the samples that have been used in this testtoo.All the samples were set to have the same volume. Every user is then shown a set of twenty pairsof samples with one random pair at a time. Each pair had samples from two different models. Thehuman evaluator is asked to listen to the samples and had the option of choosing between the twomodel or choosing not to prefer any of them. Hence, we have a quantification of preference betweenevery pair of models. We used the online tool made publicly available by Jillings et al. (2015).Results in Fig. 3 clearly points out that SampleRNN (3-tier) is a winner by a huge margin in termsof preference by human raters, then SampleRNN (2-tier) and afterward two other models, whichmatches with the performance comparison in Table 1.The same evaluation was conducted for Music dataset except for an additional filtering process ofsamples. Specific to only this dataset, we observed that a batch of generated samples from competingmodels (this time restricted to RNN, SampleRNN (2-tier), and SampleRNN (3-tier)) were eithermusic-like or random noise. For all these models we only considered random samples that were notrandom noise. Fig. 4 is dedicated to result of human evaluation on Music dataset.7Published as a conference paper at ICLR 201779.0 18.0 3.0020406080100Preference percentage2-tierRNNNo-Pref.84.2 8.9 6.90204060801003-tierRNN No-Pref.22.4 63.3 14.3020406080100WaveN.RNNNo-Pref.84.8 10.1 5.1020406080100Preference percentage3-tier2-tierNo-Pref.60.2 32.0 7.80204060801002-tierWaveN.No-Pref.89.0 7.0 4.00204060801003-tierWaveN.No-Pref.Figure 3: Pairwise comparison of 4 best models based on the votes from listeners conducted onsamples generated from models trained on Blizzard dataset.85.1 2.3 12.60204060801002-tierRNNNo-Pref.83.5 4.7 11.80204060801003-tierRNNNo-Pref.32.6 57.0 10.5020406080100Preference percentage3-tier2-tierNo-Pref.Figure 4: Pairwise comparison of 3 best models based on the votes from listeners conducted onsamples generated from models trained on Music dataset.3.3 Q UANTIFYING INFORMATION RETENTIONFor the last experiment we are interested in measuring the memory span of the model. 
We trainedour model, SampleRNN (3-tier), with best hyper-parameters on a dataset of 2 speakers readingaudio books, one male and one female, respectively, with mean fundamental frequency of 125.3and 201.8Hz. Each speaker has roughly 10 hours of audio in the dataset that has been preprocessedsimilar to Blizzard. We observed that it learned to stay consistent generating samples from the samespeaker without having any knowledge about the speaker ID or any other conditioning information.This effect is more apparent here in comparison to the unbalanced Onomatopoeia that sometimesmixes two different categories of sounds.Another experiment was conducted to test the effect of memory and study the effective memoryhorizon. We inject 1 second of silence in the middle of sampling procedure in order to see if itwill remember to generate from the same speaker or not. Initially when sampling we let the modelgenerate 2 seconds of audio as it normally do. From 2 to 3 seconds instead of feeding back thegenerated sample at that timestep a silent token (zero amplitude) would be fed. From 3 to 5 secondsagain we sample normally; feeding back the generated token.We did classification based on mean fundamental frequency of speakers for the first and last 2seconds. In 83% of samples SampleRNN generated from the same person in two separate segments.8Published as a conference paper at ICLR 2017This is in contrast to a model with fixed past window like WaveNet where injecting 16000 silenttokens (3.3 times the receptive field size) is equivalent to generating from scratch which has 50%chance (assuming each 2-second segment is coherent and not a mixed sound of two speakers).4 R ELATED WORKOur work is related to earlier work on auto-regressive multi-layer neural networks, startingwith Bengio & Bengio (1999), then NADE (Larochelle & Murray, 2011) and more recently Pix-elRNN (van den Oord et al., 2016). Similar to how they tractably model joint distribution over unitsof the data (e.g. words in sentences, pixels in images, etc.) through an auto-regressive decomposi-tion, we transform the joint distribution of acoustic samples using Eq. 1.The idea of having part of the model running at different clock rates is related to multi-scaleRNNs (Schmidhuber, 1992; El Hihi & Bengio, 1995; Koutnik et al., 2014; Sordoni et al., 2015;Serban et al., 2016).Chung et al. (2015) also attempt to model raw audio waveforms which is in contrast to traditionalapproaches which use spectral features as in Tokuda et al. (2013), Bertrand et al. (2008), and Leeet al. (2009).Our work is closely related to WaveNet (Oord et al., 2016), which is why we have made the abovecomparisons, and makes it interesting to compare the effect of adding higher-level RNN stagesworking at a low resolution. Similar to this work, our models generate one acoustic sample at a timeconditioned on all previously generated samples. We also share the preprocessing step of quantizingthe acoustics into bins. Unlike this model, we have different modules in our models running atdifferent clock-rates. In contrast to WaveNets, we mitigate the problem of long-term dependencywith hierarchical structure and using stateful RNNs, i.e. 
we will always propagate hidden states tothe next training sequence although the gradient of the loss will not take into account the samples inprevious training sequence.5 D ISCUSSION AND CONCLUSIONWe propose a novel model that can address unconditional audio generation in the raw acousticdomain, which typically has been done until recently with hand-crafted features. We are able toshow that a hierarchy of time scales and frequent updates will help to overcome the problem ofmodeling extremely high-resolution temporal data. That allows us, for this particular application, tolearn the data manifold directly from audio samples. We show that this model can generalize welland generate samples on three datasets that are different in nature. We also show that the samplesgenerated by this model are preferred by human raters.Success in this application, with a general-purpose solution as proposed here, opens up room formore improvement when specific domain knowledge is applied. This method, however, proposedwith audio generation application in mind, can easily be adapted to other tasks that require learningthe representation of sequential data with high temporal resolution and long-range complex struc-ture.ACKNOWLEDGMENTSThe authors would like to thank Jo ̃ao Felipe Santos and Kyle Kastner for insightful comments anddiscussion. We would like to thank the Theano Development Team (2016)4and MILA staff. Weacknowledge the support of the following agencies for research funding and computing support:NSERC, Calcul Qu ́ebec, Compute Canada, the Canada Research Chairs and CIFAR. Jose Soteloalso thanks the Consejo Nacional de Ciencia y Tecnolog ́ıa (CONACyT) as well as the Secretar ́ıa deEducaci ́on P ́ublica (SEP) for their support. This work was a collaboration with Ubisoft.4http://deeplearning.net/software/theano/9Published as a conference paper at ICLR 2017REFERENCESYoshua Bengio and Samy Bengio. Modeling high-dimensional discrete data with multi-layer neuralnetworks. In NIPS , volume 99, pp. 400–406, 1999.James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. Journal ofMachine Learning Research , 13(Feb):281–305, 2012.Alexander Bertrand, Kris Demuynck, Veronique Stouten, et al. Unsupervised learning of auditoryfilter banks using non-negative matrix factorisation. In 2008 IEEE International Conference onAcoustics, Speech and Signal Processing , pp. 4713–4716. IEEE, 2008.Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation ofgated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 , 2014.Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Ben-gio. A recurrent latent variable model for sequential data. In Advances in neural informationprocessing systems , pp. 2980–2988, 2015.Alexey Dosovitskiy, Jost Springenberg, Maxim Tatarchenko, and Thomas Brox. Learning to gener-ate chairs, tables and cars with convolutional networks. 2016.Salah El Hihi and Yoshua Bengio. Hierarchical recurrent neural networks for long-term dependen-cies. In NIPS , volume 400, pp. 409. Citeseer, 1995.Felix Gers. Long short-term memory in recurrent neural networks . PhD thesis, Universit ̈at Han-nover, 2001.Alex Graves. Generating sequences with recurrent neural networks. arXiv preprintarXiv:1308.0850 , 2013.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassinghuman-level performance on imagenet classification. 
In Proceedings of the IEEE InternationalConference on Computer Vision , pp. 1026–1034, 2015.Sepp Hochreiter and J ̈urgen Schmidhuber. Long short-term memory. Neural computation , 9(8):1735–1780, 1997.Nicholas Jillings, David Moffat, Brecht De Man, and Joshua D. Reiss. Web Audio Evaluation Tool:A browser-based listening test environment. In 12th Sound and Music Computing Conference ,July 2015.Andrej Karpathy. The unreasonable effectiveness of recurrent neural networks. Andrej Karpathyblog, 2015.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980 , 2014.Jan Koutnik, Klaus Greff, Faustino Gomez, and Juergen Schmidhuber. A clockwork rnn. arXivpreprint arXiv:1402.3511 , 2014.Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In AISTATS ,volume 1, pp. 2, 2011.Honglak Lee, Peter Pham, Yan Largman, and Andrew Y Ng. Unsupervised feature learning foraudio classification using convolutional deep belief networks. In Advances in neural informationprocessing systems , pp. 1096–1104, 2009.Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves,Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model forraw audio. arXiv preprint arXiv:1609.03499 , 2016.Kishore Prahallad, Anandaswarup Vadapalli, Naresh Elluru, G Mantena, B Pulugundla,P Bhaskararao, HA Murthy, S King, V Karaiskos, and AW Black. The blizzard challenge 2013–indian language task. In Blizzard Challenge Workshop 2013 , 2013.10Published as a conference paper at ICLR 2017Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to ac-celerate training of deep neural networks. arXiv preprint arXiv:1602.07868 , 2016.J ̈urgen Schmidhuber. Learning complex, extended sequences using the principle of history com-pression. Neural Computation , 4(2):234–242, 1992.Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. Buildingend-to-end dialogue systems using generative hierarchical neural network models. In Proceedingsof the 30th AAAI Conference on Artificial Intelligence (AAAI-16) , 2016.Hava T Siegelmann. Computation beyond the turing limit. In Neural Networks and Analog Compu-tation , pp. 153–164. Springer, 1999.Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, andJian-Yun Nie. A hierarchical recurrent encoder-decoder for generative context-aware query sug-gestion. In Proceedings of the 24th ACM International on Conference on Information and Knowl-edge Management , pp. 553–562. ACM, 2015.Theano Development Team. Theano: A Python framework for fast computation of mathematicalexpressions. arXiv e-prints , abs/1605.02688, May 2016. URL http://arxiv.org/abs/1605.02688 .Keiichi Tokuda, Yoshihiko Nankaku, Tomoki Toda, Heiga Zen, Junichi Yamagishi, and KeiichiroOura. Speech synthesis based on hidden markov models. Proceedings of the IEEE , 101(5):1234–1252, 2013.Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks.arXiv preprint arXiv:1601.06759 , 2016.Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. arXivpreprint arXiv:1511.07122 , 2015.Wojciech Zaremba. An empirical exploration of recurrent network architectures. 2015.APPENDIX AAMODEL VARIANT : SAMPLE RNN-W AVENETHYBRIDSampleRNN-WaveNet model has two modules operating at two different clock-rate. 
The slowerclock-rate module (frame-level module) sees one frame (each of which has size FS) at a time whilethe faster clock-rate component(sample-level component) sees one acoustic sample at a time i.e. theratio of clock-rates for these two modules would be the size of a single frame. Number of sequentialsteps for frame-level component would be FStimes lower. We repeat the output of each step offrame-level component FStimes so that number of time-steps for output of both the componentsmatch. The output of both these modules are concatenated for every time-step which is furtheroperated by non-linearities for every time-step independently before generating the final output.In our experiments, we kept size of a single frame ( FS) to be 128. We tried two variants of thismodel: 1. fully convolutional WaveNet and 2. RNN-WaveNet. In fully convolutional WaveNet,both modules described above are implemented using dilated convolutions as described in originalWaveNet model. In RNN-WaveNet, we use high capacity RNN in the frame-level module to modelthe dependency between frames. The sample-level WaveNet in RNN-WaveNet has receptive fieldof size 509 samples from the past.Although these models are designed with the intention of combining the two models to harness theirbest features, preliminary experiments show that this variant is not meeting our expectations at themoment which directs us to a possible future work.11
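As a quick sanity check on the receptive-field figure quoted in Section 3.1 for the WaveNet re-implementation (4 blocks of 10 dilated convolutions with filter size 2 and dilations 1, 2, 4, ..., 512): with filter size 2, each layer with dilation d extends the context by d samples, which reproduces the 4092 past samples stated in the text. A few lines of Python, added here only as an illustrative check:

dilations = [2 ** i for i in range(10)]    # 1, 2, 4, ..., 512
blocks = 4
receptive_past = blocks * sum(dilations)   # filter size 2: each layer adds d past samples
print(receptive_past)                      # 4092, i.e. p(x_i | x_{i-1}, ..., x_{i-4092})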
SJeCTJfVx
SkxKPDv5xl
ICLR.cc/2017/conference/-/paper393/official/review
{"title": "", "rating": "8: Top 50% of accepted papers, clear accept", "review": "The paper introduces SampleRNN, a hierarchical recurrent neural network model of raw audio. The model is trained end-to-end and evaluated using log-likelihood and by human judgement of unconditional samples, on three different datasets covering speech and music. This evaluation shows the proposed model to compare favourably to the baselines.\n\nIt is shown that the subsequence length used for truncated BPTT affects performance significantly, but interestingly, a subsequence length of 512 samples (~32 ms) is sufficient to get good results, even though the features of the data that are modelled span much longer timescales. This is an interesting and somewhat unintuitive result that I think warrants a bit more discussion.\n\nThe authors have attempted to reimplement WaveNet, an alternative model of raw audio that is fully convolutional. They were unable to reproduce the exact model architecture from the original paper, but have attempted to build an instance of the model with a receptive field of about 250ms that could be trained in a reasonable time using their computational resources, which is commendable.\n\nThe architecture of the Wavenet model is described in detail, but it found it challenging to find the same details for the proposed SampleRNN architecture (e.g. which value of \"r\" is used for the different tiers, how many units per layer, ...). I think a comparison in terms of computational cost, training time and number of parameters would also be very informative.\n\nSurprisingly, Table 1 shows a vanilla RNN (LSTM) substantially outperforming this model in terms of likelihood, which is quite suspicious as LSTMs tend to have effective receptive fields of a few hundred timesteps at best. One would expect the much larger receptive field of the Wavenet model to be reflected in the likelihood scores to some extent. Similarly, Figure 3 shows the vanilla RNN outperforming the Wavenet reimplementation in human evaluation on the Blizzard dataset. This raises questions about the implementation of the latter. Some discussion about this result and whether the authors expected it or not would be very welcome.\n\nTable 1 and Figure 4 also show the 2-tier SampleRNN outperforming the 3-tier model in terms of likelihood and human rating respectively, which is very counterintuitive as one would expect longer-range temporal correlations to be even more relevant for music than for speech. This is not discussed at all, I think it would be useful to comment on why this could be happening.\n\nOverall, this an interesting attempt to tackle modelling very long sequences with long-range temporal correlations and the results are quite convincing, even if the same can't always be said of the comparison with the baselines. It would be interesting to see how the model performs for conditional generation, seeing as it can be more easily be objectively compared to models like Wavenet in that domain.\n\n\n\nOther remarks:\n\n- upsampling the output of the models is done with r separate linear projections. This choice of upsampling method is not motivated. Why not just use linear interpolation or nearest neighbour upsampling? What is the advantage of learning this operation? Don't the r linear projections end up learning largely the same thing, give or take some noise?\n\n- The third paragraph of Section 2.1.1 indicates that 8-bit linear PCM was used. 
This is in contrast to Wavenet, for which an 8-bit mu-law encoding was used, and this supposedly improves the audio fidelity of the samples. Did you try this as well?\n\n- Section 2.1 mentions the discretisation of the input and the use of a softmax to model this discretised input, without any reference to prior work that made the same observation. A reference is given in 2.1.1, but it should probably be moved up a bit to avoid giving the impression that this is a novel observation.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
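On the reviewer's question about upsampling with r separate linear projections (Eq. 6 of the paper) versus simpler schemes: the learned variant gives each of the r sample positions inside a frame its own view of the frame-level state, whereas nearest-neighbour upsampling repeats one vector. A minimal illustrative PyTorch sketch (sizes are assumptions, not the paper's settings):

import torch
import torch.nn as nn

H, r = 1024, 16
h = torch.randn(8, H)                                        # frame-level states for a batch of 8

W = nn.ModuleList([nn.Linear(H, H) for _ in range(r)])       # Eq. 6 style: r distinct projections W_j h
cond_learned = torch.stack([W_j(h) for W_j in W], dim=1)     # (8, r, H), one vector per position

cond_repeat = h.unsqueeze(1).expand(-1, r, -1)               # nearest-neighbour: same vector r times

Whether the r learned projections end up nearly identical, as the review conjectures, could be checked empirically by comparing the trained W_j matrices.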
review
2017
ICLR.cc/2017/conference
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
["Soroush Mehri", "Kundan Kumar", "Ishaan Gulrajani", "Rithesh Kumar", "Shubham Jain", "Jose Sotelo", "Aaron Courville", "Yoshua Bengio"]
In this paper we propose a novel model for unconditional audio generation task that generates one audio sample at a time. We show that our model which profits from combining memory-less modules, namely autoregressive multilayer perceptron, and stateful recurrent neural networks in a hierarchical structure is de facto powerful to capture the underlying sources of variations in temporal domain for very long time on three datasets of different nature. Human evaluation on the generated samples indicate that our model is preferred over competing models. We also show how each component of the model contributes to the exhibited performance.
["Speech", "Deep learning", "Unsupervised Learning", "Applications"]
https://openreview.net/forum?id=SkxKPDv5xl
https://openreview.net/pdf?id=SkxKPDv5xl
https://openreview.net/forum?id=SkxKPDv5xl&noteId=SJeCTJfVx
Published as a conference paper at ICLR 2017SAMPLE RNN: A NUNCONDITIONAL END-TO-ENDNEURAL AUDIO GENERATION MODELSoroush MehriUniversity of MontrealKundan KumarIIT KanpurIshaan GulrajaniUniversity of MontrealRithesh KumarSSNCEShubham JainIIT KanpurJose SoteloUniversity of MontrealAaron CourvilleUniversity of MontrealCIFAR FellowYoshua BengioUniversity of MontrealCIFAR Senior FellowABSTRACTIn this paper we propose a novel model for unconditional audio generation basedon generating one audio sample at a time. We show that our model, which profitsfrom combining memory-less modules, namely autoregressive multilayer percep-trons, and stateful recurrent neural networks in a hierarchical structure is able tocapture underlying sources of variations in the temporal sequences over very longtime spans, on three datasets of different nature. Human evaluation on the gener-ated samples indicate that our model is preferred over competing models. We alsoshow how each component of the model contributes to the exhibited performance.1 I NTRODUCTIONAudio generation is a challenging task at the core of many problems of interest, such as text-to-speech synthesis, music synthesis and voice conversion. The particular difficulty of audio generationis that there is often a very large discrepancy between the dimensionality of the the raw audio signaland that of the effective semantic-level signal. Consider the task of speech synthesis, where we aretypically interested in generating utterances corresponding to full sentences. Even at a relatively lowsample rate of 16kHz, on average we will have 6,000 samples per word generated.1Traditionally, the high-dimensionality of raw audio signal is dealt with by first compressing it intospectral or hand-engineered features and defining the generative model over these features. However,when the generated signal is eventually decompressed into audio waveforms, the sample quality isoften degraded and requires extensive domain-expert corrective measures. This results in compli-cated signal processing pipelines that are to adapt to new tasks or domains. Here we propose a stepin the direction of replacing these handcrafted systems.In this work, we investigate the use of recurrent neural networks (RNNs) to model the dependenciesin audio data. We believe RNNs are well suited as they have been designed and are suited solutionsfor these tasks (see Graves (2013), Karpathy (2015), and Siegelmann (1999)). However, in practiceit is a known problem of these models to not scale well at such a high temporal resolution as is foundwhen generating acoustic signals one sample at a time, e.g., 16000 times per second. This is one ofthe reasons that Oord et al. (2016) profits from other neural modules such as one presented by Yu &Koltun (2015) to show extremely good performance.In this paper, an end-to-end unconditional audio synthesis model for raw waveforms is presentedwhile keeping all the computations tractable.2Since our model has different modules operatingat different clock-rates (which is in contrast to WaveNet), we have the flexibility in allocating theamount of computational resources in modeling different levels of abstraction. 
In particular, wecan potentially allocate very limited resource to the module responsible for sample level alignments1Statistics based on the average speaking rate of a set of TED talk speakers http://sixminutes.dlugan.com/speaking-rate/2Code https://github.com/soroushmehr/sampleRNN_ICLR2017 and samples https://soundcloud.com/samplernn/sets1Published as a conference paper at ICLR 2017operating at the clock-rate equivalent to sample-rate of the audio, while allocating more resourcesin modeling dependencies which vary very slowly in audio, for example identity of phoneme beingspoken. This advantage makes our model arbitrarily flexible in handling sequential dependencies atmultiple levels of abstraction.Hence, our contribution is threefold:1. We present a novel method that utilizes RNNs at different scales to model longer term de-pendencies in audio waveforms while training on short sequences which results in memoryefficiency during training.2. We extensively explore and compare variants of models achieving the above effect.3. We study and empirically evaluate the impact of different components of our model onthree audio datasets. Human evaluation also has been conducted to test these generativemodels.2 S AMPLE RNN M ODELIn this paper we propose SampleRNN (shown in Fig. 1), a density model for audio waveforms.SampleRNN models the probability of a sequence of waveform samples X=fx1;x2;:::;xTg(a random variable over input data sequences) as the product of the probabilities of each sampleconditioned on all previous samples:p(X) =T1Yi=0p(xi+1jx1;:::;xi) (1)RNNs are commonly used to model sequential data which can be formulated as:ht=H(ht1;xi=t) (2)p(xi+1jx1;:::;xi) =Softmax (MLP (ht)) (3)withHbeing one of the known memory cells, Gated Recurrent Units (GRUs) (Chung et al., 2014),Long Short Term Memory Units (LSTMs) (Hochreiter & Schmidhuber, 1997), or their deep varia-tions (Section 3). However, raw audio signals are challenging to model because they contain struc-ture at very different scales: correlations exist between neighboring samples as well as between onesthousands of samples apart.SampleRNN helps to address this challenge by using a hierarchy of modules, each operating at adifferent temporal resolution. The lowest module processes individual samples, and each highermodule operates on an increasingly longer timescale and a lower temporal resolution. Each moduleconditions the module below it, with the lowest module outputting sample-level predictions. Theentire hierarchy is trained jointly end-to-end by backpropagation.2.1 F RAME -LEVEL MODULESRather than operating on individual samples, the higher-level modules in SampleRNN operate onnon-overlapping frames ofFS(k)(“Frame Size”) samples at the kthlevel up in the hierarchy at atime (frames denoted by f(k)). Each frame-level module is a deep RNN which summarizes thehistory of its inputs into a conditioning vector for the next module downward.The variable number of frames we condition upon up to timestep t1is expressed by a fixed lengthhidden state or memory h(k)twheretis related to clock rate at that tier. The RNN makes a memoryupdate at timestep tas a function of the previous memory h(k)t1and an input inp(k)t. This input fortop tierk=Kis simply the input frame. For intermediate tiers ( 1<k <K ) this input is a linearcombination of conditioning vector from higher tier and current input frame. See Eqs. 
4–5.Because different modules operate at different temporal resolutions, we need to upsample eachvectorcat the output of a module into a series of r(k)vectors (where r(k)is the ratio between thetemporal resolutions of the modules) before feeding it into the input of the next module downward(Eq. 6). We do this with a set of r(k)separate linear projections.2Published as a conference paper at ICLR 2017Figure 1: Snapshot of the unrolled model at timestep iwithK= 3 tiers. As a simplification onlyone RNN and up-sampling ratio r= 4is used for all tiers.Here we are formalizing the frame-level module in tier k. Note that following equations are exclusiveto tierkand timestep tfor that specific tier. To increase the readability, unless necessary superscript(k)is not shown for t,inp(k),W(k)x,h(k),H(k),W(k)j, andr(k).inpt=(Wxf(k)t+c(k+1)t; 1<k<Kf(k=K)t ; k=K(4)ht=H(ht1;inpt) (5)c(k)(t1)r+j=Wjht; 1jr (6)Our approach of upsampling with r(k)linear projections is exactly equivalent to upsampling byadding zeros and then applying a linear convolution. This is sometimes called “perforated” upsam-pling in the context of convolutional neural networks (CNNs). It was first demonstrated to workwell in Dosovitskiy et al. (2016) and is a fairly common upsampling technique.2.2 S AMPLE -LEVEL MODULEThe lowest module (tier k= 1; Eqs. 7–9) in the SampleRNN hierarchy outputs a distribution overa samplexi+1, conditioned on the FS(1)preceding samples as well as a vector c(k=2)i from thenext higher module which encodes information about the sequence prior to that frame. As FS(1)isusually a small value and correlations in nearby samples are easy to model by a simple memorylessmodule, we implement it with a multilayer perceptron (MLP) rather than RNN which slightly speedsup the training. Assuming eirepresentsxiafter passing through embedding layer (section 2.2.1),conditional distribution in Eq. 1 can be achieved by following and for further clarity two consecutivesample-level frames are shown. In addition, Wxin Eq. 8 is simply used to linearly combine a frameand conditioning vector from above.f(1)i1=flatten ([eiFS(1);:::;ei1]) (7)f(1)i=flatten ([eiFS(1)+1;:::;ei])inp(1)i=W(1)xf(1)i+c(2)i (8)p(xi+1jx1;:::;xi) =Softmax (MLP (inp(1)i)) (9)We use a Softmax because we found that better results were obtained by discretizing the audiosignals (also see van den Oord et al. (2016)) and outputting a Multinoulli distribution rather thanusing a Gaussian or Gaussian mixture to represent the conditional density of the original real-valuedsignal. When processing an audio sequence, the MLP is convolved over the sequence, processing3Published as a conference paper at ICLR 2017each window of FS(1)samples and predicting the next sample. At generation time, the MLP is runrepeatedly to generate one sample at a time. Table 1 shows a considerable gap between the baselinemodel RNN and this model, suggesting that the proposed hierarchically structured architecture ofSampleRNN makes a big difference.2.2.1 O UTPUT QUANTIZATIONThe sample-level module models its output as a q-way discrete distribution over possible quantizedvalues ofxi(that is, the output layer of the MLP is a q-way Softmax).To demonstrate the importance of a discrete output distribution, we apply the same architecture onreal-valued data by replacing the q-way Softmax with a Gaussian Mixture Models (GMM) outputdistribution. Table 2 shows that our model outperforms an RNN baseline even when both modelsuse real-valued outputs. 
Hy9glU2Xg
SkxKPDv5xl
ICLR.cc/2017/conference/-/paper393/official/review
{"title": "Promising work, paper lacking details", "rating": "8: Top 50% of accepted papers, clear accept", "review": "Pros:\nThe authors are presenting an RNN-based alternative to wavenet, for generating audio a sample at a time.\nRNNs are a natural candidate for this task so this is an interesting alternative. Furthermore the authors claim to make significant improvement in the quality of the produces samples.\nAnother novelty here is that they use a quantitative likelihood-based measure to assess them model, in addition to the AB human comparisons used in the wavenet work.\n\nCons:\nThe paper is lacking equations that detail the model. This can be remedied in the camera-ready version.\nThe paper is lacking detailed explanations of the modeling choices:\n- It's not clear why an MLP is used in the bottom layer instead of (another) RNN.\n- It's not clear why r linear projections are used for up-sampling, instead of feeding the same state to all r samples, or use a more powerful type of transformation. \nAs the authors admit, their wavenet implementation is probably not as good as the original one, which makes the comparisons questionable. \n\nDespite the cons and given that more modeling details are provided, I think this paper will be a valuable contribution. \n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
["Soroush Mehri", "Kundan Kumar", "Ishaan Gulrajani", "Rithesh Kumar", "Shubham Jain", "Jose Sotelo", "Aaron Courville", "Yoshua Bengio"]
In this paper we propose a novel model for unconditional audio generation task that generates one audio sample at a time. We show that our model which profits from combining memory-less modules, namely autoregressive multilayer perceptron, and stateful recurrent neural networks in a hierarchical structure is de facto powerful to capture the underlying sources of variations in temporal domain for very long time on three datasets of different nature. Human evaluation on the generated samples indicate that our model is preferred over competing models. We also show how each component of the model contributes to the exhibited performance.
["Speech", "Deep learning", "Unsupervised Learning", "Applications"]
https://openreview.net/forum?id=SkxKPDv5xl
https://openreview.net/pdf?id=SkxKPDv5xl
https://openreview.net/forum?id=SkxKPDv5xl&noteId=Hy9glU2Xg
Published as a conference paper at ICLR 2017

SAMPLERNN: AN UNCONDITIONAL END-TO-END NEURAL AUDIO GENERATION MODEL

Soroush Mehri (University of Montreal), Kundan Kumar (IIT Kanpur), Ishaan Gulrajani (University of Montreal), Rithesh Kumar (SSNCE), Shubham Jain (IIT Kanpur), Jose Sotelo (University of Montreal), Aaron Courville (University of Montreal, CIFAR Fellow), Yoshua Bengio (University of Montreal, CIFAR Senior Fellow)

ABSTRACT
In this paper we propose a novel model for unconditional audio generation based on generating one audio sample at a time. We show that our model, which profits from combining memory-less modules, namely autoregressive multilayer perceptrons, and stateful recurrent neural networks in a hierarchical structure, is able to capture underlying sources of variation in temporal sequences over very long time spans, on three datasets of different nature. Human evaluation on the generated samples indicates that our model is preferred over competing models. We also show how each component of the model contributes to the exhibited performance.

1 INTRODUCTION
Audio generation is a challenging task at the core of many problems of interest, such as text-to-speech synthesis, music synthesis and voice conversion. The particular difficulty of audio generation is that there is often a very large discrepancy between the dimensionality of the raw audio signal and that of the effective semantic-level signal. Consider the task of speech synthesis, where we are typically interested in generating utterances corresponding to full sentences. Even at a relatively low sample rate of 16 kHz, on average we will have 6,000 samples per word generated.[1]

Traditionally, the high dimensionality of the raw audio signal is dealt with by first compressing it into spectral or hand-engineered features and defining the generative model over these features. However, when the generated signal is eventually decompressed into audio waveforms, the sample quality is often degraded and requires extensive domain-expert corrective measures. This results in complicated signal processing pipelines that are hard to adapt to new tasks or domains. Here we propose a step in the direction of replacing these handcrafted systems.

In this work, we investigate the use of recurrent neural networks (RNNs) to model the dependencies in audio data. We believe RNNs are well suited, as they have been designed as solutions for exactly these kinds of sequence-modeling tasks (see Graves (2013), Karpathy (2015), and Siegelmann (1999)). However, in practice it is a known problem that these models do not scale well to such a high temporal resolution as is found when generating acoustic signals one sample at a time, e.g., 16,000 times per second. This is one of the reasons that Oord et al. (2016) profits from other neural modules, such as the one presented by Yu & Koltun (2015), to show extremely good performance.

In this paper, an end-to-end unconditional audio synthesis model for raw waveforms is presented while keeping all the computations tractable.[2] Since our model has different modules operating at different clock-rates (which is in contrast to WaveNet), we have the flexibility to allocate the amount of computational resources used in modeling different levels of abstraction.
In particular, we can potentially allocate very limited resources to the module responsible for sample-level alignments, operating at a clock-rate equivalent to the sample rate of the audio, while allocating more resources to modeling dependencies which vary very slowly in audio, for example the identity of the phoneme being spoken. This advantage makes our model arbitrarily flexible in handling sequential dependencies at multiple levels of abstraction.

Hence, our contribution is threefold:
1. We present a novel method that utilizes RNNs at different scales to model longer-term dependencies in audio waveforms while training on short sequences, which results in memory efficiency during training.
2. We extensively explore and compare variants of models achieving the above effect.
3. We study and empirically evaluate the impact of different components of our model on three audio datasets. Human evaluation has also been conducted to test these generative models.

[1] Statistics based on the average speaking rate of a set of TED talk speakers: http://sixminutes.dlugan.com/speaking-rate/
[2] Code: https://github.com/soroushmehr/sampleRNN_ICLR2017 and samples: https://soundcloud.com/samplernn/sets

2 SAMPLERNN MODEL
In this paper we propose SampleRNN (shown in Fig. 1), a density model for audio waveforms. SampleRNN models the probability of a sequence of waveform samples X = \{x_1, x_2, \ldots, x_T\} (a random variable over input data sequences) as the product of the probabilities of each sample conditioned on all previous samples:

p(X) = \prod_{i=0}^{T-1} p(x_{i+1} \mid x_1, \ldots, x_i)    (1)

RNNs are commonly used to model sequential data, which can be formulated as:

h_t = \mathcal{H}(h_{t-1}, x_{i=t})    (2)
p(x_{i+1} \mid x_1, \ldots, x_i) = \mathrm{Softmax}(\mathrm{MLP}(h_t))    (3)

with \mathcal{H} being one of the known memory cells, Gated Recurrent Units (GRUs) (Chung et al., 2014), Long Short-Term Memory units (LSTMs) (Hochreiter & Schmidhuber, 1997), or their deep variations (Section 3). However, raw audio signals are challenging to model because they contain structure at very different scales: correlations exist between neighboring samples as well as between ones thousands of samples apart.

SampleRNN helps to address this challenge by using a hierarchy of modules, each operating at a different temporal resolution. The lowest module processes individual samples, and each higher module operates on an increasingly longer timescale and a lower temporal resolution. Each module conditions the module below it, with the lowest module outputting sample-level predictions. The entire hierarchy is trained jointly end-to-end by backpropagation.

2.1 FRAME-LEVEL MODULES
Rather than operating on individual samples, the higher-level modules in SampleRNN operate on non-overlapping frames of FS^{(k)} ("Frame Size") samples at the k-th level up in the hierarchy at a time (frames denoted by f^{(k)}). Each frame-level module is a deep RNN which summarizes the history of its inputs into a conditioning vector for the next module downward.

The variable number of frames we condition upon up to timestep t-1 is expressed by a fixed-length hidden state or memory h_t^{(k)}, where t is related to the clock rate at that tier. The RNN makes a memory update at timestep t as a function of the previous memory h_{t-1}^{(k)} and an input inp_t^{(k)}. This input for the top tier k = K is simply the input frame. For intermediate tiers (1 < k < K) this input is a linear combination of the conditioning vector from the higher tier and the current input frame. See Eqs. 4-5.
Because different modules operate at different temporal resolutions, we need to upsample each vector c at the output of a module into a series of r^{(k)} vectors (where r^{(k)} is the ratio between the temporal resolutions of the modules) before feeding it into the input of the next module downward (Eq. 6). We do this with a set of r^{(k)} separate linear projections.

Figure 1: Snapshot of the unrolled model at timestep i with K = 3 tiers. As a simplification, only one RNN and an up-sampling ratio of r = 4 are used for all tiers.

Here we formalize the frame-level module in tier k. Note that the following equations are exclusive to tier k and timestep t for that specific tier. To increase readability, unless necessary, the superscript (k) is not shown for t, inp^{(k)}, W_x^{(k)}, h^{(k)}, \mathcal{H}^{(k)}, W_j^{(k)}, and r^{(k)}.

inp_t = \begin{cases} W_x f_t^{(k)} + c_t^{(k+1)}, & 1 < k < K \\ f_t^{(k=K)}, & k = K \end{cases}    (4)

h_t = \mathcal{H}(h_{t-1}, inp_t)    (5)

c^{(k)}_{(t-1)r+j} = W_j h_t, \quad 1 \le j \le r    (6)

Our approach of upsampling with r^{(k)} linear projections is exactly equivalent to upsampling by adding zeros and then applying a linear convolution. This is sometimes called "perforated" upsampling in the context of convolutional neural networks (CNNs). It was first demonstrated to work well in Dosovitskiy et al. (2016) and is a fairly common upsampling technique.

2.2 SAMPLE-LEVEL MODULE
The lowest module (tier k = 1; Eqs. 7-9) in the SampleRNN hierarchy outputs a distribution over a sample x_{i+1}, conditioned on the FS^{(1)} preceding samples as well as a vector c_i^{(k=2)} from the next higher module which encodes information about the sequence prior to that frame. As FS^{(1)} is usually a small value and correlations in nearby samples are easy to model by a simple memoryless module, we implement it with a multilayer perceptron (MLP) rather than an RNN, which slightly speeds up the training. Assuming e_i represents x_i after passing through the embedding layer (Section 2.2.1), the conditional distribution in Eq. 1 can be achieved as follows; for further clarity, two consecutive sample-level frames are shown. In addition, W_x in Eq. 8 is simply used to linearly combine a frame and the conditioning vector from above.

f^{(1)}_{i-1} = flatten([e_{i-FS^{(1)}}, \ldots, e_{i-1}])    (7)
f^{(1)}_{i} = flatten([e_{i-FS^{(1)}+1}, \ldots, e_{i}])

inp^{(1)}_i = W^{(1)}_x f^{(1)}_i + c^{(2)}_i    (8)

p(x_{i+1} \mid x_1, \ldots, x_i) = \mathrm{Softmax}(\mathrm{MLP}(inp^{(1)}_i))    (9)

We use a Softmax because we found that better results were obtained by discretizing the audio signals (also see van den Oord et al. (2016)) and outputting a Multinoulli distribution, rather than using a Gaussian or Gaussian mixture to represent the conditional density of the original real-valued signal. When processing an audio sequence, the MLP is convolved over the sequence, processing each window of FS^{(1)} samples and predicting the next sample. At generation time, the MLP is run repeatedly to generate one sample at a time. Table 1 shows a considerable gap between the baseline RNN model and this model, suggesting that the proposed hierarchically structured architecture of SampleRNN makes a big difference.

2.2.1 OUTPUT QUANTIZATION
The sample-level module models its output as a q-way discrete distribution over possible quantized values of x_i (that is, the output layer of the MLP is a q-way Softmax).

To demonstrate the importance of a discrete output distribution, we apply the same architecture on real-valued data by replacing the q-way Softmax with a Gaussian Mixture Model (GMM) output distribution. Table 2 shows that our model outperforms an RNN baseline even when both models use real-valued outputs. However, samples from the real-valued model are almost indistinguishable from random noise.
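To make the upsampling step of Eq. 6 above concrete, here is a minimal NumPy sketch (an added illustration with assumed sizes, not the authors' code) of expanding each frame-level output h_t into r conditioning vectors with r separate linear projections:

```python
import numpy as np

# "Perforated" upsampling from Eq. 6: each frame-level output h_t is mapped to
# r conditioning vectors c_{(t-1)r+1}, ..., c_{tr} by r separate linear projections.
rng = np.random.default_rng(0)
dim, r, T = 1024, 4, 5                         # hidden size, ratio, #frames (assumed)
W = rng.standard_normal((r, dim, dim)) * 0.01  # r projection matrices W_1 .. W_r
h = rng.standard_normal((T, dim))              # frame-level RNN outputs h_1 .. h_T

# result[t, j] = W_j h_t, then flatten the (frame, j) axes into one time axis
c = np.einsum('rij,tj->tri', W, h).reshape(T * r, dim)
print(c.shape)                                 # (20, 1024): one vector per lower-tier step
```

As noted in the text, this is equivalent to zero-stuffing the sequence of h_t by a factor of r and applying a linear convolution whose taps are the matrices W_j.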
In this work we use linear quantization with q = 256, corresponding to a per-sample bit depth of 8. Unintuitively, we realized that even linearly decreasing the bit depth (the resolution of each audio sample) from 16 to 8 can ease the optimization procedure, while generated samples still have reasonable quality and are artifact-free.

In addition, early on we noticed that the model can achieve better performance and generation quality when we embed the quantized input values before passing them through the sample-level MLP (see Table 4). The embedding step maps each of the q discrete values to a real-valued vector embedding. However, real-valued raw samples are still used as input to the higher modules.

2.2.2 CONDITIONALLY INDEPENDENT SAMPLE OUTPUTS
To demonstrate the importance of a sample-level autoregressive module, we try replacing it with "Multi-Softmax" (see Table 4), where the prediction of each sample x_i depends only on the conditioning vector c from Eq. 9. In this configuration, the model outputs an entire frame of FS^{(1)} samples at a time, modeling all samples in a frame as conditionally independent of each other. We find that this Multi-Softmax model (which lacks a sample-level autoregressive module) scores significantly worse in terms of log-likelihood and fails to generate convincing samples. This suggests that modeling the joint distribution of the acoustic samples inside each frame is very important in order to obtain good acoustic generation. We found this to be true even when the frame size is reduced, with the best results always obtained with a frame size of 1, i.e., generating only one acoustic sample at a time.

2.3 TRUNCATED BPTT
Training recurrent neural networks on long sequences can be very computationally expensive. Oord et al. (2016) avoid this problem by using a stack of dilated convolutions instead of any recurrent connections. However, when they can be trained efficiently, recurrent networks have been shown to be very powerful and expressive sequence models. We enable efficient training of our recurrent model using truncated backpropagation through time, splitting each sequence into short subsequences and propagating gradients only to the beginning of each subsequence. We experiment with different subsequence lengths and demonstrate that we are able to train our networks, which model very long-term dependencies, despite backpropagating through relatively short subsequences.

Table 3 shows that by increasing the subsequence length, performance substantially increases alongside train-time memory usage and convergence time. Yet it is noteworthy that our best models have been trained on subsequences of length 512, which corresponds to 32 milliseconds, a small fraction of the length of a single phoneme of human speech, while generated samples exhibit longer word-like structures.

Despite the aforementioned fact, this generative model can mimic the existing long-term structure of the data, which results in more natural and coherent samples that are preferred by human listeners. (More on this in Sections 3.2-3.3.) This is due to the fast updates from TBPTT and specialized frame-level modules (Section 2.1), with top tiers designed to model a lower resolution of the signal while leaving the process of filling in the details to lower tiers.
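The truncated-BPTT scheme of Section 2.3 can be sketched as follows (an added illustration using PyTorch with toy sizes; this is not the authors' training loop): the hidden state is carried across subsequences so the model stays stateful, while gradients are cut at each subsequence boundary.

```python
import torch
import torch.nn as nn

# Stateful GRU trained with truncated BPTT: keep the hidden state across
# 512-sample subsequences, but detach it so gradients stop at each boundary.
rnn = nn.GRU(input_size=1, hidden_size=64, batch_first=True)
seq = torch.randn(1, 8 * 16000, 1)            # one 8-second clip at 16 kHz (toy data)
subseq_len = 512                              # 32 ms at 16 kHz

h = None                                      # initial state (zeros inside the GRU)
for start in range(0, seq.size(1), subseq_len):
    chunk = seq[:, start:start + subseq_len, :]
    out, h = rnn(chunk, h)                    # state flows forward across chunks
    h = h.detach()                            # ...but the gradient does not
```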
3 EXPERIMENTS AND RESULTS
In this section we introduce three datasets which have been chosen to evaluate the proposed architecture for modeling raw acoustic sequences. The description of each dataset and its preprocessing is as follows:

- Blizzard, a dataset presented by Prahallad et al. (2013) for the speech synthesis task, contains 315 hours of a single female voice actor in English; however, for our experiments we are using only 20.5 hours. The training/validation/test split is 86%-7%-7%.
- Onomatopoeia (courtesy of Ubisoft), a relatively small dataset with 6,738 sequences adding up to 3.5 hours, consists of human vocal sounds like grunting, screaming, panting, heavy breathing, and coughing. The diversity of sound types and the fact that these sounds were recorded from 51 actors and many categories makes it a challenging task. To add to that, this data is extremely unbalanced. The training/validation/test split is 92%-4%-4%.
- The Music dataset is the collection of all 32 of Beethoven's piano sonatas publicly available on https://archive.org/, amounting to 10 hours of non-vocal audio. The training/validation/test split is 88%-6%-6%.

See Fig. 2 for a visual demonstration of examples from the datasets and generated samples. For all the datasets we are using a 16 kHz sample rate and 16-bit depth. For the Blizzard and Music datasets, preprocessing simply amounts to chunking the long audio files into 8-second-long sequences on which we will perform truncated backpropagation through time. Each sequence in the Onomatopoeia dataset is a few seconds long, ranging from 1 to 11 seconds. To train the models on this dataset, zero-padding has been applied to make all the sequences in a mini-batch have the same length, and the corresponding cost values (for the predictions over the added 0s) are ignored when computing the gradients.

We particularly explored two gated variants of RNNs: GRUs and LSTMs. For the case of LSTMs, the forget gate bias is initialized with a large positive value of 3, as recommended by Zaremba (2015) and Gers (2001), which has been shown to be beneficial for learning long-term dependencies.

As for models that take real-valued input, e.g. the RNN-GMM and SampleRNN-GMM (with 4 components), normalization is applied per audio sample with the global mean and standard deviation obtained from the train split. For most of our experiments, where the model demands discrete input, binning was applied per audio sample.

All the models have been trained with teacher forcing and stochastic gradient descent (mini-batch size 128) to minimize the Negative Log-Likelihood (NLL) in bits per dimension (per audio sample). Gradients were hard-clipped to remain in the [-1, 1] range. Update rules from the Adam optimizer (Kingma & Ba, 2014) (\beta_1 = 0.9, \beta_2 = 0.999, and \epsilon = 10^{-8}) with an initial learning rate of 0.001 were used to adjust the parameters. For training each model, random search over hyper-parameter values (Bergstra & Bengio, 2012) was conducted. The initial RNN state of all the RNN-based models was always learnable. Weight Normalization (Salimans & Kingma, 2016) has been used for all the linear layers in the model (except for the embedding layer) to accelerate the training procedure. The size of the embedding layer was 256, initialized from a standard normal distribution. Orthogonal weight matrices were used for hidden-to-hidden connections, and other weight matrices were initialized similarly to He et al. (2015). In the final model, we found GRUs to work best (slightly better than LSTMs). The number of hidden units was 1024 for all GRUs (1 layer per tier for the 3-tier model and 3 layers for the 2-tier model) and for the MLPs (3 fully connected layers with ReLU activation, with output dimension 1024 for the first two layers and 256 for the final layer before the softmax). Also, FS^{(1)} = FS^{(2)} = 2 and FS^{(3)} = 8 were found to result in the lowest NLL.
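Since every result below is reported as NLL in bits per audio sample, here is a short sketch of the metric as described in the text (an added illustration, not the authors' evaluation code): the cross-entropy of the q-way softmax converted from nats to bits.

```python
import numpy as np

def nll_bits_per_sample(logits, targets):
    # logits: (N, q) unnormalized scores; targets: (N,) integer bins in {0, ..., q-1}
    logits = logits - logits.max(axis=1, keepdims=True)                    # stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    nll_nats = -log_probs[np.arange(len(targets)), targets].mean()
    return nll_nats / np.log(2.0)                                          # nats -> bits

rng = np.random.default_rng(0)
# A uniform 256-way prediction costs 8 bits per sample; the models in the tables
# below reach roughly 1-2 bits.
print(nll_bits_per_sample(np.zeros((4, 256)), rng.integers(0, 256, size=4)))  # 8.0
```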
Figure 2: Examples from the datasets (columns: Blizzard, Onomatopoeia, Music) compared to samples from our models (rows: real data, SampleRNN 2-tier, SampleRNN 3-tier). In the first 3 rows, 2 seconds of audio are shown. In the bottom 3 rows, 100 milliseconds of audio are shown. Rows 1 and 4 are ground truth, from which one can see how the datasets look different and have complex structure in low resolution, which the frame-level component of the SampleRNN is designed to capture. Samples also to some extent mimic the same global structure. At the same time, zoomed-in samples of our model show that it can perfectly resemble the high-resolution structure present in the data as well.

Table 1: Test NLL in bits for the three presented datasets.
Model                   Blizzard   Onomatopoeia   Music
RNN (Eq. 2)             1.434      2.034          1.410
WaveNet (re-impl.)      1.480      2.285          1.464
SampleRNN (2-tier)      1.392      2.026          1.076
SampleRNN (3-tier)      1.387      1.990          1.159

Table 2: Average NLL on the Blizzard test set for real-valued models.
Model                      Average Test NLL
RNN-GMM                    -2.415
SampleRNN-GMM (2-tier)     -2.782

Table 3: Effect of subsequence length on NLL (bits per audio sample) computed on the Blizzard validation set.
Subsequence Length   32      64      128     256     512
NLL Validation       1.575   1.468   1.412   1.391   1.364

Table 4: Test (validation) set NLL (bits per audio sample) for Blizzard. Variants of SampleRNN are provided to compare the contribution of each component to performance.
Model                 NLL Test (Validation)
SampleRNN (2-tier)    1.392 (1.369)
Without Embedding     1.566 (1.539)
Multi-Softmax         1.685 (1.656)

3.1 WAVENET RE-IMPLEMENTATION
We implemented the WaveNet architecture as described in Oord et al. (2016). Ideally, we would have liked to replicate their model exactly, but owing to missing details of architecture and hyper-parameters, as well as the limited compute power at our disposal, we made our own design choices so that the model would fit on a single GPU while having a receptive field of around 250 milliseconds and a reasonable number of updates per unit time. Although our model is very similar to WaveNet, the design choices, e.g. the number of convolution filters in each dilated convolution layer, the length of the target sequence to train on simultaneously (one can train with a single target with all samples in the receptive field as input, or with a target sequence of length T with input of size receptive field + T - 1), batch size, etc., might make our implementation different from what the authors have done in the original WaveNet model. Hence, we note here that although we did our best at exactly reproducing their results, there would very likely be different choices of hyper-parameters between our implementation and that of the authors.

For our WaveNet implementation, we have used 4 dilated convolution blocks, each having 10 dilated convolution layers with dilations 1, 2, 4, 8, up to 512. Hence, our network has a receptive field of 4092 acoustic samples, i.e.
the parameters of the multinomial distribution of the sample at time step t are p(x_i) = f_\theta(x_{i-1}, x_{i-2}, \ldots, x_{i-4092}), where \theta are the model parameters. We train on a target sequence length of 1600 and use a batch size of 8. Each dilated convolution filter has size 2 and the number of output channels is 64 for each dilated convolutional layer (128 filters in total due to the gated non-linearity). We trained this model using the Adam optimizer with a fixed global learning rate of 0.001 for the Blizzard dataset and 0.0001 for the Onomatopoeia and Music datasets. We trained these models for about one week on a GeForce GTX TITAN X. We dropped the learning rate in the Blizzard experiment to 0.0001 after around 3 days of training.

3.2 HUMAN EVALUATION
Apart from reporting NLL, we conducted AB preference tests for random samples from four models trained on the Blizzard dataset. For unconditional generation of speech, which at best sounds like mumbling, this type of test is the one which is better suited. Competing models were the RNN, SampleRNN (2-tier), SampleRNN (3-tier), and our implementation of WaveNet. The rest of the models were excluded as the quality of their samples was definitely lower, and also to keep the number of pairwise comparison tests manageable. We will release the samples that have been used in this test as well.

All the samples were set to have the same volume. Every user is then shown a set of twenty pairs of samples, with one random pair at a time. Each pair had samples from two different models. The human evaluator is asked to listen to the samples and has the option of choosing between the two models or choosing not to prefer either of them. Hence, we have a quantification of preference between every pair of models. We used the online tool made publicly available by Jillings et al. (2015).

The results in Fig. 3 clearly point out that SampleRNN (3-tier) is the winner by a huge margin in terms of preference by human raters, followed by SampleRNN (2-tier) and afterward the two other models, which matches the performance comparison in Table 1.

The same evaluation was conducted for the Music dataset, except for an additional filtering process of samples. Specific to only this dataset, we observed that a batch of generated samples from the competing models (this time restricted to the RNN, SampleRNN (2-tier), and SampleRNN (3-tier)) were either music-like or random noise. For all these models we only considered random samples that were not random noise. Fig. 4 is dedicated to the results of the human evaluation on the Music dataset.

Figure 3: Pairwise comparison of the 4 best models based on the votes from listeners, conducted on samples generated from models trained on the Blizzard dataset. [Preference percentages, model A / model B / no preference: 2-tier vs. RNN: 79.0 / 18.0 / 3.0; 3-tier vs. RNN: 84.2 / 8.9 / 6.9; WaveNet vs. RNN: 22.4 / 63.3 / 14.3; 3-tier vs. 2-tier: 84.8 / 10.1 / 5.1; 2-tier vs. WaveNet: 60.2 / 32.0 / 7.8; 3-tier vs. WaveNet: 89.0 / 7.0 / 4.0.]

Figure 4: Pairwise comparison of the 3 best models based on the votes from listeners, conducted on samples generated from models trained on the Music dataset. [Preference percentages, model A / model B / no preference: 2-tier vs. RNN: 85.1 / 2.3 / 12.6; 3-tier vs. RNN: 83.5 / 4.7 / 11.8; 3-tier vs. 2-tier: 32.6 / 57.0 / 10.5.]
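Before the memory-span experiment of Section 3.3, a quick arithmetic check of the receptive field quoted for the re-implemented WaveNet in Section 3.1 (an added illustration; the dilation pattern is taken from the text, the code is not the authors'):

```python
# 4 dilated-convolution blocks, each with kernel size 2 and dilations 1, 2, 4, ..., 512.
# With kernel size 2, each layer extends the causal context by its dilation, so the
# number of past samples visible to the output is the sum of all dilations.
blocks = 4
dilations = [2 ** i for i in range(10)]        # 1, 2, 4, ..., 512
past_samples = blocks * sum(dilations)
print(past_samples, past_samples / 16000)      # 4092 samples, ~0.256 s at 16 kHz
```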
3.3 QUANTIFYING INFORMATION RETENTION
For the last experiment we are interested in measuring the memory span of the model. We trained our model, SampleRNN (3-tier), with the best hyper-parameters on a dataset of 2 speakers reading audio books, one male and one female, with mean fundamental frequencies of 125.3 and 201.8 Hz, respectively. Each speaker has roughly 10 hours of audio in the dataset, which has been preprocessed similarly to Blizzard. We observed that the model learned to stay consistent, generating samples from the same speaker without having any knowledge about the speaker ID or any other conditioning information. This effect is more apparent here in comparison to the unbalanced Onomatopoeia dataset, where the model sometimes mixes two different categories of sounds.

Another experiment was conducted to test the effect of memory and study the effective memory horizon. We inject 1 second of silence in the middle of the sampling procedure in order to see whether the model will remember to generate from the same speaker or not. Initially, when sampling, we let the model generate 2 seconds of audio as it normally does. From 2 to 3 seconds, instead of feeding back the generated sample at each timestep, a silent token (zero amplitude) is fed. From 3 to 5 seconds we again sample normally, feeding back the generated token.

We did classification based on the mean fundamental frequency of the speakers for the first and last 2 seconds. In 83% of samples, SampleRNN generated from the same person in the two separate segments. This is in contrast to a model with a fixed past window like WaveNet, where injecting 16,000 silent tokens (3.3 times the receptive field size) is equivalent to generating from scratch, which has a 50% chance of matching (assuming each 2-second segment is coherent and not a mixed sound of two speakers).

4 RELATED WORK
Our work is related to earlier work on auto-regressive multi-layer neural networks, starting with Bengio & Bengio (1999), then NADE (Larochelle & Murray, 2011) and more recently PixelRNN (van den Oord et al., 2016). Similar to how they tractably model the joint distribution over units of the data (e.g. words in sentences, pixels in images, etc.) through an auto-regressive decomposition, we transform the joint distribution of acoustic samples using Eq. 1.

The idea of having parts of the model running at different clock rates is related to multi-scale RNNs (Schmidhuber, 1992; El Hihi & Bengio, 1995; Koutnik et al., 2014; Sordoni et al., 2015; Serban et al., 2016).

Chung et al. (2015) also attempt to model raw audio waveforms, which is in contrast to traditional approaches which use spectral features, as in Tokuda et al. (2013), Bertrand et al. (2008), and Lee et al. (2009).

Our work is closely related to WaveNet (Oord et al., 2016), which is why we have made the above comparisons, and this makes it interesting to compare the effect of adding higher-level RNN stages working at a low resolution. Similar to that work, our models generate one acoustic sample at a time conditioned on all previously generated samples. We also share the preprocessing step of quantizing the acoustics into bins. Unlike that model, we have different modules in our models running at different clock-rates. In contrast to WaveNets, we mitigate the problem of long-term dependency with a hierarchical structure and the use of stateful RNNs, i.e.
we will always propagate hidden states to the next training sequence, although the gradient of the loss will not take into account the samples in the previous training sequence.

5 DISCUSSION AND CONCLUSION
We propose a novel model that can address unconditional audio generation in the raw acoustic domain, which until recently has typically been done with hand-crafted features. We are able to show that a hierarchy of time scales and frequent updates will help to overcome the problem of modeling extremely high-resolution temporal data. That allows us, for this particular application, to learn the data manifold directly from audio samples. We show that this model can generalize well and generate samples on three datasets that are different in nature. We also show that the samples generated by this model are preferred by human raters.

Success in this application, with a general-purpose solution as proposed here, opens up room for more improvement when specific domain knowledge is applied. This method, however, proposed with the audio generation application in mind, can easily be adapted to other tasks that require learning the representation of sequential data with high temporal resolution and long-range complex structure.

ACKNOWLEDGMENTS
The authors would like to thank João Felipe Santos and Kyle Kastner for insightful comments and discussion. We would like to thank the Theano Development Team (2016) (http://deeplearning.net/software/theano/) and MILA staff. We acknowledge the support of the following agencies for research funding and computing support: NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. Jose Sotelo also thanks the Consejo Nacional de Ciencia y Tecnología (CONACyT) as well as the Secretaría de Educación Pública (SEP) for their support. This work was a collaboration with Ubisoft.

REFERENCES
Yoshua Bengio and Samy Bengio. Modeling high-dimensional discrete data with multi-layer neural networks. In NIPS, volume 99, pp. 400-406, 1999.
James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(Feb):281-305, 2012.
Alexander Bertrand, Kris Demuynck, Veronique Stouten, et al. Unsupervised learning of auditory filter banks using non-negative matrix factorisation. In 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 4713-4716. IEEE, 2008.
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in Neural Information Processing Systems, pp. 2980-2988, 2015.
Alexey Dosovitskiy, Jost Springenberg, Maxim Tatarchenko, and Thomas Brox. Learning to generate chairs, tables and cars with convolutional networks. 2016.
Salah El Hihi and Yoshua Bengio. Hierarchical recurrent neural networks for long-term dependencies. In NIPS, volume 400, pp. 409. Citeseer, 1995.
Felix Gers. Long Short-Term Memory in Recurrent Neural Networks. PhD thesis, Universität Hannover, 2001.
Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification.
In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026-1034, 2015.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
Nicholas Jillings, David Moffat, Brecht De Man, and Joshua D. Reiss. Web Audio Evaluation Tool: A browser-based listening test environment. In 12th Sound and Music Computing Conference, July 2015.
Andrej Karpathy. The unreasonable effectiveness of recurrent neural networks. Andrej Karpathy blog, 2015.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Jan Koutnik, Klaus Greff, Faustino Gomez, and Juergen Schmidhuber. A clockwork RNN. arXiv preprint arXiv:1402.3511, 2014.
Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In AISTATS, volume 1, pp. 2, 2011.
Honglak Lee, Peter Pham, Yan Largman, and Andrew Y Ng. Unsupervised feature learning for audio classification using convolutional deep belief networks. In Advances in Neural Information Processing Systems, pp. 1096-1104, 2009.
Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.
Kishore Prahallad, Anandaswarup Vadapalli, Naresh Elluru, G Mantena, B Pulugundla, P Bhaskararao, HA Murthy, S King, V Karaiskos, and AW Black. The Blizzard Challenge 2013 - Indian language task. In Blizzard Challenge Workshop 2013, 2013.
Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv preprint arXiv:1602.07868, 2016.
Jürgen Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234-242, 1992.
Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI-16), 2016.
Hava T Siegelmann. Computation beyond the Turing limit. In Neural Networks and Analog Computation, pp. 153-164. Springer, 1999.
Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and Jian-Yun Nie. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pp. 553-562. ACM, 2015.
Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/1605.02688.
Keiichi Tokuda, Yoshihiko Nankaku, Tomoki Toda, Heiga Zen, Junichi Yamagishi, and Keiichiro Oura. Speech synthesis based on hidden Markov models. Proceedings of the IEEE, 101(5):1234-1252, 2013.
Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122, 2015.
Wojciech Zaremba. An empirical exploration of recurrent network architectures. 2015.

APPENDIX A: A MODEL VARIANT: SAMPLERNN-WAVENET HYBRID
The SampleRNN-WaveNet model has two modules operating at two different clock-rates.
The slower clock-rate module (frame-level module) sees one frame (each of which has size FS) at a time, while the faster clock-rate component (sample-level component) sees one acoustic sample at a time, i.e. the ratio of clock-rates for these two modules is the size of a single frame. The number of sequential steps for the frame-level component is therefore FS times lower. We repeat the output of each step of the frame-level component FS times so that the number of time-steps of the outputs of both components match. The outputs of these two modules are concatenated at every time-step and then passed through non-linearities, independently at every time-step, before generating the final output.

In our experiments, we kept the size of a single frame (FS) at 128. We tried two variants of this model: 1. a fully convolutional WaveNet and 2. an RNN-WaveNet. In the fully convolutional WaveNet, both modules described above are implemented using dilated convolutions, as described in the original WaveNet model. In the RNN-WaveNet, we use a high-capacity RNN in the frame-level module to model the dependency between frames. The sample-level WaveNet in the RNN-WaveNet has a receptive field of size 509 samples from the past.

Although these models are designed with the intention of combining the two models to harness their best features, preliminary experiments show that this variant does not meet our expectations at the moment, which points to possible future work.
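To illustrate the combination step described in this appendix (an added sketch with assumed sizes, not the authors' implementation), the frame-level outputs are repeated FS times and concatenated with the per-sample features so both streams share the same per-sample time axis:

```python
import numpy as np

# Hybrid combination step: repeat each frame-level output FS times, then
# concatenate it with the sample-level features at every time-step.
rng = np.random.default_rng(0)
FS, n_frames, d_frame, d_sample = 128, 4, 32, 16          # hypothetical sizes
frame_out = rng.standard_normal((n_frames, d_frame))      # one vector per frame
sample_feat = rng.standard_normal((n_frames * FS, d_sample))  # one vector per sample

frame_up = np.repeat(frame_out, FS, axis=0)               # (n_frames*FS, d_frame)
combined = np.concatenate([frame_up, sample_feat], axis=1)
print(combined.shape)                                     # (512, 48): input to the output non-linearities
```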
Bkgf7ZeM4e
B1G9tvcgx
ICLR.cc/2017/conference/-/paper412/official/review
{"title": "There are major issues", "rating": "4: Ok but not good enough - rejection", "review": "The paper proposes an approach to the task of multimodal machine translation, namely to the case when an image is available that corresponds to both source and target sentences. \n\nThe idea seems to be to use a latent variable model and condition it on the image. In practice from Equation 3 and Figure 3 one can see that the image is only used during training to do inference. That said, the approach appears flawed, because the image is not really used for translation.\n\nExperimental results are weak. If the model selection was done properly, that is using the validation set, the considered model would only bring 0.6 METEOR and 0.2 BLEU advantage over the baseline. In the view of the overall variance of the results, these improvements can not be considered significant. \n\nThe qualitative analysis in Subsection 4.4 appears inconclusive and unconvincing.\n\nOverall, there are major issues with both the approach and the execution of the paper.\n\n", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Neural Machine Translation with Latent Semantic of Image and Text
["Joji Toyama", "Masanori Misono", "Masahiro Suzuki", "Kotaro Nakayama", "Yutaka Matsuo"]
Although attention-based Neural Machine Translation have achieved great success, attention-mechanism cannot capture the entire meaning of the source sentence because the attention mechanism generates a target word depending heavily on the relevant parts of the source sentence. The report of earlier studies has introduced a latent variable to capture the entire meaning of sentence and achieved improvement on attention-based Neural Machine Translation. We follow this approach and we believe that the capturing meaning of sentence benefits from image information because human beings understand the meaning of language not only from textual information but also from perceptual information such as that gained from vision. As described herein, we propose a neural machine translation model that introduces a continuous latent variable containing an underlying semantic extracted from texts and images. Our model, which can be trained end-to-end, requires image information only when training. Experiments conducted with an English–German translation task show that our model outperforms over the baseline.
["neural machine translation", "latent semantic", "image", "entire meaning", "source sentence", "meaning", "image information", "model", "text"]
https://openreview.net/forum?id=B1G9tvcgx
https://openreview.net/pdf?id=B1G9tvcgx
https://openreview.net/forum?id=B1G9tvcgx&noteId=Bkgf7ZeM4e
Under review as a conference paper at ICLR 2017NEURAL MACHINE TRANSLATION WITH LATENT SE-MANTIC OF IMAGE AND TEXTJoji Toyama, Masanori Misonoy, Masahiro Suzuki, Kotaro Nakayama & Yutaka MatsuoGraduate School of Engineering,yGraduate School of Information Science and TechnologyThe University of TokyoHongo, Tokyo, Japanftoyama,misono,masa,k-nakayama,matsuo g@weblab.t.u-tokyo.ac.jpABSTRACTAlthough attention-based Neural Machine Translation have achieved great suc-cess, attention-mechanism cannot capture the entire meaning of the source sen-tence because the attention mechanism generates a target word depending heavilyon the relevant parts of the source sentence. The report of earlier studies has in-troduced a latent variable to capture the entire meaning of sentence and achievedimprovement on attention-based Neural Machine Translation. We follow this ap-proach and we believe that the capturing meaning of sentence benefits from im-age information because human beings understand the meaning of language notonly from textual information but also from perceptual information such as thatgained from vision. As described herein, we propose a neural machine transla-tion model that introduces a continuous latent variable containing an underlyingsemantic extracted from texts and images. Our model, which can be trained end-to-end, requires image information only when training. Experiments conductedwith an English–German translation task show that our model outperforms overthe baseline.1 I NTRODUCTIONNeural machine translation (NMT) has achieved great success in recent years ( Sutskever et al. ,2014 ;Bahdanau et al. ,2015 ). In contrast to statistical machine translation, which requires huge phrase andrule tables, NMT requires much less memory. However, the most standard model, NMT with at-tention ( Bahdanau et al. ,2015 ) entails the shortcoming that the attention mechanism cannot capturethe entire meaning of a sentence because it generates a target word while depending heavily onthe relevant parts of the source sentence ( Tu et al. ,2016 ). To overcome this problem, VariationalNeural Machine Translation (VNMT), which outperforms NMT with attention introduces a latentvariable to capture the underlying semantic from source and target ( Zhang et al. ,2016 ). We followthe motivation of VNMT, which is to capture underlying semantic of a source.Image information is related to language. For example, we human beings understand the meaningof language by linking perceptual information given by the surrounding environment and language(Barsalou ,1999 ). Although it is natural and easy for humans, it is difficult for computers to un-derstand different domain’s information integrally. Solving this difficult task might, however, bringgreat improvements in natural language processing. Several researchers have attempted to link lan-guage and images such as image captioning by Xu et al. (2015 ) or image generation from sentencesbyReed et al. (2016 ). They described the possibility of integral understanding of images and text. Inmachine translation, we can expect an improvement using not only text information but also imageinformation because image information can bridge two languages.As described herein, we propose the neural machine translation model which introduces a latentvariable containing an underlying semantic extracted from texts and images. 
Our model includes anexplicit latent variable z, which has underlying semantics extracted from text and images by intro-ducing a Variational Autoencoder (V AE) ( Kingma et al. ,2014 ;Rezende et al. ,2014 ). Our model,First two authors contributed equally.1Under review as a conference paper at ICLR 2017h"h"h#h$h%h#h$h%h"&h"&h#&h$&h#&h$&'(#("($(%)#)")$h*&h*+log/#010"0#0$)#)")$h2&h2 Figure 1: Architecture of Proposed Model.Green dotted lines denote that and encoded yare used only when training.which can be trained end-to-end, requires image information only when training. As describedherein, we tackle the task with which one uses a parallel corpus and images in training, while usinga source corpus in translating. It is important to define the task in this manner because we rarelyhave a corresponding image when we want to translate a sentence. During translation, our modelgenerates a semantic variable zfrom a source, integrates variable zinto a decoder of neural machinetranslation system, and then finally generates the translation. The difference between our model andVNMT is that we use image information in addition to text information.For experiments, we used Multi30k ( Elliott et al. ,2016 ), which includes images and the correspond-ing parallel corpora of English and German. Our model outperforms the baseline with two evaluationmetrics: METEOR ( Denkowski & Lavie ,2014 ) and BLEU ( Papineni et al. ,2002 ). Moreover, weobtain some knowledge related to our model and Multi30k. Finally, we present some examples inwhich our model either improved, or worsened, the result.Our paper contributes to the neural machine translation research community in three ways.We present the first neural machine translation model to introduce a latent variable inferredfrom image and text information. We also present the first translation task with which oneuses a parallel corpus and images in training, while using a source corpus in translating.Our translation model can generate more accurate translation by training with images, es-pecially for short sentences.We present how the translation of source is changed by adding image information comparedto VNMT which does not use image information.2 B ACKGROUNDOur model is the extension of Variational Neural Machine Translation (VNMT) ( Zhang et al. ,2016 ).Our model is also viewed as one of the multimodal translation models. In our model, V AE is usedto introduce a latent variable. We describe the background of our model in this section.2.1 V ARIATIONAL NEURAL MACHINE TRANSLATIONThe VNMT translation model introduces a latent variable. This model’s architecture shown in Figure1excludes the arrow from . This model involves three parts: encoder, inferrer, and decoder. Inthe encoder, both the source and target are encoded by bidirectional-Recurrent Neural Networks(bidirectional-RNN) and a semantic representation is generated. In the inferrer, a latent variable zis2Under review as a conference paper at ICLR 2017modeled from a semantic representation by introducing V AE. In the decoder, a latent variable zisintegrated in the Gated Recurrent Unit (GRU) decoder; also, a translation is generated.Our model is followed by architecture, except that the image is also encoded to obtain a latentvariable z.2.2 M ULTIMODAL TRANSLATIONMultimodal Translation is the task with which one might one can use a parallel corpus and images.The first papers to study multimodal translation are Elliott et al. (2015 ) and Hitschler & Riezler(2016 ). 
It was selected as a shared task in Workshop of Machine Translation 2016 (WMT161). Al-though several studies have been conducted ( Caglayan et al. ,2016 ;Huang et al. ,2016 ;Calixto et al. ,2016 ;Libovick ́y et al. ,2016 ;Rodr ́ıguez Guasch & Costa-juss `a,2016 ;Shah et al. ,2016 ), they do notshow great improvement, especially in neural machine translation ( Specia et al. ,2016 ). Here, we in-troduce end-to-end neural network translation models like our model.Caglayan et al. (2016 ) integrate an image into an NMT decoder. They simply put source contextvectors and image feature vectors extracted from ResNet-50’s ‘res4f relu’ layer ( He et al. ,2016 )into the decoder called multimodal conditional GRU. They demonstrate that their method does notsurpass the text-only baseline: NMT with attention.Huang et al. (2016 ) integrate an image into a head of source words sequence. They extract prominentobjects from the image by Region-based Convolutional Neural Networks (R-CNN) ( Girshick ,2015 ).Objects are then converted to feature vectors by VGG-19 ( Simonyan & Zisserman ,2014 ) and areput into a head of source words sequence. They demonstrate that object extraction by R-CNNcontributes greatly to the improvement. This model achieved the highest METEOR score in NMT-based models in WMT16, which we compare to our model in the experiment. We designate thismodel as CMU.Caglayan et al. (2016 ) argue that their proposed model did not achieve improvement because theyfailed to benefit from both text and images. We assume that they failed to integrate text and imagesbecause they simply put images and text into neural machine translation despite huge gap existsbetween image information and text information. Our model, however, presents the possibility ofbenefitting from images and text because text and images are projected to their common semanticspace so that the gap of images and text would be filled.2.3 V ARIATIONAL AUTO ENCODERV AE was proposed in an earlier report of the literature Kingma et al. (2014 );Rezende et al. (2014 ).Given an observed variable x, V AE introduces a continuous latent variable z, with the assump-tion that xis generated from z. V AE incorporates p(xjz)andqφ(zjx)into an end-to-end neuralnetwork. The lower bound is shown below.LVAE =DKL[qφ(zjx)jjp(z)] +Eqφ(zjx)[logp(xjz)]logp(x) (1)3 N EURAL MACHINE TRANSLATION WITHLATENT SEMANTIC OFIMAGEANDTEXTWe propose a neural machine translation model which explicitly has a latent variable containing anunderlying semantic extracted from both text and image. This model can be seen as an extension ofVNMT by adding image information.Our model can be drawn as a graphical model in Figure 3. Its lower bound isL=DKL[qφ(zjx;y;)jjp(zjx)] +Eqφ(zjx;y;)[logp(yjz;x)]; (2)where x;y;;zrespectively denote the source, target, image and latent variable, and pandqφre-spectively denote the prior distribution and the approximate posterior distribution. It is noteworthy inEq. ( 2) that we want to model p(zjx;y;), which is intractable. Therefore we model qφ(zjx;y;)1http://www.statmt.org/wmt16/3Under review as a conference paper at ICLR 2017zx yzx yFigure 2: VNMTzx yzx yFigure 3: Our modelinstead, and also model prior p(zjx)so that we can generate a translation from the source in testing.Derivation of the formula is presented in the appendix.We model all distributions in Eq. ( 2) by neural networks. 
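For readability, the two lower bounds above can be restated in standard notation, together with the Jensen step behind Eq. (2); here x, y, z are the source, target, and latent variable as defined above, and v is a placeholder symbol chosen here for the image input.

\[
\mathcal{L}_{\mathrm{VAE}} = -D_{KL}\!\left[q_\phi(z \mid x)\,\|\,p(z)\right] + \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] \;\le\; \log p_\theta(x) \quad (1)
\]
\[
\mathcal{L} = -D_{KL}\!\left[q_\phi(z \mid x, y, v)\,\|\,p(z \mid x)\right] + \mathbb{E}_{q_\phi(z \mid x, y, v)}\!\left[\log p(y \mid z, x)\right] \quad (2)
\]
Eq. (2) follows from
\[
\log p(y \mid x) = \log \int q_\phi(z \mid x, y, v)\,\frac{p(z \mid x)\, p(y \mid z, x)}{q_\phi(z \mid x, y, v)}\, dz
\;\ge\; \mathbb{E}_{q_\phi(z \mid x, y, v)}\!\left[\log \frac{p(z \mid x)}{q_\phi(z \mid x, y, v)} + \log p(y \mid z, x)\right] = \mathcal{L}
\]
by Jensen's inequality.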
Our model architecture is divisible intothree parts: 1) encoder, 2) inferrer, and 3) decoder.3.1 E NCODERIn the encoder, the semantic representation heis obtained from the image, source, and target. Wepropose several methods to encode an image. We show how these methods affect the translationresult in the Experiment section. This representation is used in the inferrer. This section links to thegreen part of Figure 1.3.1.1 TEXT ENCODINGThe source and target are encoded in the same way as Bahdanau et al. (2015 ). The source is con-verted to a sequence of 1-of-k vector and is embedded to dembdimensions. We designate it as thesource sequence. Then, a source sequence is put into bidirectional RNN. Representation hiis ob-tained by concatenating ⃗hiand ⃗hi:⃗hi= RNN( ⃗hi1; Ewi);⃗hi= RNN( ⃗hi+1; Ewi);hi= [⃗hi;⃗hi],where Ewiis the embedded word in a source sentence, hi2Rdh, and ⃗hi;⃗hi2Rdh2. It is conductedthrough i= 0 toi=Tf, where Tfis the sequence length. GRU is implemented in bidirectionalRNN so that it can attain long-term dependence. Finally, we conduct mean-pooling to hiand obtainthe source representation vector as hf=1Tf∑Tfihi. The exact same process is applied to target toobtain target representation hg.3.1.2 IMAGE ENCODING AND SEMANTIC REPRESENTATIONWe use Convolutional Neural Networks (CNN) to extract feature vectors from images. We proposeseveral ways of extracting image features.Global (G) The image feature vector is extracted from the image using a CNN. With this method,we use a feature vector in the certain layer as . Then is encoded to the image represen-tation vector hsimply by affine transformation ash=W+bwhere W2Rddfc7; b2Rd: (3)Global and Objects (G+O) First we extract some prominent objects from images in some way.Then, we obtain fc7 image feature vectors from the original image and extracted objectsusing a CNN. Therefore takes a variable length. We handle in two ways: average andRNN encoder.In average ( G+O-A VG ), we first obtain intermediate image representation vector h′byaffine transformation in Eq. ( 3). Then, the average of h′becomes the image representationvector: h=∑lih′il, where lis the length of h′.4Under review as a conference paper at ICLR 2017In RNN encoder ( G+O-RNN ), we first obtain h′by affine transformation in Eq. ( 3). Then,we encode h′in the same way as we encode text in Section 3.1.1 to obtain h.Global and Objects into source and target (G+O-TXT) Thereby, we first obtain h′by affinetransformation in Eq. ( 3). Then, we put sequential vector h′into the head of the sourcesequence and target sequence. In this case, we set dto be the same dimension as demb. Infact, the source sequence including h′is only used to model qφ(zjx;y;). Context vectorc(Eq. ( 15)) and p(zjx)are computed by a source sequence that does not include h′. Weencode the source sequence including h′as Section 3.1.1 to obtain hfandhg. In this case,his not obtained. Image information is contained in hfandhg.All representation vectors hf,hgandhare concatenated to obtain a semantic representation vectorashe= [hf;hg;h], where he2Rde=2dh+d(in G+O-TXT: he= [hf;hg], where he2Rde=2dh). It is an input of the multimodal variational neural inferrer.3.2 I NFERRERWe model the posterior qφ(zjx;y;)using a neural network and also the prior p(zjx)by neuralnetwork. This section links to the black and grey part of Figure 1.3.2.1 N EURAL POSTERIOR APPROXIMATORModeling the true posterior p(zjx;y;)is usually intractable. Therefore, we consider model-ing of an approximate posterior qφ(zjx;y;)by introducing V AE. 
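As a small illustrative sketch of how the semantic representation h_e is assembled in the Global (G) setting described above: precomputed bi-RNN states are mean-pooled into h_f and h_g, the fc7 image feature is projected by the single affine map of Eq. (3), and the three vectors are concatenated. The dimensions, the random stand-in inputs, and the use of numpy are assumptions made for illustration, not the dl4mt/Theano implementation.

import numpy as np

dh, d_pi, d_fc7 = 256, 256, 4096      # illustrative sizes: bi-RNN state, image embedding, VGG fc7
rng = np.random.default_rng(0)

def mean_pool(states):
    # states: (T, dh), each row h_i = [forward; backward] bi-RNN state
    return states.mean(axis=0)

def encode_image_G(pi, W, b):
    # Eq. (3): h_pi = W pi + b, a single affine projection of the fc7 feature
    return W @ pi + b

src_states = rng.normal(size=(12, dh))        # stand-in for the source bi-GRU states
tgt_states = rng.normal(size=(14, dh))        # stand-in for the target bi-GRU states
fc7 = rng.normal(size=d_fc7)                  # stand-in for the VGG-19 fc7 feature
W, b = rng.normal(size=(d_pi, d_fc7)) * 0.01, np.zeros(d_pi)

h_f, h_g, h_pi = mean_pool(src_states), mean_pool(tgt_states), encode_image_G(fc7, W, b)
h_e = np.concatenate([h_f, h_g, h_pi])        # semantic representation fed to the inferrer
print(h_e.shape)                              # (2*dh + d_pi,) = (768,)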
We assume that the posteriorqφ(zjx;y;)has the following form:qφ(zjx;y;) =N(z;(x;y;);(x;y;)2I): (4)The mean and standard deviation of the approximate posterior are the outputs of neural net-works.Starting from the variational neural encoder, a semantic representation vector heis projected tolatent semantic space ashz=g(W(1)zhe+b(1)z); (5)where W(1)z2Rdz(de)b(1)z2Rdz.g()is an element-wise activation function, which we set astanh( ). Gaussian parameters of Eq. ( 4) are obtained through linear regression as=Whz+b;log2=Whz+b; (6)where ;log22Rdz.3.2.2 N EURAL PRIOR MODELWe model the prior distribution p(zjx)as follows:p(zjx) = N(z;′(x);′(x)2I):(7)′and′are generated in the same way as that presented in Section 3.2.1 , except for the absenceofyandas inputs. Because of the absence of representation vectors, the dimensions of weight inequation ( 5) for prior model are W′(1)z2Rdzdh;b′(1)z2Rdz. We use a reparameterization trickto obtain a representation of latent variable z:h′z=+ε,ε N (0; I). During translation, h′zisset as the mean of p(zjx). Then, h′zis projected onto the target space ash′e=g(W(2)zh′z+b(2)z)where h′e2Rde: (8)h′eis then integrated into the neural machine translation’s decoder.5Under review as a conference paper at ICLR 20173.3 D ECODERThis section links to the orange part of Figure 1. Given the source sentence xand the latent variablez, decoder defines the probability over translation yasp(yjz;x) =T∏j=1p(yjjy<j;z;x): (9)How we define the probability over translation yis fundamentally the same as VNMT, except forusing conditional GRU instead of GRU. Conditional GRU involves two GRUs and an attentionmechanism. We integrate a latent variable zinto the second GRU. We describe it in the appendix.3.4 M ODEL TRAININGMonte Carlo sampling method is used to approximate the expectation over the posterior Eq. ( 2),Eqφ(zjx;y;)1L∑Ll=1logp(yjx;h(l)z), where Lis the number of samplings. The training objec-tive is defined asL(; φ) =DKL[qφ(zjx;y;)jjp(zjx)] +1LL∑l=1T∑j=1logp(yjjy<j;x;h(l)z); (10)where hz=+ε,ε N (0; I). The first term, KL divergence, can be computed analyticallyand is differentiable because both distributions are Gaussian. The second term is also differentiable.We set Las 1. Overall, the objective Lis differentiable. Therefore, we can optimize the parameterand variational parameter φusing gradient ascent techniques.4 E XPERIMENTS4.1 E XPERIMENTAL SETUPWe used Multi30k ( Elliott et al. ,2016 ) as the dataset. Multi30k have an English description and aGerman description for each corresponding image. We handle 29,000 pairs as training data, 1,014pairs as validation data, and 1,000 pairs as test data.Before training, punctuation normalization and lowercase are applied to both English and Germansentences by Moses ( Koehn et al. ,2007 ) scripts2. Compound-word splitting is conducted only toGerman sentences using Sennrich et al. (2016 )3. Then we tokenize sentences2and use them astraining data. We produce vocabulary dictionaries from training data. The vocabulary becomes10,211 words for English and 13,180 words for German after compound-word splitting.Image features are extracted using VGG-19 CNN ( Simonyan & Zisserman ,2014 ). We use 4096-dimensional fc7 features. To extract the object’s region, we use Fast R-CNN ( Girshick ,2015 ). FastR-CNN is trained on ImageNet and MSCOCO dataset4.All weights are initialized by N(0;0:01I). We use the adadelta algorithm as an optimization method.The hyperparameters used in the experiment are presented in the Appendix. 
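As a minimal numpy sketch of the inferrer and the KL term of Eq. (10) under assumed sizes (this is not the authors' implementation; weights and inputs are random placeholders): the posterior head reads the full semantic vector h_e, the prior head reads only the source vector h_f, and the KL between the two diagonal Gaussians has the closed form used during training.

import numpy as np

dz, de, dh = 256, 768, 256            # latent, semantic, and source-representation sizes (illustrative)
rng = np.random.default_rng(0)

def gaussian_heads(h, W1, b1, W_mu, b_mu, W_lv, b_lv):
    # tanh projection to the latent space (Eq. 5), then linear mean / log-variance heads (Eq. 6)
    hz = np.tanh(W1 @ h + b1)
    return W_mu @ hz + b_mu, W_lv @ hz + b_lv

def reparameterize(mu, logvar):
    # h_z = mu + sigma * eps with eps ~ N(0, I)
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

def kl_diag_gaussians(mu_q, lv_q, mu_p, lv_p):
    # closed-form KL( N(mu_q, diag var_q) || N(mu_p, diag var_p) ), the first term of Eq. (10)
    vq, vp = np.exp(lv_q), np.exp(lv_p)
    return 0.5 * np.sum(lv_p - lv_q + (vq + (mu_q - mu_p) ** 2) / vp - 1.0)

h_e, h_f = rng.normal(size=de), rng.normal(size=dh)
post = [rng.normal(size=s) * 0.01 for s in [(dz, de), dz, (dz, dz), dz, (dz, dz), dz]]
prior = [rng.normal(size=s) * 0.01 for s in [(dz, dh), dz, (dz, dz), dz, (dz, dz), dz]]
mu_q, lv_q = gaussian_heads(h_e, *post)       # approximate posterior over z given source, target, image
mu_p, lv_p = gaussian_heads(h_f, *prior)      # prior over z given the source only
z = reparameterize(mu_q, lv_q)                # sampled during training; at translation time z = mu_p
print(kl_diag_gaussians(mu_q, lv_q, mu_p, lv_p))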
All models are trainedwith early stopping. When training, VNMT is fine-tuned by NMT model and our models are fine-tuned using VNMT. When translating, we use beam-search. The beam-size is set as 12. Beforeevaluation, we restore split words to the original state and de-tokenize2generated sentences.We implemented proposed models based on dl4mt5. Actually, dl4mt is fundamentally the samemodel as Bahdanau et al. (2015 ), except that its decoder employs conditional GRU6. We imple-mented VNMT also with conditional GRU so small difference exists between our implementation2https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/ fnormalize-punctuation, low-ercase, tokenizer, detokenizer g.perl3https://github.com/rsennrich/subword-nmt4https://github.com/rbgirshick/fast-rcnn/tree/coco5https://github.com/nyu-dl/dl4mt-tutorial6The architecture is described at https://github.com/nyu-dl/dl4mt-tutorial/blob/master/docs/cgru.pdf6Under review as a conference paper at ICLR 2017and originally proposed VNMT which employs normal GRU as a decoder. We evaluated resultsbased on METEOR and BLUE using MultEval7.4.2 R ESULTTable 1presents experiment results. It shows that our models outperforms the baseline in bothMETEOR and BLEU. Figure 4shows the plot of METEOR score of baselines and our modelsmodels in validation. Figure 5shows the plot of METEOR score and the source sentence length.Table 1: Evaluation Result on Multi30k dataset (English–German). The scores in parentheses arecomputed with ‘-norm’ parameter. NMT is dl4mt ’s NMT (in the session3 directory). The score ofthe CMU is from ( Huang et al. ,2016 ).METEOR " BLEU "val test val testNMT 51.5 (55.8) 50.5 (54.9) 35.8 33.1VNMT 52.2 (56.3) 51.1 (55.3) 37.0 34.9CMU - (-) - (54.1) - -Our Model G 50.6 (54.8) 52.4 (56.0) 34.5 36.5G+O-A VG 51.8 (55.8) 51.8 (55.8) 35.7 35.8G+O-RNN 51.8 (56.1) 51.0 (55.4) 35.9 34.9G+O-TXT 52.6 (56.8) 51.7 (56.0) 36.6 35.14.3 Q UANTITATIVE ANALYSISTable 1shows that G scores the best in proposed models. In G, we simply put the feature of theoriginal image. Actually, proposed model does not benefit from R-CNN, presumably because wecan not handle sequences of image features very well. For example, G+O-A VG uses the average ofmultiple image features, but it only makes the original image information unnecessarily confusing.Figure 4shows that G and G+O-A VG outperforms VNMT almost every time, but all model scoresincrease suddenly in the 17,000 iteration validation. We have no explanation for this behavior.Figure 4also shows that G and G+O-A VG scores fluctuate more moderately than others. We statethat G and G+O-A VG gain stability by adding image information. When one observes the differencebetween the test score and the validation score for each model, baseline scores decrease more thanproposed model scores. Especially, the G score increases in the test, simply because proposedmodels produce a better METEOR score on average, as shown in Figure 4.Figure 5shows that G and G+O-A VG make more improvements on baselines in short sentencesthan in long sentences, presumably because qφ(zjx;y;)can model zwell when a sentence isshort. Image features always have the same dimension, but underlying semantics of the image andtext differ. We infer that when the sentence is short, image feature representation can afford toapproximate the underlying semantic, but when a sentence is long, image feature representation cannot approximate the underlying semantic.Multi30k easily becomes overfitted, as shown in Figure 8and9in the appendix. 
This is presumablybecause 1) Multi30k is the descriptions of image, making the sentences short and simple, and 2)Multi30k has 29,000 sentences, which could be insufficient. In the appendix, we show how theparameter setting affects the score. One can see that decay-c has a strong effect. Huang et al.(2016 ) states that their proposed model outperforms the baseline (NMT), but we do not have thatobservation. It can be assumed that their baseline parameters are not well tuned.4.4 Q UALITATIVE ANALYSISWe presented the top 30 sentences, which make the largest METEOR score difference between Gand VNMT, to native speakers of German and get the overall comments. They were not informed of7https://github.com/jhclark/multeval, we use meteor1.5 instead of meteor1.4, which is the default ofMultEval .7Under review as a conference paper at ICLR 20170 5 10 15 20 25 30iteration (x 1000)4045505560METEORValidation METEOR ScorenmtvnmtGG+O-AVGG+O-RNNG+O-TXTFigure 4: METEOR score to the validationdata which are calculated for each 1000 itera-tions.10 15 20 25 30Source Sentence Word Length3540455055METEORTest METEOR score w.r.t. source word lengthnmtvnmtGG+O-AVGG+O-RNNG+O-TXTFigure 5: METEOR score on different groupsof the source sentence length.our model training with image in addition to text. These comments are summarized into two generalremarks. One is that G translates the meaning of the source material more accurately than VNMT.The other is that our model has more grammatical errors as prepositions’ mistakes or missing verbscompared to VNMT. We assume these two remarks are reasonable because G is trained with imageswhich mainly have a representation of noun rather than verb, therefore can capture the meaning ofmaterials in sentence.Figure 6presents the translation results and the corresponding image which G translates more ac-curately than VNMT in METEOR. Figure 7presents the translation results and the correspondingimage which G translates less accurately than VNMT in METEOR. Again, we note that our modeldoes not use image during translating. In Figure 6, G translates ”a white and black dog” correctlywhile VNMT translates it incorrectly implying ”a white dog and a black dog”. We assume that Gcorrectly translates the source because G captures the meaning of material in the source. In Figure7, G incorrectly translates the source. Its translation result is missing the preposition meaning ”at”,which is hardly represented in image.We present more translation examples in appendix.Source a woman holding a white and black dog.Truth eine frau h ̈alt einen weiß-schwarzen hund.VNMT eine frau h ̈alt einen weißen und schwarzen hund.Our Model (G) eine frau h ̈alt einen weiß-schwarzen hund.Figure 6: Translation 18Under review as a conference paper at ICLR 2017Source a group of people running a marathon in the winter.Truth eine gruppe von menschen l ̈auft bei einem marathon im winter.VNMT eine gruppe von menschen l ̈auft bei einem marathon im winter.Our Model (G) eine gruppe leute l ̈auft einen marathon im winter an.Figure 7: Translation 25 C ONCLUSIONAs described herein, we proposed the neural machine translation model that explicitly has a latentvariable that includes underlying semantics extracted from both text and images. Our model outper-forms the baseline in both METEOR and BLEU scores. Experiments and analysis present that ourmodel can generate more accurate translation for short sentences. 
In qualitative analysis, we presentthat our model can translate nouns accurately while our model make grammatical errors.REFERENCESDzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointlylearning to align and translate. In ICLR , 2015.Lawrence W. Barsalou. Perceptual symbol Systems. Behavioral and Brain Sciences , 22:577–609,1999.Ozan Caglayan, Walid Aransa, Yaxing Wang, Marc Masana, Mercedes Garc ́ıa-Mart ́ınez, FethiBougares, Lo ̈ıc Barrault, and Joost van de Weijer. Does Multimodality Help Human and Ma-chine for Translation and Image Captioning? In WMT , 2016.Iacer Calixto, Desmond Elliott, and Stella Frank. DCU-UvA Multimodal MT System Report. InProceedings of the First Conference on Machine Translation , pp. 634–638. Association for Com-putational Linguistics, 2016.Michael Denkowski and Alon Lavie. Meteor Universal: Language Specific Translation Evaluationfor Any Target Language. In Proceedings of the EACL 2014 Workshop on Statistical MachineTranslation , 2014.D. Elliott, S. Frank, and E. Hasler. Multilingual Image Description with Neural Sequence Models.ArXiv e-prints , 2015.Desmond Elliott, Stella Frank, Khalil Sima’an, and Lucia Specia. Multi30K: Multilingual English-German Image Descriptions. CoRR , abs/1605.00459, 2016.Ross Girshick. Fast R-CNN. In ICCV , 2015.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for ImageRecognition. In CVPR , 2016.9Under review as a conference paper at ICLR 2017Julian Hitschler and Stefan Riezler. Multimodal Pivots for Image Caption Translation. arXiv preprintarXiv:1601.03916 , 2016.Po-Yao Huang, Frederick Liu, Sz-Rung Shiang, Jean Oh, and Chris Dyer. Attention-based Multi-modal Neural Machine Translation. In WMT , 2016.Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervisedLearning with Deep Generative Models. In NIPS , 2014.Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, NicolaBertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond ˇrej Bojar,Alexandra Constantin, and Evan Herbst. Moses: Open Source Toolkit for Statistical MachineTranslation. In ACL, 2007.Jindˇrich Libovick ́y, Jind ˇrich Helcl, Marek Tlust ́y, Ond ˇrej Bojar, and Pavel Pecina. CUNI System forWMT16 Automatic Post-Editing and Multimodal Translation Tasks. In Proceedings of the FirstConference on Machine Translation , pp. 646–654. Association for Computational Linguistics,2016.Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: A Method for AutomaticEvaluation of Machine Translation. In ACL, 2002.Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee.Generative Adversarial Text to Image Synthesis. In ICML , 2016.Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic Backpropagation and Approx-imate Inference in Deep Generative Models. In ICML , 2014.Sergio Rodr ́ıguez Guasch and Marta R. Costa-juss `a. WMT 2016 Multimodal Translation SystemDescription based on Bidirectional Recurrent Neural Networks with Double-Embeddings. InProceedings of the First Conference on Machine Translation , pp. 655–659. Association for Com-putational Linguistics, 2016.Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural Machine Translation of Rare Wordswith Subword Units. In ACL, 2016.Kashif Shah, Josiah Wang, and Lucia Specia. SHEF-Multimodal: Grounding Machine Transla-tion on Images. In Proceedings of the First Conference on Machine Translation , pp. 
660–665.Association for Computational Linguistics, 2016.Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale ImageRecognition. CoRR , abs/1409.1556, 2014.Lucia Specia, Stella Frank, Khalil Sima ʟan, and Desmond Elliott. A shared Task on MultimodalMachine Translation and Crosslingual Image Description. In Proceedings of the First Conferenceon Machine Translation, Berlin, Germany. Association for Computational Linguistics , 2016.Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to Sequence Learning with Neural Net-works. In NIPS , 2014.Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. Modeling Coverage for NeuralMachine Translation. In ACL, 2016.Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov,Richard S Zemel, and Yoshua Bengio. Show, Attend and Tell: Neural Image Caption Gener-ation with Visual Attention. In CVPR , 2015.Biao Zhang, Deyi Xiong, and Jinsong Su. Variational Neural Machine Translation. In EMNLP ,2016.10Under review as a conference paper at ICLR 2017A D ERIVATION OF LOWER BOUNDSThe lower bound of our model can be derived as follows:p(yjx) =∫p(y;zjx)dz=∫p(zjx)p(yjz;x)dzlogp(yjx) = log∫q(zjx;y;)p(zjx)p(yjz;x)q(zjx;y;)dz∫q(zjx;y;) logp(zjx)p(yjz;x)q(zjx;y;)dz=∫q(zjx;y;)(logp(zjx)q(zjx;y)+ log p(yjz;x))dz=DKL[q(zjx;y;)jjp(zjx)] +Eq(zjx;y;)[logp(yjz;x)]=LB C ONDITIONAL GRUConditional GRU is implemented in dl4mt .Caglayan et al. (2016 ) extends Conditional GRU tomake it capable of receiving image information as input. The first GRU computes intermediaterepresentation s′jass′j= (1o′j)⊙s′j+o′j⊙sj1 (11)s′j= tanh( W′E[yj1] +r′j⊙(U′sj1)) (12)r′j=(W′rE[yj1] +U′rsj1) (13)o′j=(W′oE[yj1] +U′osj1) (14)where E2Rdembdtsignifies the target word embedding, s′j2Rdhdenotes the hidden state,r′j2Rdhando′j2Rdhrespectively represent the reset and update gate activations. dtstands for thedimension of target; the unique number of target words. [W′; W′r; W′o]2Rdhdemb;[U′; U′r; U′o]2Rdhdhare the parameters to be learned.Context vector cjis obtained ascj= tanh0@Tf∑i=1ijhi1A (15)ij=exp(eij)∑Tfk=1exp(ekj)(16)eij=Uatttanh( Wcatthi+Watts′j) (17)where [Uatt; Wcatt; Watt]2Rdhdhare the parameters to be learned.The second GRU computes sjfrom s′j,cjandh′eassj= (1o′j)⊙sj+oj⊙s′j (18)sj= tanh( Wcj+rj⊙(Us′j) +Vh′e) (19)rj=(Wrcj+Urs′j+Vrh′e) (20)oj=(Wocj+Uos′j+Voh′e) (21)where sj2Rdhstands for the hidden state, rj2Rdhandoj2Rdhare the reset and updategate activations. [W; W r; Wo]2Rdhdh;[U; U r; Uo]2Rdhdh;[V; V r; Vo]2Rdhdzare the11Under review as a conference paper at ICLR 2017parameters to be learned. We introduce h′eobtained from a latent variable here so that a latentvariable can affect the representation sjthrough GRU units.Finally, the probability of yis computed asuj=Lutanh( E[yj1] +Lssj+Lxcj) (22)P(yjjyj1;sj;cj) = Softmax( uj) (23)where Lu2Rdtdemb,Ls2RdembdhandLc2Rdembdhare the parameters to be learned.C T RAINING DETAILC.1 H YPERPARAMETERSTable 2presents parameters that we use in the experiments.Table 2: Hyperparameters. The name is the variable name of dl4mt except for dimv anddimpic,which are the dimension of the latent variables and image embeddings. We set dim(number ofLSTM unit size) and dimword (dimensions of word embeddings) 256, batchsize 32,maxlen (maxoutput length) 50 and lr(learning rate) 1.0 for all models. 
decay-c is weights on L2 regularization.dimv dimpic decay-cNMT - 256 0.001VNMT 256 256 0.0005Our Model G 256 512 0.001G+O-A VG 256 256 0.0005G+O-RNN 256 256 0.0005G+O-TXT 256 256 0.0005We found that Multi30k dataset is easy to overfit. Figure 8and Figure 9present training cost and val-idation METEOR score graph of the two experimental settings of the NMT model. Table 3presentsthe hyperparameters which were used in the experiments. Large decay-c ans small batchsize give thebetter METEOR scores in the end. Training is stopped if there is no validation cost improvementsover the last 10 validations.0 10000 20000 30000 40000 50000 60000 70000iteration020406080100costTraining cost12Figure 8: NMT Training Cost0 10 20 30 40 50 60 70iteration (x 1000)0102030405060METEORValidation METEOR12 Figure 9: NMT Validation METEOR scoreTable 3: Hyperparameters using the experiments in the Figure 8and9dim dim word lr decay-c maxlen batchsie1256 256 1.0 0.0005 30 1282256 256 1.0 0.001 50 3212Under review as a conference paper at ICLR 2017Figure 10presents the English word length histogram of the Multi30k test dataset. Most sentencesin the Multi30k are less than 20 words. We assume that this is one of the reasons why Multi30k iseasy to overfit.0 5 10 15 20 25 30 35Source Sentence Word Length0100200300400500600NumberFigure 10: Word Length Histogram of the Multi30k Test DatasetC.2 COST GRAPHFigure 11and12present the training cost and validation cost graph of each models. Please note thatVNMT fine-tuned NMT, and other models fine-tuned VNMT.0 10000 20000 30000 40000 50000 60000 70000iteration020406080100costNMTcost(a) NMT0 5000 10000 15000 20000 25000 30000iteration020406080100klcostVNMTklcostcost (b) VNMT0 5000 10000 15000 20000 25000iteration020406080100klcostGklcostcost (c) G0 5000 10000 15000 20000 25000iteration020406080100klcostG+O-AVGklcostcost(d) G+O-A VG0 5000 10000 15000 20000 25000iteration020406080100klcostG+O-RNNklcostcost (e) G+O-RNN0 5000 10000 15000 20000 25000iteration020406080100klcostG+O-TXTklcostcost (f) G+O-TXTFigure 11: Training costC.3 T RANSLATION EXAMPLESWe present some selected translations from VNMT and our proposed model (G). 
As of translation3 to 5 our model give the better METEOR scores than VNMT and as of translation 6 to 8 VNMTgive the better METEOR scores than our models.13Under review as a conference paper at ICLR 20170 10 20 30 40 50 60 70iteration (x 1000) 01020304050607080costNMTcost(a) NMT0 5 10 15 20 25 30iteration (x 1000) 01020304050607080costVNMTcost (b) VNMT0 5 10 15 20 25iteration (x 1000) 01020304050607080costGcost (c) G0 5 10 15 20 25 30iteration (x 1000) 01020304050607080costG+O-AVGcost(d) G+O-A VG0 5 10 15 20 25 30iteration (x 1000) 01020304050607080costG+O-RNNcost (e) G+O-RNN0 5 10 15 20 25 30iteration (x 1000) 01020304050607080costG+O-TXTcost (f) G+O-TXTFigure 12: Validation costSource two boys inside a fence jump in the air while holding a basketball.Truth zwei jungen innerhalb eines zaunes springen in die luft und halten dabei einen basketball.VNMT zwei jungen in einem zaun springen in die luft, w ̈ahrend sie einen basketball h ̈alt.Our Model (G) zwei jungen in einem zaun springen in die luft und halten dabei einen basketball.Figure 13: Translation 314Under review as a conference paper at ICLR 2017Source a dog runs through the grass towards the camera.Truth ein hund rennt durch das gras auf die kamera zu.VNMT ein hund rennt durch das gras in die kamera.Our Model (G) ein hund rennt durch das gras auf die kamera zu.Figure 14: Translation 4Source a couple of men walking on a public city street.Truth einige m ̈anner gehen auf einer ̈offentlichen straße in der stadt.VNMT ein paar m ̈anner gehen auf einer ̈offentlichen stadtstraße.Our Model (G) ein paar m ̈anner gehen auf einer ̈offentlichen straße in der stadt.Figure 15: Translation 515Under review as a conference paper at ICLR 2017Source a bunch of police officers are standing outside a bus.Truth eine gruppe von polizisten steht vor einem bus.VNMT eine gruppe von polizisten steht vor einem bus.Our Model (G) mehrere polizisten stehen vor einem bus.Figure 16: Translation 6Source a man is walking down the sidewalk next to a street.Truth ein mann geht neben einer straße den gehweg entlang.VNMT ein mann geht neben einer straße den b ̈urgersteig entlang.Our Model (G) ein mann geht auf dem b ̈urgersteig an einer straße.Figure 17: Translation 716Under review as a conference paper at ICLR 2017Source a blond-haired woman wearing a blue shirt unwraps a hat.Truth eine blonde frau in einem blauen t-shirt packt eine m ̈utze aus.VNMT eine blonde frau in einem blauen t-shirt wirft einen hut.Our Model (G) eine blonde frau tr ̈agt ein blaues hemd und einen hut.Figure 18: Translation 817
SJNO3xMVg
B1G9tvcgx
ICLR.cc/2017/conference/-/paper412/official/review
{"title": "Promising research direction but not quite there", "rating": "3: Clear rejection", "review": "This paper proposes a multimodal neural machine translation that is based upon previous work using variational methods but attempts to ground semantics with images. Considering way to improve translation with visual information seems like a sensible thing to do when such data is available. \n\nAs pointed out by a previous reviewer, it is not actually correct to do model selection in the way it was done in the paper. This makes the gains reported by the authors very marginal. In addition, as the author's also said in their question response, it is not clear if the model is really learning to capture useful image semantics. As such, it is unfortunately hard to conclude that this paper contributes to the direction that originally motivated it.\n\n\n\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Neural Machine Translation with Latent Semantic of Image and Text
["Joji Toyama", "Masanori Misono", "Masahiro Suzuki", "Kotaro Nakayama", "Yutaka Matsuo"]
Although attention-based Neural Machine Translation have achieved great success, attention-mechanism cannot capture the entire meaning of the source sentence because the attention mechanism generates a target word depending heavily on the relevant parts of the source sentence. The report of earlier studies has introduced a latent variable to capture the entire meaning of sentence and achieved improvement on attention-based Neural Machine Translation. We follow this approach and we believe that the capturing meaning of sentence benefits from image information because human beings understand the meaning of language not only from textual information but also from perceptual information such as that gained from vision. As described herein, we propose a neural machine translation model that introduces a continuous latent variable containing an underlying semantic extracted from texts and images. Our model, which can be trained end-to-end, requires image information only when training. Experiments conducted with an English–German translation task show that our model outperforms over the baseline.
["neural machine translation", "latent semantic", "image", "entire meaning", "source sentence", "meaning", "image information", "model", "text"]
https://openreview.net/forum?id=B1G9tvcgx
https://openreview.net/pdf?id=B1G9tvcgx
https://openreview.net/forum?id=B1G9tvcgx&noteId=SJNO3xMVg
Under review as a conference paper at ICLR 2017NEURAL MACHINE TRANSLATION WITH LATENT SE-MANTIC OF IMAGE AND TEXTJoji Toyama, Masanori Misonoy, Masahiro Suzuki, Kotaro Nakayama & Yutaka MatsuoGraduate School of Engineering,yGraduate School of Information Science and TechnologyThe University of TokyoHongo, Tokyo, Japanftoyama,misono,masa,k-nakayama,matsuo g@weblab.t.u-tokyo.ac.jpABSTRACTAlthough attention-based Neural Machine Translation have achieved great suc-cess, attention-mechanism cannot capture the entire meaning of the source sen-tence because the attention mechanism generates a target word depending heavilyon the relevant parts of the source sentence. The report of earlier studies has in-troduced a latent variable to capture the entire meaning of sentence and achievedimprovement on attention-based Neural Machine Translation. We follow this ap-proach and we believe that the capturing meaning of sentence benefits from im-age information because human beings understand the meaning of language notonly from textual information but also from perceptual information such as thatgained from vision. As described herein, we propose a neural machine transla-tion model that introduces a continuous latent variable containing an underlyingsemantic extracted from texts and images. Our model, which can be trained end-to-end, requires image information only when training. Experiments conductedwith an English–German translation task show that our model outperforms overthe baseline.1 I NTRODUCTIONNeural machine translation (NMT) has achieved great success in recent years ( Sutskever et al. ,2014 ;Bahdanau et al. ,2015 ). In contrast to statistical machine translation, which requires huge phrase andrule tables, NMT requires much less memory. However, the most standard model, NMT with at-tention ( Bahdanau et al. ,2015 ) entails the shortcoming that the attention mechanism cannot capturethe entire meaning of a sentence because it generates a target word while depending heavily onthe relevant parts of the source sentence ( Tu et al. ,2016 ). To overcome this problem, VariationalNeural Machine Translation (VNMT), which outperforms NMT with attention introduces a latentvariable to capture the underlying semantic from source and target ( Zhang et al. ,2016 ). We followthe motivation of VNMT, which is to capture underlying semantic of a source.Image information is related to language. For example, we human beings understand the meaningof language by linking perceptual information given by the surrounding environment and language(Barsalou ,1999 ). Although it is natural and easy for humans, it is difficult for computers to un-derstand different domain’s information integrally. Solving this difficult task might, however, bringgreat improvements in natural language processing. Several researchers have attempted to link lan-guage and images such as image captioning by Xu et al. (2015 ) or image generation from sentencesbyReed et al. (2016 ). They described the possibility of integral understanding of images and text. Inmachine translation, we can expect an improvement using not only text information but also imageinformation because image information can bridge two languages.As described herein, we propose the neural machine translation model which introduces a latentvariable containing an underlying semantic extracted from texts and images. 
Our model includes anexplicit latent variable z, which has underlying semantics extracted from text and images by intro-ducing a Variational Autoencoder (V AE) ( Kingma et al. ,2014 ;Rezende et al. ,2014 ). Our model,First two authors contributed equally.1Under review as a conference paper at ICLR 2017h"h"h#h$h%h#h$h%h"&h"&h#&h$&h#&h$&'(#("($(%)#)")$h*&h*+log/#010"0#0$)#)")$h2&h2 Figure 1: Architecture of Proposed Model.Green dotted lines denote that and encoded yare used only when training.which can be trained end-to-end, requires image information only when training. As describedherein, we tackle the task with which one uses a parallel corpus and images in training, while usinga source corpus in translating. It is important to define the task in this manner because we rarelyhave a corresponding image when we want to translate a sentence. During translation, our modelgenerates a semantic variable zfrom a source, integrates variable zinto a decoder of neural machinetranslation system, and then finally generates the translation. The difference between our model andVNMT is that we use image information in addition to text information.For experiments, we used Multi30k ( Elliott et al. ,2016 ), which includes images and the correspond-ing parallel corpora of English and German. Our model outperforms the baseline with two evaluationmetrics: METEOR ( Denkowski & Lavie ,2014 ) and BLEU ( Papineni et al. ,2002 ). Moreover, weobtain some knowledge related to our model and Multi30k. Finally, we present some examples inwhich our model either improved, or worsened, the result.Our paper contributes to the neural machine translation research community in three ways.We present the first neural machine translation model to introduce a latent variable inferredfrom image and text information. We also present the first translation task with which oneuses a parallel corpus and images in training, while using a source corpus in translating.Our translation model can generate more accurate translation by training with images, es-pecially for short sentences.We present how the translation of source is changed by adding image information comparedto VNMT which does not use image information.2 B ACKGROUNDOur model is the extension of Variational Neural Machine Translation (VNMT) ( Zhang et al. ,2016 ).Our model is also viewed as one of the multimodal translation models. In our model, V AE is usedto introduce a latent variable. We describe the background of our model in this section.2.1 V ARIATIONAL NEURAL MACHINE TRANSLATIONThe VNMT translation model introduces a latent variable. This model’s architecture shown in Figure1excludes the arrow from . This model involves three parts: encoder, inferrer, and decoder. Inthe encoder, both the source and target are encoded by bidirectional-Recurrent Neural Networks(bidirectional-RNN) and a semantic representation is generated. In the inferrer, a latent variable zis2Under review as a conference paper at ICLR 2017modeled from a semantic representation by introducing V AE. In the decoder, a latent variable zisintegrated in the Gated Recurrent Unit (GRU) decoder; also, a translation is generated.Our model is followed by architecture, except that the image is also encoded to obtain a latentvariable z.2.2 M ULTIMODAL TRANSLATIONMultimodal Translation is the task with which one might one can use a parallel corpus and images.The first papers to study multimodal translation are Elliott et al. (2015 ) and Hitschler & Riezler(2016 ). 
It was selected as a shared task in Workshop of Machine Translation 2016 (WMT161). Al-though several studies have been conducted ( Caglayan et al. ,2016 ;Huang et al. ,2016 ;Calixto et al. ,2016 ;Libovick ́y et al. ,2016 ;Rodr ́ıguez Guasch & Costa-juss `a,2016 ;Shah et al. ,2016 ), they do notshow great improvement, especially in neural machine translation ( Specia et al. ,2016 ). Here, we in-troduce end-to-end neural network translation models like our model.Caglayan et al. (2016 ) integrate an image into an NMT decoder. They simply put source contextvectors and image feature vectors extracted from ResNet-50’s ‘res4f relu’ layer ( He et al. ,2016 )into the decoder called multimodal conditional GRU. They demonstrate that their method does notsurpass the text-only baseline: NMT with attention.Huang et al. (2016 ) integrate an image into a head of source words sequence. They extract prominentobjects from the image by Region-based Convolutional Neural Networks (R-CNN) ( Girshick ,2015 ).Objects are then converted to feature vectors by VGG-19 ( Simonyan & Zisserman ,2014 ) and areput into a head of source words sequence. They demonstrate that object extraction by R-CNNcontributes greatly to the improvement. This model achieved the highest METEOR score in NMT-based models in WMT16, which we compare to our model in the experiment. We designate thismodel as CMU.Caglayan et al. (2016 ) argue that their proposed model did not achieve improvement because theyfailed to benefit from both text and images. We assume that they failed to integrate text and imagesbecause they simply put images and text into neural machine translation despite huge gap existsbetween image information and text information. Our model, however, presents the possibility ofbenefitting from images and text because text and images are projected to their common semanticspace so that the gap of images and text would be filled.2.3 V ARIATIONAL AUTO ENCODERV AE was proposed in an earlier report of the literature Kingma et al. (2014 );Rezende et al. (2014 ).Given an observed variable x, V AE introduces a continuous latent variable z, with the assump-tion that xis generated from z. V AE incorporates p(xjz)andqφ(zjx)into an end-to-end neuralnetwork. The lower bound is shown below.LVAE =DKL[qφ(zjx)jjp(z)] +Eqφ(zjx)[logp(xjz)]logp(x) (1)3 N EURAL MACHINE TRANSLATION WITHLATENT SEMANTIC OFIMAGEANDTEXTWe propose a neural machine translation model which explicitly has a latent variable containing anunderlying semantic extracted from both text and image. This model can be seen as an extension ofVNMT by adding image information.Our model can be drawn as a graphical model in Figure 3. Its lower bound isL=DKL[qφ(zjx;y;)jjp(zjx)] +Eqφ(zjx;y;)[logp(yjz;x)]; (2)where x;y;;zrespectively denote the source, target, image and latent variable, and pandqφre-spectively denote the prior distribution and the approximate posterior distribution. It is noteworthy inEq. ( 2) that we want to model p(zjx;y;), which is intractable. Therefore we model qφ(zjx;y;)1http://www.statmt.org/wmt16/3Under review as a conference paper at ICLR 2017zx yzx yFigure 2: VNMTzx yzx yFigure 3: Our modelinstead, and also model prior p(zjx)so that we can generate a translation from the source in testing.Derivation of the formula is presented in the appendix.We model all distributions in Eq. ( 2) by neural networks. 
Our model architecture is divisible intothree parts: 1) encoder, 2) inferrer, and 3) decoder.3.1 E NCODERIn the encoder, the semantic representation heis obtained from the image, source, and target. Wepropose several methods to encode an image. We show how these methods affect the translationresult in the Experiment section. This representation is used in the inferrer. This section links to thegreen part of Figure 1.3.1.1 TEXT ENCODINGThe source and target are encoded in the same way as Bahdanau et al. (2015 ). The source is con-verted to a sequence of 1-of-k vector and is embedded to dembdimensions. We designate it as thesource sequence. Then, a source sequence is put into bidirectional RNN. Representation hiis ob-tained by concatenating ⃗hiand ⃗hi:⃗hi= RNN( ⃗hi1; Ewi);⃗hi= RNN( ⃗hi+1; Ewi);hi= [⃗hi;⃗hi],where Ewiis the embedded word in a source sentence, hi2Rdh, and ⃗hi;⃗hi2Rdh2. It is conductedthrough i= 0 toi=Tf, where Tfis the sequence length. GRU is implemented in bidirectionalRNN so that it can attain long-term dependence. Finally, we conduct mean-pooling to hiand obtainthe source representation vector as hf=1Tf∑Tfihi. The exact same process is applied to target toobtain target representation hg.3.1.2 IMAGE ENCODING AND SEMANTIC REPRESENTATIONWe use Convolutional Neural Networks (CNN) to extract feature vectors from images. We proposeseveral ways of extracting image features.Global (G) The image feature vector is extracted from the image using a CNN. With this method,we use a feature vector in the certain layer as . Then is encoded to the image represen-tation vector hsimply by affine transformation ash=W+bwhere W2Rddfc7; b2Rd: (3)Global and Objects (G+O) First we extract some prominent objects from images in some way.Then, we obtain fc7 image feature vectors from the original image and extracted objectsusing a CNN. Therefore takes a variable length. We handle in two ways: average andRNN encoder.In average ( G+O-A VG ), we first obtain intermediate image representation vector h′byaffine transformation in Eq. ( 3). Then, the average of h′becomes the image representationvector: h=∑lih′il, where lis the length of h′.4Under review as a conference paper at ICLR 2017In RNN encoder ( G+O-RNN ), we first obtain h′by affine transformation in Eq. ( 3). Then,we encode h′in the same way as we encode text in Section 3.1.1 to obtain h.Global and Objects into source and target (G+O-TXT) Thereby, we first obtain h′by affinetransformation in Eq. ( 3). Then, we put sequential vector h′into the head of the sourcesequence and target sequence. In this case, we set dto be the same dimension as demb. Infact, the source sequence including h′is only used to model qφ(zjx;y;). Context vectorc(Eq. ( 15)) and p(zjx)are computed by a source sequence that does not include h′. Weencode the source sequence including h′as Section 3.1.1 to obtain hfandhg. In this case,his not obtained. Image information is contained in hfandhg.All representation vectors hf,hgandhare concatenated to obtain a semantic representation vectorashe= [hf;hg;h], where he2Rde=2dh+d(in G+O-TXT: he= [hf;hg], where he2Rde=2dh). It is an input of the multimodal variational neural inferrer.3.2 I NFERRERWe model the posterior qφ(zjx;y;)using a neural network and also the prior p(zjx)by neuralnetwork. This section links to the black and grey part of Figure 1.3.2.1 N EURAL POSTERIOR APPROXIMATORModeling the true posterior p(zjx;y;)is usually intractable. Therefore, we consider model-ing of an approximate posterior qφ(zjx;y;)by introducing V AE. 
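A small sketch of the G+O-AVG variant described above, under assumed shapes (the number of R-CNN object regions and all dimensions are placeholders): every fc7 feature, i.e. the full image plus each extracted object, is projected by the same affine map of Eq. (3) and the projections are averaged to give the image representation.

import numpy as np

d_pi, d_fc7 = 256, 4096
rng = np.random.default_rng(1)
W, b = rng.normal(size=(d_pi, d_fc7)) * 0.01, np.zeros(d_pi)

def encode_image_G_O_AVG(fc7_feats, W, b):
    # fc7_feats: (1 + n_objects, d_fc7), the full image followed by the extracted object regions
    projected = fc7_feats @ W.T + b       # affine projection of every feature (Eq. 3)
    return projected.mean(axis=0)         # the average becomes the image representation h_pi

fc7_feats = rng.normal(size=(1 + 4, d_fc7))   # full image plus four hypothetical R-CNN regions
print(encode_image_G_O_AVG(fc7_feats, W, b).shape)    # (256,)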
We assume that the posteriorqφ(zjx;y;)has the following form:qφ(zjx;y;) =N(z;(x;y;);(x;y;)2I): (4)The mean and standard deviation of the approximate posterior are the outputs of neural net-works.Starting from the variational neural encoder, a semantic representation vector heis projected tolatent semantic space ashz=g(W(1)zhe+b(1)z); (5)where W(1)z2Rdz(de)b(1)z2Rdz.g()is an element-wise activation function, which we set astanh( ). Gaussian parameters of Eq. ( 4) are obtained through linear regression as=Whz+b;log2=Whz+b; (6)where ;log22Rdz.3.2.2 N EURAL PRIOR MODELWe model the prior distribution p(zjx)as follows:p(zjx) = N(z;′(x);′(x)2I):(7)′and′are generated in the same way as that presented in Section 3.2.1 , except for the absenceofyandas inputs. Because of the absence of representation vectors, the dimensions of weight inequation ( 5) for prior model are W′(1)z2Rdzdh;b′(1)z2Rdz. We use a reparameterization trickto obtain a representation of latent variable z:h′z=+ε,ε N (0; I). During translation, h′zisset as the mean of p(zjx). Then, h′zis projected onto the target space ash′e=g(W(2)zh′z+b(2)z)where h′e2Rde: (8)h′eis then integrated into the neural machine translation’s decoder.5Under review as a conference paper at ICLR 20173.3 D ECODERThis section links to the orange part of Figure 1. Given the source sentence xand the latent variablez, decoder defines the probability over translation yasp(yjz;x) =T∏j=1p(yjjy<j;z;x): (9)How we define the probability over translation yis fundamentally the same as VNMT, except forusing conditional GRU instead of GRU. Conditional GRU involves two GRUs and an attentionmechanism. We integrate a latent variable zinto the second GRU. We describe it in the appendix.3.4 M ODEL TRAININGMonte Carlo sampling method is used to approximate the expectation over the posterior Eq. ( 2),Eqφ(zjx;y;)1L∑Ll=1logp(yjx;h(l)z), where Lis the number of samplings. The training objec-tive is defined asL(; φ) =DKL[qφ(zjx;y;)jjp(zjx)] +1LL∑l=1T∑j=1logp(yjjy<j;x;h(l)z); (10)where hz=+ε,ε N (0; I). The first term, KL divergence, can be computed analyticallyand is differentiable because both distributions are Gaussian. The second term is also differentiable.We set Las 1. Overall, the objective Lis differentiable. Therefore, we can optimize the parameterand variational parameter φusing gradient ascent techniques.4 E XPERIMENTS4.1 E XPERIMENTAL SETUPWe used Multi30k ( Elliott et al. ,2016 ) as the dataset. Multi30k have an English description and aGerman description for each corresponding image. We handle 29,000 pairs as training data, 1,014pairs as validation data, and 1,000 pairs as test data.Before training, punctuation normalization and lowercase are applied to both English and Germansentences by Moses ( Koehn et al. ,2007 ) scripts2. Compound-word splitting is conducted only toGerman sentences using Sennrich et al. (2016 )3. Then we tokenize sentences2and use them astraining data. We produce vocabulary dictionaries from training data. The vocabulary becomes10,211 words for English and 13,180 words for German after compound-word splitting.Image features are extracted using VGG-19 CNN ( Simonyan & Zisserman ,2014 ). We use 4096-dimensional fc7 features. To extract the object’s region, we use Fast R-CNN ( Girshick ,2015 ). FastR-CNN is trained on ImageNet and MSCOCO dataset4.All weights are initialized by N(0;0:01I). We use the adadelta algorithm as an optimization method.The hyperparameters used in the experiment are presented in the Appendix. 
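The decoder section above notes that the latent variable enters the second GRU of the conditional GRU; below is a rough numpy sketch of one such step, in which the projected latent h'_e contributes an extra term to both gates and to the candidate state, following the conditional GRU equations in the appendix of the paper text, Eqs. (18)-(21). Shapes, weights, and inputs are illustrative placeholders, not the dl4mt implementation.

import numpy as np

dh, de = 256, 768                      # hidden size and dimension of the projected latent h'_e (illustrative)
rng = np.random.default_rng(2)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

def second_gru_step(c_j, s_prime_j, h_e_prime, P):
    # one step of the second GRU in the conditional GRU decoder, with h'_e entering
    # the reset gate, the update gate, and the candidate state
    r = sigmoid(P['Wr'] @ c_j + P['Ur'] @ s_prime_j + P['Vr'] @ h_e_prime)
    o = sigmoid(P['Wo'] @ c_j + P['Uo'] @ s_prime_j + P['Vo'] @ h_e_prime)
    s_tilde = np.tanh(P['W'] @ c_j + r * (P['U'] @ s_prime_j) + P['V'] @ h_e_prime)
    return (1.0 - o) * s_tilde + o * s_prime_j        # new hidden state s_j

P = {k: rng.normal(size=(dh, dh)) * 0.01 for k in ['W', 'Wr', 'Wo', 'U', 'Ur', 'Uo']}
P.update({k: rng.normal(size=(dh, de)) * 0.01 for k in ['V', 'Vr', 'Vo']})
c_j, s_prime_j, h_e_prime = rng.normal(size=dh), rng.normal(size=dh), rng.normal(size=de)
print(second_gru_step(c_j, s_prime_j, h_e_prime, P).shape)    # (256,)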
All models are trainedwith early stopping. When training, VNMT is fine-tuned by NMT model and our models are fine-tuned using VNMT. When translating, we use beam-search. The beam-size is set as 12. Beforeevaluation, we restore split words to the original state and de-tokenize2generated sentences.We implemented proposed models based on dl4mt5. Actually, dl4mt is fundamentally the samemodel as Bahdanau et al. (2015 ), except that its decoder employs conditional GRU6. We imple-mented VNMT also with conditional GRU so small difference exists between our implementation2https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/ fnormalize-punctuation, low-ercase, tokenizer, detokenizer g.perl3https://github.com/rsennrich/subword-nmt4https://github.com/rbgirshick/fast-rcnn/tree/coco5https://github.com/nyu-dl/dl4mt-tutorial6The architecture is described at https://github.com/nyu-dl/dl4mt-tutorial/blob/master/docs/cgru.pdf6Under review as a conference paper at ICLR 2017and originally proposed VNMT which employs normal GRU as a decoder. We evaluated resultsbased on METEOR and BLUE using MultEval7.4.2 R ESULTTable 1presents experiment results. It shows that our models outperforms the baseline in bothMETEOR and BLEU. Figure 4shows the plot of METEOR score of baselines and our modelsmodels in validation. Figure 5shows the plot of METEOR score and the source sentence length.Table 1: Evaluation Result on Multi30k dataset (English–German). The scores in parentheses arecomputed with ‘-norm’ parameter. NMT is dl4mt ’s NMT (in the session3 directory). The score ofthe CMU is from ( Huang et al. ,2016 ).METEOR " BLEU "val test val testNMT 51.5 (55.8) 50.5 (54.9) 35.8 33.1VNMT 52.2 (56.3) 51.1 (55.3) 37.0 34.9CMU - (-) - (54.1) - -Our Model G 50.6 (54.8) 52.4 (56.0) 34.5 36.5G+O-A VG 51.8 (55.8) 51.8 (55.8) 35.7 35.8G+O-RNN 51.8 (56.1) 51.0 (55.4) 35.9 34.9G+O-TXT 52.6 (56.8) 51.7 (56.0) 36.6 35.14.3 Q UANTITATIVE ANALYSISTable 1shows that G scores the best in proposed models. In G, we simply put the feature of theoriginal image. Actually, proposed model does not benefit from R-CNN, presumably because wecan not handle sequences of image features very well. For example, G+O-A VG uses the average ofmultiple image features, but it only makes the original image information unnecessarily confusing.Figure 4shows that G and G+O-A VG outperforms VNMT almost every time, but all model scoresincrease suddenly in the 17,000 iteration validation. We have no explanation for this behavior.Figure 4also shows that G and G+O-A VG scores fluctuate more moderately than others. We statethat G and G+O-A VG gain stability by adding image information. When one observes the differencebetween the test score and the validation score for each model, baseline scores decrease more thanproposed model scores. Especially, the G score increases in the test, simply because proposedmodels produce a better METEOR score on average, as shown in Figure 4.Figure 5shows that G and G+O-A VG make more improvements on baselines in short sentencesthan in long sentences, presumably because qφ(zjx;y;)can model zwell when a sentence isshort. Image features always have the same dimension, but underlying semantics of the image andtext differ. We infer that when the sentence is short, image feature representation can afford toapproximate the underlying semantic, but when a sentence is long, image feature representation cannot approximate the underlying semantic.Multi30k easily becomes overfitted, as shown in Figure 8and9in the appendix. 
This is presumablybecause 1) Multi30k is the descriptions of image, making the sentences short and simple, and 2)Multi30k has 29,000 sentences, which could be insufficient. In the appendix, we show how theparameter setting affects the score. One can see that decay-c has a strong effect. Huang et al.(2016 ) states that their proposed model outperforms the baseline (NMT), but we do not have thatobservation. It can be assumed that their baseline parameters are not well tuned.4.4 Q UALITATIVE ANALYSISWe presented the top 30 sentences, which make the largest METEOR score difference between Gand VNMT, to native speakers of German and get the overall comments. They were not informed of7https://github.com/jhclark/multeval, we use meteor1.5 instead of meteor1.4, which is the default ofMultEval .7Under review as a conference paper at ICLR 20170 5 10 15 20 25 30iteration (x 1000)4045505560METEORValidation METEOR ScorenmtvnmtGG+O-AVGG+O-RNNG+O-TXTFigure 4: METEOR score to the validationdata which are calculated for each 1000 itera-tions.10 15 20 25 30Source Sentence Word Length3540455055METEORTest METEOR score w.r.t. source word lengthnmtvnmtGG+O-AVGG+O-RNNG+O-TXTFigure 5: METEOR score on different groupsof the source sentence length.our model training with image in addition to text. These comments are summarized into two generalremarks. One is that G translates the meaning of the source material more accurately than VNMT.The other is that our model has more grammatical errors as prepositions’ mistakes or missing verbscompared to VNMT. We assume these two remarks are reasonable because G is trained with imageswhich mainly have a representation of noun rather than verb, therefore can capture the meaning ofmaterials in sentence.Figure 6presents the translation results and the corresponding image which G translates more ac-curately than VNMT in METEOR. Figure 7presents the translation results and the correspondingimage which G translates less accurately than VNMT in METEOR. Again, we note that our modeldoes not use image during translating. In Figure 6, G translates ”a white and black dog” correctlywhile VNMT translates it incorrectly implying ”a white dog and a black dog”. We assume that Gcorrectly translates the source because G captures the meaning of material in the source. In Figure7, G incorrectly translates the source. Its translation result is missing the preposition meaning ”at”,which is hardly represented in image.We present more translation examples in appendix.Source a woman holding a white and black dog.Truth eine frau h ̈alt einen weiß-schwarzen hund.VNMT eine frau h ̈alt einen weißen und schwarzen hund.Our Model (G) eine frau h ̈alt einen weiß-schwarzen hund.Figure 6: Translation 18Under review as a conference paper at ICLR 2017Source a group of people running a marathon in the winter.Truth eine gruppe von menschen l ̈auft bei einem marathon im winter.VNMT eine gruppe von menschen l ̈auft bei einem marathon im winter.Our Model (G) eine gruppe leute l ̈auft einen marathon im winter an.Figure 7: Translation 25 C ONCLUSIONAs described herein, we proposed the neural machine translation model that explicitly has a latentvariable that includes underlying semantics extracted from both text and images. Our model outper-forms the baseline in both METEOR and BLEU scores. Experiments and analysis present that ourmodel can generate more accurate translation for short sentences. 
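The qualitative analysis above starts from the 30 test sentences with the largest sentence-level METEOR gap between G and VNMT. Given per-sentence scores from any METEOR implementation, the selection step itself is straightforward; the helper below is an illustrative sketch with hypothetical variable names, not part of the paper's tooling.

```python
def top_k_by_meteor_gap(sentence_ids, meteor_g, meteor_vnmt, k=30):
    """Return the ids of the k sentences where model G beats VNMT by the
    largest sentence-level METEOR margin (inputs are parallel lists)."""
    ranked = sorted(zip(sentence_ids, meteor_g, meteor_vnmt),
                    key=lambda t: t[1] - t[2], reverse=True)
    return [sid for sid, _, _ in ranked[:k]]

# Example: top_k_by_meteor_gap(range(1000), scores_g, scores_vnmt) would give the
# sentences shown to the native German speakers, assuming per-sentence scores exist.
```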
In qualitative analysis, we presentthat our model can translate nouns accurately while our model make grammatical errors.REFERENCESDzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointlylearning to align and translate. In ICLR , 2015.Lawrence W. Barsalou. Perceptual symbol Systems. Behavioral and Brain Sciences , 22:577–609,1999.Ozan Caglayan, Walid Aransa, Yaxing Wang, Marc Masana, Mercedes Garc ́ıa-Mart ́ınez, FethiBougares, Lo ̈ıc Barrault, and Joost van de Weijer. Does Multimodality Help Human and Ma-chine for Translation and Image Captioning? In WMT , 2016.Iacer Calixto, Desmond Elliott, and Stella Frank. DCU-UvA Multimodal MT System Report. InProceedings of the First Conference on Machine Translation , pp. 634–638. Association for Com-putational Linguistics, 2016.Michael Denkowski and Alon Lavie. Meteor Universal: Language Specific Translation Evaluationfor Any Target Language. In Proceedings of the EACL 2014 Workshop on Statistical MachineTranslation , 2014.D. Elliott, S. Frank, and E. Hasler. Multilingual Image Description with Neural Sequence Models.ArXiv e-prints , 2015.Desmond Elliott, Stella Frank, Khalil Sima’an, and Lucia Specia. Multi30K: Multilingual English-German Image Descriptions. CoRR , abs/1605.00459, 2016.Ross Girshick. Fast R-CNN. In ICCV , 2015.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for ImageRecognition. In CVPR , 2016.9Under review as a conference paper at ICLR 2017Julian Hitschler and Stefan Riezler. Multimodal Pivots for Image Caption Translation. arXiv preprintarXiv:1601.03916 , 2016.Po-Yao Huang, Frederick Liu, Sz-Rung Shiang, Jean Oh, and Chris Dyer. Attention-based Multi-modal Neural Machine Translation. In WMT , 2016.Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervisedLearning with Deep Generative Models. In NIPS , 2014.Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, NicolaBertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond ˇrej Bojar,Alexandra Constantin, and Evan Herbst. Moses: Open Source Toolkit for Statistical MachineTranslation. In ACL, 2007.Jindˇrich Libovick ́y, Jind ˇrich Helcl, Marek Tlust ́y, Ond ˇrej Bojar, and Pavel Pecina. CUNI System forWMT16 Automatic Post-Editing and Multimodal Translation Tasks. In Proceedings of the FirstConference on Machine Translation , pp. 646–654. Association for Computational Linguistics,2016.Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: A Method for AutomaticEvaluation of Machine Translation. In ACL, 2002.Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee.Generative Adversarial Text to Image Synthesis. In ICML , 2016.Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic Backpropagation and Approx-imate Inference in Deep Generative Models. In ICML , 2014.Sergio Rodr ́ıguez Guasch and Marta R. Costa-juss `a. WMT 2016 Multimodal Translation SystemDescription based on Bidirectional Recurrent Neural Networks with Double-Embeddings. InProceedings of the First Conference on Machine Translation , pp. 655–659. Association for Com-putational Linguistics, 2016.Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural Machine Translation of Rare Wordswith Subword Units. In ACL, 2016.Kashif Shah, Josiah Wang, and Lucia Specia. SHEF-Multimodal: Grounding Machine Transla-tion on Images. In Proceedings of the First Conference on Machine Translation , pp. 
660–665.Association for Computational Linguistics, 2016.Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale ImageRecognition. CoRR , abs/1409.1556, 2014.Lucia Specia, Stella Frank, Khalil Sima ʟan, and Desmond Elliott. A shared Task on MultimodalMachine Translation and Crosslingual Image Description. In Proceedings of the First Conferenceon Machine Translation, Berlin, Germany. Association for Computational Linguistics , 2016.Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to Sequence Learning with Neural Net-works. In NIPS , 2014.Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. Modeling Coverage for NeuralMachine Translation. In ACL, 2016.Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov,Richard S Zemel, and Yoshua Bengio. Show, Attend and Tell: Neural Image Caption Gener-ation with Visual Attention. In CVPR , 2015.Biao Zhang, Deyi Xiong, and Jinsong Su. Variational Neural Machine Translation. In EMNLP ,2016.10Under review as a conference paper at ICLR 2017A D ERIVATION OF LOWER BOUNDSThe lower bound of our model can be derived as follows:p(yjx) =∫p(y;zjx)dz=∫p(zjx)p(yjz;x)dzlogp(yjx) = log∫q(zjx;y;)p(zjx)p(yjz;x)q(zjx;y;)dz∫q(zjx;y;) logp(zjx)p(yjz;x)q(zjx;y;)dz=∫q(zjx;y;)(logp(zjx)q(zjx;y)+ log p(yjz;x))dz=DKL[q(zjx;y;)jjp(zjx)] +Eq(zjx;y;)[logp(yjz;x)]=LB C ONDITIONAL GRUConditional GRU is implemented in dl4mt .Caglayan et al. (2016 ) extends Conditional GRU tomake it capable of receiving image information as input. The first GRU computes intermediaterepresentation s′jass′j= (1o′j)⊙s′j+o′j⊙sj1 (11)s′j= tanh( W′E[yj1] +r′j⊙(U′sj1)) (12)r′j=(W′rE[yj1] +U′rsj1) (13)o′j=(W′oE[yj1] +U′osj1) (14)where E2Rdembdtsignifies the target word embedding, s′j2Rdhdenotes the hidden state,r′j2Rdhando′j2Rdhrespectively represent the reset and update gate activations. dtstands for thedimension of target; the unique number of target words. [W′; W′r; W′o]2Rdhdemb;[U′; U′r; U′o]2Rdhdhare the parameters to be learned.Context vector cjis obtained ascj= tanh0@Tf∑i=1ijhi1A (15)ij=exp(eij)∑Tfk=1exp(ekj)(16)eij=Uatttanh( Wcatthi+Watts′j) (17)where [Uatt; Wcatt; Watt]2Rdhdhare the parameters to be learned.The second GRU computes sjfrom s′j,cjandh′eassj= (1o′j)⊙sj+oj⊙s′j (18)sj= tanh( Wcj+rj⊙(Us′j) +Vh′e) (19)rj=(Wrcj+Urs′j+Vrh′e) (20)oj=(Wocj+Uos′j+Voh′e) (21)where sj2Rdhstands for the hidden state, rj2Rdhandoj2Rdhare the reset and updategate activations. [W; W r; Wo]2Rdhdh;[U; U r; Uo]2Rdhdh;[V; V r; Vo]2Rdhdzare the11Under review as a conference paper at ICLR 2017parameters to be learned. We introduce h′eobtained from a latent variable here so that a latentvariable can affect the representation sjthrough GRU units.Finally, the probability of yis computed asuj=Lutanh( E[yj1] +Lssj+Lxcj) (22)P(yjjyj1;sj;cj) = Softmax( uj) (23)where Lu2Rdtdemb,Ls2RdembdhandLc2Rdembdhare the parameters to be learned.C T RAINING DETAILC.1 H YPERPARAMETERSTable 2presents parameters that we use in the experiments.Table 2: Hyperparameters. The name is the variable name of dl4mt except for dimv anddimpic,which are the dimension of the latent variables and image embeddings. We set dim(number ofLSTM unit size) and dimword (dimensions of word embeddings) 256, batchsize 32,maxlen (maxoutput length) 50 and lr(learning rate) 1.0 for all models. 
decay-c is weights on L2 regularization.dimv dimpic decay-cNMT - 256 0.001VNMT 256 256 0.0005Our Model G 256 512 0.001G+O-A VG 256 256 0.0005G+O-RNN 256 256 0.0005G+O-TXT 256 256 0.0005We found that Multi30k dataset is easy to overfit. Figure 8and Figure 9present training cost and val-idation METEOR score graph of the two experimental settings of the NMT model. Table 3presentsthe hyperparameters which were used in the experiments. Large decay-c ans small batchsize give thebetter METEOR scores in the end. Training is stopped if there is no validation cost improvementsover the last 10 validations.0 10000 20000 30000 40000 50000 60000 70000iteration020406080100costTraining cost12Figure 8: NMT Training Cost0 10 20 30 40 50 60 70iteration (x 1000)0102030405060METEORValidation METEOR12 Figure 9: NMT Validation METEOR scoreTable 3: Hyperparameters using the experiments in the Figure 8and9dim dim word lr decay-c maxlen batchsie1256 256 1.0 0.0005 30 1282256 256 1.0 0.001 50 3212Under review as a conference paper at ICLR 2017Figure 10presents the English word length histogram of the Multi30k test dataset. Most sentencesin the Multi30k are less than 20 words. We assume that this is one of the reasons why Multi30k iseasy to overfit.0 5 10 15 20 25 30 35Source Sentence Word Length0100200300400500600NumberFigure 10: Word Length Histogram of the Multi30k Test DatasetC.2 COST GRAPHFigure 11and12present the training cost and validation cost graph of each models. Please note thatVNMT fine-tuned NMT, and other models fine-tuned VNMT.0 10000 20000 30000 40000 50000 60000 70000iteration020406080100costNMTcost(a) NMT0 5000 10000 15000 20000 25000 30000iteration020406080100klcostVNMTklcostcost (b) VNMT0 5000 10000 15000 20000 25000iteration020406080100klcostGklcostcost (c) G0 5000 10000 15000 20000 25000iteration020406080100klcostG+O-AVGklcostcost(d) G+O-A VG0 5000 10000 15000 20000 25000iteration020406080100klcostG+O-RNNklcostcost (e) G+O-RNN0 5000 10000 15000 20000 25000iteration020406080100klcostG+O-TXTklcostcost (f) G+O-TXTFigure 11: Training costC.3 T RANSLATION EXAMPLESWe present some selected translations from VNMT and our proposed model (G). 
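Before the example translations, one detail from Appendix B is worth making concrete: where the latent variable enters the decoder. The sketch below re-implements the second GRU of the conditional GRU as we read Eqs. 18-21, with the projected latent h'_e added to the candidate state and to both gates. It is an illustrative PyTorch re-implementation under that reading, not the dl4mt code; dimension and attribute names are our own.

```python
import torch
import torch.nn as nn

class LatentConditionalGRUStep(nn.Module):
    """Second GRU of the conditional GRU (Appendix B): the candidate state and
    both gates mix the context vector c_j, the intermediate state s'_j from the
    first GRU, and the projected latent h'_e (Eqs. 18-21)."""
    def __init__(self, d_h, d_latent):
        super().__init__()
        lin = lambda d_in: nn.Linear(d_in, d_h, bias=False)
        self.W, self.Wr, self.Wo = lin(d_h), lin(d_h), lin(d_h)                  # act on c_j
        self.U, self.Ur, self.Uo = lin(d_h), lin(d_h), lin(d_h)                  # act on s'_j
        self.V, self.Vr, self.Vo = lin(d_latent), lin(d_latent), lin(d_latent)   # act on h'_e

    def forward(self, c_j, s_intermediate, h_e_latent):
        r = torch.sigmoid(self.Wr(c_j) + self.Ur(s_intermediate) + self.Vr(h_e_latent))
        o = torch.sigmoid(self.Wo(c_j) + self.Uo(s_intermediate) + self.Vo(h_e_latent))
        s_tilde = torch.tanh(self.W(c_j) + r * self.U(s_intermediate) + self.V(h_e_latent))
        return (1.0 - o) * s_tilde + o * s_intermediate   # new decoder state s_j
```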
For translations 3 to 5, our model gives better METEOR scores than VNMT; for translations 6 to 8, VNMT gives better METEOR scores than our model.

[Figure 12: Validation cost. Panels: (a) NMT, (b) VNMT, (c) G, (d) G+O-AVG, (e) G+O-RNN, (f) G+O-TXT; validation cost plotted against iteration (x 1000).]

Source: two boys inside a fence jump in the air while holding a basketball.
Truth: zwei jungen innerhalb eines zaunes springen in die luft und halten dabei einen basketball.
VNMT: zwei jungen in einem zaun springen in die luft, während sie einen basketball hält.
Our Model (G): zwei jungen in einem zaun springen in die luft und halten dabei einen basketball.
Figure 13: Translation 3

Source: a dog runs through the grass towards the camera.
Truth: ein hund rennt durch das gras auf die kamera zu.
VNMT: ein hund rennt durch das gras in die kamera.
Our Model (G): ein hund rennt durch das gras auf die kamera zu.
Figure 14: Translation 4

Source: a couple of men walking on a public city street.
Truth: einige männer gehen auf einer öffentlichen straße in der stadt.
VNMT: ein paar männer gehen auf einer öffentlichen stadtstraße.
Our Model (G): ein paar männer gehen auf einer öffentlichen straße in der stadt.
Figure 15: Translation 5

Source: a bunch of police officers are standing outside a bus.
Truth: eine gruppe von polizisten steht vor einem bus.
VNMT: eine gruppe von polizisten steht vor einem bus.
Our Model (G): mehrere polizisten stehen vor einem bus.
Figure 16: Translation 6

Source: a man is walking down the sidewalk next to a street.
Truth: ein mann geht neben einer straße den gehweg entlang.
VNMT: ein mann geht neben einer straße den bürgersteig entlang.
Our Model (G): ein mann geht auf dem bürgersteig an einer straße.
Figure 17: Translation 7

Source: a blond-haired woman wearing a blue shirt unwraps a hat.
Truth: eine blonde frau in einem blauen t-shirt packt eine mütze aus.
VNMT: eine blonde frau in einem blauen t-shirt wirft einen hut.
Our Model (G): eine blonde frau trägt ein blaues hemd und einen hut.
Figure 18: Translation 8
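As a quick way to reproduce the kind of sentence-level comparison behind Figures 13 to 18, the snippet below scores one of the printed examples with smoothed sentence-level BLEU from NLTK. This is only a rough sanity check of our own, not the paper's evaluation: the reported numbers come from METEOR via MultEval, and whitespace tokenization here is a simplification.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Translation 4 from the appendix, copied verbatim from the paper's examples.
reference = "ein hund rennt durch das gras auf die kamera zu."
outputs = {
    "VNMT": "ein hund rennt durch das gras in die kamera.",
    "Our Model (G)": "ein hund rennt durch das gras auf die kamera zu.",
}

smooth = SmoothingFunction().method1   # avoids zero scores on short sentences
ref_tokens = reference.split()         # naive whitespace tokenization
for system, hypothesis in outputs.items():
    score = sentence_bleu([ref_tokens], hypothesis.split(), smoothing_function=smooth)
    print(f"{system}: sentence BLEU = {score:.3f}")
```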
BJ2wTaWNx
B1G9tvcgx
ICLR.cc/2017/conference/-/paper412/official/review
{"title": "Unclear motivation & unconvincing results", "rating": "3: Clear rejection", "review": "I have problems understanding the motivation of this paper. The authors claimed to have captured a latent representation of text and image during training and can translate better without images at test time, but didn't demonstrate convincingly that images help (not to mention the setup is a bit strange when there are no images at test time). What I see are only speculative comments: \"we observed some gains, so these should come from our image models\". The qualitative analysis doesn't convince me that the models have learned latent representations; I am guessing the gains are due to less overfitting because of the participation of images during training. \n\nThe dataset is too small to experiment with NMT. I'm not sure if it's fair to compare their models with NMT and VNMT given the following description in Section 4.1 \"VNMT is fine-tuned by NMT and our models are fine-tuned with VNMT\". There should be more explanation on this.\n\nBesides, I have problems with the presentation of this paper.\n(a) There are many symbols being used unnecessary. For example: f & g are used for x (source) and y (target) in Section 3.1. \n(b) The ' symbol is not being used in a consistent manner, making it sometimes hard to follow the paper. For example, in section 3.1.2, there are references about h'_\\pi obtained from Eq. (3) which is about h_\\pi (yes, I understand what the authors mean, but there can be better ways to present that).\n(c) I'm not sure if it's correct in Section 3.2.2 h'_z is computed from \\mu and \\sigma. So how \\mu' and \\sigma' are being used ?\n(d) G+O-AVG should be something like G+O_{AVG}. The minus sign makes it looks like there's an ablation test there. Similarly for other symbols.\n\nOther things: no explanations for Figure 2 & 3. There's a missing \\pi symbol in Appendix A before the KL derivation.\n", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Neural Machine Translation with Latent Semantic of Image and Text
["Joji Toyama", "Masanori Misono", "Masahiro Suzuki", "Kotaro Nakayama", "Yutaka Matsuo"]
Although attention-based Neural Machine Translation has achieved great success, the attention mechanism cannot capture the entire meaning of the source sentence because it generates a target word depending heavily on the relevant parts of the source sentence. Earlier studies have introduced a latent variable to capture the entire meaning of the sentence and achieved improvements on attention-based Neural Machine Translation. We follow this approach, and we believe that capturing the meaning of a sentence benefits from image information because human beings understand the meaning of language not only from textual information but also from perceptual information such as that gained from vision. As described herein, we propose a neural machine translation model that introduces a continuous latent variable containing an underlying semantic extracted from texts and images. Our model, which can be trained end-to-end, requires image information only when training. Experiments conducted with an English–German translation task show that our model outperforms the baseline.
["neural machine translation", "latent semantic", "image", "entire meaning", "source sentence", "meaning", "image information", "model", "text"]
https://openreview.net/forum?id=B1G9tvcgx
https://openreview.net/pdf?id=B1G9tvcgx
https://openreview.net/forum?id=B1G9tvcgx&noteId=BJ2wTaWNx
Under review as a conference paper at ICLR 2017NEURAL MACHINE TRANSLATION WITH LATENT SE-MANTIC OF IMAGE AND TEXTJoji Toyama, Masanori Misonoy, Masahiro Suzuki, Kotaro Nakayama & Yutaka MatsuoGraduate School of Engineering,yGraduate School of Information Science and TechnologyThe University of TokyoHongo, Tokyo, Japanftoyama,misono,masa,k-nakayama,matsuo g@weblab.t.u-tokyo.ac.jpABSTRACTAlthough attention-based Neural Machine Translation have achieved great suc-cess, attention-mechanism cannot capture the entire meaning of the source sen-tence because the attention mechanism generates a target word depending heavilyon the relevant parts of the source sentence. The report of earlier studies has in-troduced a latent variable to capture the entire meaning of sentence and achievedimprovement on attention-based Neural Machine Translation. We follow this ap-proach and we believe that the capturing meaning of sentence benefits from im-age information because human beings understand the meaning of language notonly from textual information but also from perceptual information such as thatgained from vision. As described herein, we propose a neural machine transla-tion model that introduces a continuous latent variable containing an underlyingsemantic extracted from texts and images. Our model, which can be trained end-to-end, requires image information only when training. Experiments conductedwith an English–German translation task show that our model outperforms overthe baseline.1 I NTRODUCTIONNeural machine translation (NMT) has achieved great success in recent years ( Sutskever et al. ,2014 ;Bahdanau et al. ,2015 ). In contrast to statistical machine translation, which requires huge phrase andrule tables, NMT requires much less memory. However, the most standard model, NMT with at-tention ( Bahdanau et al. ,2015 ) entails the shortcoming that the attention mechanism cannot capturethe entire meaning of a sentence because it generates a target word while depending heavily onthe relevant parts of the source sentence ( Tu et al. ,2016 ). To overcome this problem, VariationalNeural Machine Translation (VNMT), which outperforms NMT with attention introduces a latentvariable to capture the underlying semantic from source and target ( Zhang et al. ,2016 ). We followthe motivation of VNMT, which is to capture underlying semantic of a source.Image information is related to language. For example, we human beings understand the meaningof language by linking perceptual information given by the surrounding environment and language(Barsalou ,1999 ). Although it is natural and easy for humans, it is difficult for computers to un-derstand different domain’s information integrally. Solving this difficult task might, however, bringgreat improvements in natural language processing. Several researchers have attempted to link lan-guage and images such as image captioning by Xu et al. (2015 ) or image generation from sentencesbyReed et al. (2016 ). They described the possibility of integral understanding of images and text. Inmachine translation, we can expect an improvement using not only text information but also imageinformation because image information can bridge two languages.As described herein, we propose the neural machine translation model which introduces a latentvariable containing an underlying semantic extracted from texts and images. 
Our model includes anexplicit latent variable z, which has underlying semantics extracted from text and images by intro-ducing a Variational Autoencoder (V AE) ( Kingma et al. ,2014 ;Rezende et al. ,2014 ). Our model,First two authors contributed equally.1Under review as a conference paper at ICLR 2017h"h"h#h$h%h#h$h%h"&h"&h#&h$&h#&h$&'(#("($(%)#)")$h*&h*+log/#010"0#0$)#)")$h2&h2 Figure 1: Architecture of Proposed Model.Green dotted lines denote that and encoded yare used only when training.which can be trained end-to-end, requires image information only when training. As describedherein, we tackle the task with which one uses a parallel corpus and images in training, while usinga source corpus in translating. It is important to define the task in this manner because we rarelyhave a corresponding image when we want to translate a sentence. During translation, our modelgenerates a semantic variable zfrom a source, integrates variable zinto a decoder of neural machinetranslation system, and then finally generates the translation. The difference between our model andVNMT is that we use image information in addition to text information.For experiments, we used Multi30k ( Elliott et al. ,2016 ), which includes images and the correspond-ing parallel corpora of English and German. Our model outperforms the baseline with two evaluationmetrics: METEOR ( Denkowski & Lavie ,2014 ) and BLEU ( Papineni et al. ,2002 ). Moreover, weobtain some knowledge related to our model and Multi30k. Finally, we present some examples inwhich our model either improved, or worsened, the result.Our paper contributes to the neural machine translation research community in three ways.We present the first neural machine translation model to introduce a latent variable inferredfrom image and text information. We also present the first translation task with which oneuses a parallel corpus and images in training, while using a source corpus in translating.Our translation model can generate more accurate translation by training with images, es-pecially for short sentences.We present how the translation of source is changed by adding image information comparedto VNMT which does not use image information.2 B ACKGROUNDOur model is the extension of Variational Neural Machine Translation (VNMT) ( Zhang et al. ,2016 ).Our model is also viewed as one of the multimodal translation models. In our model, V AE is usedto introduce a latent variable. We describe the background of our model in this section.2.1 V ARIATIONAL NEURAL MACHINE TRANSLATIONThe VNMT translation model introduces a latent variable. This model’s architecture shown in Figure1excludes the arrow from . This model involves three parts: encoder, inferrer, and decoder. Inthe encoder, both the source and target are encoded by bidirectional-Recurrent Neural Networks(bidirectional-RNN) and a semantic representation is generated. In the inferrer, a latent variable zis2Under review as a conference paper at ICLR 2017modeled from a semantic representation by introducing V AE. In the decoder, a latent variable zisintegrated in the Gated Recurrent Unit (GRU) decoder; also, a translation is generated.Our model is followed by architecture, except that the image is also encoded to obtain a latentvariable z.2.2 M ULTIMODAL TRANSLATIONMultimodal Translation is the task with which one might one can use a parallel corpus and images.The first papers to study multimodal translation are Elliott et al. (2015 ) and Hitschler & Riezler(2016 ). 
It was selected as a shared task in Workshop of Machine Translation 2016 (WMT161). Al-though several studies have been conducted ( Caglayan et al. ,2016 ;Huang et al. ,2016 ;Calixto et al. ,2016 ;Libovick ́y et al. ,2016 ;Rodr ́ıguez Guasch & Costa-juss `a,2016 ;Shah et al. ,2016 ), they do notshow great improvement, especially in neural machine translation ( Specia et al. ,2016 ). Here, we in-troduce end-to-end neural network translation models like our model.Caglayan et al. (2016 ) integrate an image into an NMT decoder. They simply put source contextvectors and image feature vectors extracted from ResNet-50’s ‘res4f relu’ layer ( He et al. ,2016 )into the decoder called multimodal conditional GRU. They demonstrate that their method does notsurpass the text-only baseline: NMT with attention.Huang et al. (2016 ) integrate an image into a head of source words sequence. They extract prominentobjects from the image by Region-based Convolutional Neural Networks (R-CNN) ( Girshick ,2015 ).Objects are then converted to feature vectors by VGG-19 ( Simonyan & Zisserman ,2014 ) and areput into a head of source words sequence. They demonstrate that object extraction by R-CNNcontributes greatly to the improvement. This model achieved the highest METEOR score in NMT-based models in WMT16, which we compare to our model in the experiment. We designate thismodel as CMU.Caglayan et al. (2016 ) argue that their proposed model did not achieve improvement because theyfailed to benefit from both text and images. We assume that they failed to integrate text and imagesbecause they simply put images and text into neural machine translation despite huge gap existsbetween image information and text information. Our model, however, presents the possibility ofbenefitting from images and text because text and images are projected to their common semanticspace so that the gap of images and text would be filled.2.3 V ARIATIONAL AUTO ENCODERV AE was proposed in an earlier report of the literature Kingma et al. (2014 );Rezende et al. (2014 ).Given an observed variable x, V AE introduces a continuous latent variable z, with the assump-tion that xis generated from z. V AE incorporates p(xjz)andqφ(zjx)into an end-to-end neuralnetwork. The lower bound is shown below.LVAE =DKL[qφ(zjx)jjp(z)] +Eqφ(zjx)[logp(xjz)]logp(x) (1)3 N EURAL MACHINE TRANSLATION WITHLATENT SEMANTIC OFIMAGEANDTEXTWe propose a neural machine translation model which explicitly has a latent variable containing anunderlying semantic extracted from both text and image. This model can be seen as an extension ofVNMT by adding image information.Our model can be drawn as a graphical model in Figure 3. Its lower bound isL=DKL[qφ(zjx;y;)jjp(zjx)] +Eqφ(zjx;y;)[logp(yjz;x)]; (2)where x;y;;zrespectively denote the source, target, image and latent variable, and pandqφre-spectively denote the prior distribution and the approximate posterior distribution. It is noteworthy inEq. ( 2) that we want to model p(zjx;y;), which is intractable. Therefore we model qφ(zjx;y;)1http://www.statmt.org/wmt16/3Under review as a conference paper at ICLR 2017zx yzx yFigure 2: VNMTzx yzx yFigure 3: Our modelinstead, and also model prior p(zjx)so that we can generate a translation from the source in testing.Derivation of the formula is presented in the appendix.We model all distributions in Eq. ( 2) by neural networks. 
r1hN6IU4l
SkYbF1slg
ICLR.cc/2017/conference/-/paper549/official/review
{"title": "", "rating": "8: Top 50% of accepted papers, clear accept", "review": "This paper presents an information theoretic framework for unsupervised learning. The framework relies on infomax principle, whose goal is to maximize the mutual information between input and output. The authors propose a two-step algorithm for learning in this setting. First, by leveraging an asymptotic approximation to the mutual information, the global objective is decoupled into two subgoals whose solutions can be expressed in closed form. Next, these serve as the initial guess for the global solution, and are refined by the gradient descent algorithm.\n\nWhile the story of the paper and the derivations seem sound, the clarity and presentation of the material could improve. For example, instead of listing step by step derivation of each equation, it would be nice to first give a high-level presentation of the result and maybe explain briefly the derivation strategy. The very detailed aspects of derivations, which could obscure the underlying message of the result could perhaps be postponed to later sections or even moved to an appendix.\n\nA few questions that the authors may want to clarify:\n1. Page 4, last paragraph: \"from above we know that maximizing I(X;R) will result in maximizing I(Y;R) and I(X,Y^U)\". While I see the former holds due to equality in 2.20, the latter is related via a bound in 2.21. Due to the possible gap between I(X;R) and I(X,Y^U), can your claim that maximizing of the former indeed maximizes the latter be true?\n2. Paragraph above section 2.2.2: it is stated that, dropout used to prevent overfitting may in fact be regarded as an attempt to reduce the rank of the weight matrix. No further tip is provided why this should be the case. Could you elaborate on that?\n3. At the end of page 9: \"we will discuss how to get optimal solution of C for two specific cases\". If I understand correctly, you actually are not guaranteed to get the optimal solution of C in either case, and the best you can guarantee is reaching a local optimum. This is due to the nonconvexity of the constraint 2.80 (quadratic equality). If optimality cannot be guaranteed, please correct the wording accordingly.\n", "confidence": "2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper"}
review
2017
ICLR.cc/2017/conference
An Information-Theoretic Framework for Fast and Robust Unsupervised Learning via Neural Population Infomax
["Wentao Huang", "Kechen Zhang"]
A framework is presented for unsupervised learning of representations based on infomax principle for large-scale neural populations. We use an asymptotic approximation to the Shannon's mutual information for a large neural population to demonstrate that a good initial approximation to the global information-theoretic optimum can be obtained by a hierarchical infomax method. Starting from the initial solution, an efficient algorithm based on gradient descent of the final objective function is proposed to learn representations from the input datasets, and the method works for complete, overcomplete, and undercomplete bases. As confirmed by numerical experiments, our method is robust and highly efficient for extracting salient features from input datasets. Compared with the main existing methods, our algorithm has a distinct advantage in both the training speed and the robustness of unsupervised representation learning. Furthermore, the proposed method is easily extended to the supervised or unsupervised model for training deep structure networks.
["Unsupervised Learning", "Theory", "Deep learning"]
https://openreview.net/forum?id=SkYbF1slg
https://openreview.net/pdf?id=SkYbF1slg
https://openreview.net/forum?id=SkYbF1slg&noteId=r1hN6IU4l
Published as a conference paper at ICLR 2017

AN INFORMATION-THEORETIC FRAMEWORK FOR FAST AND ROBUST UNSUPERVISED LEARNING VIA NEURAL POPULATION INFOMAX

Wentao Huang & Kechen Zhang
Department of Biomedical Engineering
Johns Hopkins University School of Medicine
Baltimore, MD 21205, USA
{whuang21,kzhang4}@jhmi.edu

ABSTRACT

A framework is presented for unsupervised learning of representations based on infomax principle for large-scale neural populations. We use an asymptotic approximation to the Shannon's mutual information for a large neural population to demonstrate that a good initial approximation to the global information-theoretic optimum can be obtained by a hierarchical infomax method. Starting from the initial solution, an efficient algorithm based on gradient descent of the final objective function is proposed to learn representations from the input datasets, and the method works for complete, overcomplete, and undercomplete bases. As confirmed by numerical experiments, our method is robust and highly efficient for extracting salient features from input datasets. Compared with the main existing methods, our algorithm has a distinct advantage in both the training speed and the robustness of unsupervised representation learning. Furthermore, the proposed method is easily extended to the supervised or unsupervised model for training deep structure networks.

1 INTRODUCTION

How to discover the unknown structures in data is a key task for machine learning. Learning good representations from observed data is important because a clearer description may help reveal the underlying structures. Representation learning has drawn considerable attention in recent years (Bengio et al., 2013). One category of algorithms for unsupervised learning of representations is based on probabilistic models (Lewicki & Sejnowski, 2000; Hinton & Salakhutdinov, 2006; Lee et al., 2008), such as maximum likelihood (ML) estimation, maximum a posteriori (MAP) probability estimation, and related methods. Another category of algorithms is based on reconstruction error or generative criterion (Olshausen & Field, 1996; Aharon et al., 2006; Vincent et al., 2010; Mairal et al., 2010; Goodfellow et al., 2014), and the objective functions usually involve squared errors with additional constraints. Sometimes the reconstruction error or generative criterion may also have a probabilistic interpretation (Olshausen & Field, 1997; Vincent et al., 2010).

Shannon's information theory is a powerful tool for description of stochastic systems and could be utilized to provide a characterization for good representations (Vincent et al., 2010). However, computational difficulties associated with Shannon's mutual information (MI) (Shannon, 1948) have hindered its wider applications. The Monte Carlo (MC) sampling (Yarrow et al., 2012) is a convergent method for estimating MI with arbitrary accuracy, but its computational inefficiency makes it unsuitable for difficult optimization problems especially in the cases of high-dimensional input stimuli and large population networks. Bell and Sejnowski (Bell & Sejnowski, 1995; 1997) have directly applied the infomax approach (Linsker, 1988) to independent component analysis (ICA) of data with independent non-Gaussian components assuming additive noise, but their method requires that the number of outputs be equal to the number of inputs.
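The computational burden of the Monte Carlo route mentioned above can be made concrete with a toy example. The sketch below is our own illustration, not the authors' code and not the estimator of Yarrow et al. (2012): it estimates I(X;R) for a one-dimensional Gaussian channel by brute-force sampling, where the nested averaging needed to approximate p(r) is exactly what becomes prohibitive for high-dimensional stimuli and large populations.

```python
import numpy as np

rng = np.random.default_rng(0)
var_x, var_z = 1.0, 0.25            # signal and noise variances of the toy channel
M = 2000                            # number of Monte Carlo samples

x = rng.normal(0.0, np.sqrt(var_x), M)
r = x + rng.normal(0.0, np.sqrt(var_z), M)   # channel R = X + Z

def log_p_r_given_x(r_val, x_val):
    # Gaussian conditional density of the channel, log p(r|x)
    return -0.5 * np.log(2 * np.pi * var_z) - (r_val - x_val) ** 2 / (2 * var_z)

# p(r) has no closed form in general, so it is itself approximated by an inner
# Monte Carlo average over fresh stimulus samples -- the costly nested step.
x_inner = rng.normal(0.0, np.sqrt(var_x), M)
log_p_r = np.array([np.log(np.mean(np.exp(log_p_r_given_x(ri, x_inner)))) for ri in r])

mi_mc = np.mean(log_p_r_given_x(r, x) - log_p_r)   # I(X;R) = E[log p(r|x) - log p(r)]
mi_exact = 0.5 * np.log(1.0 + var_x / var_z)       # known answer for this Gaussian toy case
print(f"MC estimate: {mi_mc:.3f} nats, exact: {mi_exact:.3f} nats")
```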
The extensions of ICA to overcomplete orundercomplete bases incur increased algorithm complexity and difficulty in learning of parameters(Lewicki & Sejnowski, 2000; Kreutz-Delgado et al., 2003; Karklin & Simoncelli, 2011).1Published as a conference paper at ICLR 2017Since Shannon MI is closely related to ML and MAP (Huang & Zhang, 2017), the algorithms ofrepresentation learning based on probabilistic models should be amenable to information-theoretictreatment. Representation learning based on reconstruction error could be accommodated also byinformation theory, because the inverse of Fisher information (FI) is the Cram ́er-Rao lower boundon the mean square decoding error of any unbiased decoder (Rao, 1945). Hence minimizing thereconstruction error potentially maximizes a lower bound on the MI (Vincent et al., 2010).Related problems arise also in neuroscience. It has long been suggested that the real nervous sys-tems might approach an information-theoretic optimum for neural coding and computation (Barlow,1961; Atick, 1992; Borst & Theunissen, 1999). However, in the cerebral cortex, the number of neu-rons is huge, with about 105neurons under a square millimeter of cortical surface (Carlo & Stevens,2013). It has often been computationally intractable to precisely characterize information codingand processing in large neural populations.To address all these issues, we present a framework for unsupervised learning of representationsin a large-scale nonlinear feedforward model based on infomax principle with realistic biologicalconstraints such as neuron models with Poisson spikes. First we adopt an objective function basedon an asymptotic formula in the large population limit for the MI between the stimuli and the neuralpopulation responses (Huang & Zhang, 2017). Since the objective function is usually nonconvex,choosing a good initial value is very important for its optimization. Starting from an initial value, weuse a hierarchical infomax approach to quickly find a tentative global optimal solution for each layerby analytic methods. Finally, a fast convergence learning rule is used for optimizing the final objec-tive function based on the tentative optimal solution. Our algorithm is robust and can learn complete,overcomplete or undercomplete basis vectors quickly from different datasets. Experimental resultsshowed that the convergence rate of our method was significantly faster than other existing methods,often by an order of magnitude. More importantly, the number of output units processed by ourmethod can be very large, much larger than the number of inputs. As far as we know, no existingmodel can easily deal with this situation.2 M ETHODS2.1 A PPROXIMATION OF MUTUAL INFORMATION FOR NEURAL POPULATIONSSuppose the input xis aK-dimensional vector, x= (x1;;xK)T, the outputs of Nneurons aredenoted by a vector, r= (r1;;rN)T, where we assume Nis large, generally NK. Wedenote random variables by upper case letters, e.g., random variables XandR, in contrast to theirvector values xandr. The MI between XandRis defined by I(X;R) =Dlnp(xjr)p(x)Er;x, wherehir;xdenotes the expectation with respect to the probability density function (PDF) p(r;x).Our goal is to maxmize MI I(X;R)by finding the optimal PDF p(rjx)under some constraintconditions, assuming that p(rjx)is characterized by a noise model and activation functions f(x;n)with parameters nfor then-th neuron (n= 1;;N). In other words, we optimize p(rjx)bysolving for the optimal parameters n. 
Unfortunately, it is intractable in most cases to solve for theoptimal parameters that maximizes I(X;R). However, if p(x)andp(rjx)are twice continuouslydifferentiable for almost every x2RK, then for large Nwe can use an asymptotic formula toapproximate the true value of I(X;R)with high accuracy (Huang & Zhang, 2017):I(X;R)'IG=12lndetG(x)2ex+H(X), (1)where det ()denotes the matrix determinant and H(X) =hlnp(x)ixis the stimulus entropy,G(x) =J(x) +P(x), (2)J(x) =@2lnp(rjx)@x@xTrjx, (3)P(x) =@2lnp(x)@x@xT. (4)Assuming independent noises in neuronal responses, we have p(rjx) =QNn=1p(rnjx;n),and the Fisher information matrix becomes J(x)NPK1k=1kS(x;k), where S(x;k) =2Published as a conference paper at ICLR 2017D@lnp(rjx;k)@x@lnp(rjx;k)@xTErjxandk>0(k= 1;;K1) is the population density of param-eterk, withPK1k=1k= 1, and 1K1N(see Appendix A.1 for details). Since the cerebralcortex usually forms functional column structures and each column is composed of neurons with thesame properties (Hubel & Wiesel, 1962), the positive integer K1can be regarded as the number ofdistinct classes in the neural population.Therefore, given the activation function f(x;k), our goal becomes to find the optimal popula-tion distribution density kof parameter vector kso that the MI between the stimulus xand theresponse ris maximized. By Eq. (1), our optimization problem can be stated as follows:minimizeQG[fkg] =12hln (det ( G(x)))ix, (5)subject toK1Xk=1k= 1,k>0,8k= 1;;K1. (6)SinceQG[fkg]is a convex function of fkg(Huang & Zhang, 2017), we can readily find theoptimal solution for small Kby efficient numerical methods. For large K, however, finding anoptimal solution by numerical methods becomes intractable. In the following we will propose analternative approach to this problem. Instead of directly solving for the density distribution fkg, weoptimize the parameters fkgandfkgsimultaneously under a hierarchical infomax framework.2.2 H IERARCHICAL INFOMAXFor clarity, we consider neuron model with Poisson spikes although our method is easily applicableto other noise models. The activation function f(x;n)is generally a nonlinear function, such assigmoid and rectified linear unit (ReLU) (Nair & Hinton, 2010). We assume that the nonlinearfunction for the n-th neuron has the following form: f(x;n) =~f(yn;~n), whereyn=wTnx. (7)withwnbeing aK-dimensional weights vector, ~f(yn;~n)is a nonlinear function, n= (wTn;~Tn)Tand~nare the parameter vectors ( n= 1;;N).In general, it is very difficult to find the optimal parameters, n,n= 1;;N, for the followingreasons. First, the number of output neurons Nis very large, usually NK. Second, the activationfunctionf(x;n)is a nonlinear function, which usually leads to a nonconvex optimization problem.For nonconvex optimization problems, the selection of initial values often has a great influence onthe final optimization results. Our approach meets these challenges by making better use of the largenumber of neurons and by finding good initial values by a hierarchical infomax method.We divide the nonlinear transformation into two stages, mapping first from xtoyn(n= 1;;N),and then from ynto~f(yn;~n), whereyncan be regarded as the membrane potential of the n-thneuron, and ~f(yn;~n)as its firing rate. As with the real neurons, we assume that the membranepotential is corrupted by noise:Yn=Yn+Zn, (8)whereZn N0,2is a normal distribution with mean 0and variance 2. Then the meanmembrane potential of the k-th class subpopulation with Nk=Nkneurons is given byYk=1NkNkXn=1Ykn=Yk+Zk,k= 1;;K1, (9)ZkN(0; N1k2). 
(10)Define vectors y= (y1;;yN)T, y= (y1;;yK1)Tandy= (y1;;yK1)T, whereyk=wTkx(k= 1;;K1). Notice that yn(n= 1;;N) is also divided into K1classes, the sameas forrn. If we assume f(x;k) = ~f(yk;~k), i.e. assuming an additive Gaussian noise for yn(see Eq. 9), then the random variables X,Y,Y,YandRform a Markov chain, denoted byX!Y!Y!Y!R(see Figure 1), and we have the following proposition (see AppendixA.2).3Published as a conference paper at ICLR 2017X Y R Y YW X Y + Z( T1/N k f( )Yxxxy-y-y-ym1(ymNk(ymNkK1rmNkymi(rN1 yN1(yN(rN yNyni(yn1(yn1yniymiym1yN1ri yiy1yi(y1(rnirn1rmirm1r11kKk1Figure 1: A neural network interpretaton for random variables X,Y,Y,Y,R.Proposition 1. With the random variables X,Y,Y,Y,Rand Markov chain X!Y!Y!Y!R, the following equations hold,I(X;R) =I(Y;R)I(Y;R)I(Y;R), (11)I(X;R)I(X;Y) =I(X;Y)I(X;Y), (12)and for large Nk(k= 1;;K1),I(Y;R)'I(Y;R)'I(Y;R) =I(X;R), (13)I(X;Y)'I(X;Y) =I(X;Y). (14)A major advantage of incorporating membrane noise is that it facilitates finding the optimal solutionby using the infomax principle. Moreover, the optimal solution obtained this way is more robust;that is, it discourages overfitting and has a strong ability to resist distortion. With vanishing noise2!0, we have Yk!Yk,~f(yk;~k)'~f(yk;~k) =f(x;k), so that Eqs. (13) and (14) hold asin the case of large Nk.To optimize MI I(Y;R), the probability distribution of random variable Y,p(y), needs to be de-termined, i.e. maximizing I(Y;R)aboutp(y)under some constraints should yield an optimaldistribution: p(y) = arg max p(y)I(Y;R). LetC= maxp(y)I(Y;R)be the channel capacity ofneural population coding, and we always have I(X;R)C (Huang & Zhang, 2017). To find asuitable linear transformation from XtoYthat is compatible with this distribution p(y), a reason-able choice is to maximize I(X;Y) (I(X;Y)), where Yis a noise-corrupted version of Y. Thisimplies minimum information loss in the first transformation step. However, there may exist manytransformations from XtoYthat maximize I(X;Y)(see Appendix A.3.1). Ideally, if we can finda transformation that maximizes both I(X;Y)andI(Y;R)simultaneously, then I(X;R)reachesits maximum value: I(X;R) = maxp(y)I(Y;R) =C.From the discussion above we see that maximizing I(X;R)can be divided into two steps,namely, maximizing I(X;Y)and maximizing I(Y;R). The optimal solutions of maxI(X;Y)andmaxI(Y;R)will provide a good initial approximation that tend to be very close to the optimalsolution of maxI(X;R).Similarly, we can extend this method to multilayer neural population networks. For example, a two-layer network with outputs R(1)andR(2)form a Markov chain, X!~R(1)!R(1)!R(1)!4Published as a conference paper at ICLR 2017R(2), where random variable ~R(1)is similar to Y, random variable R(1)is similar to Y, and R(1)is similar to Yin the above. Then we can show that the optimal solution of maxI(X;R(2))canbe approximated by the solutions of maxI(X;R(1))andmaxI(~R(1);R(2)), withI(~R(1);R(2))'I(R(1);R(2)).More generally, consider a highly nonlinear feedforward neural network that maps the input xtooutput z, with z=F(x;) =hLh1(x), wherehl(l= 1;;L) is a linear or nonlinearfunction (Montufar et al., 2014). We aim to find the optimal parameter by maximizing I(X;Z). 
Itis usually difficult to solve the optimization problem when there are many local extrema for F(x;).However, if each function hlis easy to optimize, then we can use the hierarchical infomax methoddescribed above to get a good initial approximation to its global optimization solution, and go fromthere to find the final optimal solution. This information-theoretic consideration from the neuralpopulation coding point of view may help explain why deep structure networks with unsupervisedpre-training have a powerful ability for learning representations.2.3 T HEOBJECTIVE FUNCTIONThe optimization processes for maximizing I(X;Y)and maximizing I(Y;R)are discussed in detailin Appendix A.3. First, by maximizing I(X;Y)(see Appendix A.3.1 for details), we can get theoptimal weight parameter wk(k= 1;;K1, see Eq. 7) and its population density k(see Eq. 6)which satisfyW= [w1;;wK1] =aU01=20C, (15)1==K1=K11, (16)wherea=qK1K10,C= [c1;;cK1]2RK0K1,CCT=IK0,IK0is aK0K0identitymatrix with integer K02[1;K], the diagonal matrix 02RK0K0and matrix U02RKK0aregiven in (A.44) and (A.45), with K0given by Eq. (A.52). Matrices 0andU0can be obtainedbyandUwithUT0U0=IK0andU00UT0UUTxxTx(see Eq. A.23). Theoptimal weight parameter wk(15) means that the input variable xmust first undergo a whitening-like transformation ^ x=1=20UT0x, and then goes through the transformation y=aCT^ x, withmatrix Cto be optimized below. Note that weight matrix Wsatisfies rank(W) = min(K0;K1),which is a low rank matrix, and its low dimensionality helps reduce overfitting during training (seeAppendix A.3.1).By maximizing I(Y;R)(see Appendix A.3.2), we further solve the the optimal parameters ~kforthe nonlinear functions ~f(yk;~k),k= 1;;K1. Finally, the objective function for our optimiza-tion problem (Eqs. 5 and 6) turns into (see Appendix A.3.3 for details):minimizeQ[C] =12DlndetC^CTE^ x, (17)subject to CCT=IK0, (18)where ^= diag(^y1)2;;(^yK1)2,(^yk) =a1j@gk(^yk)=@^ykj(k= 1;;K1),gk(^yk) =2q~f(^yk;~k),^yk=a1yk=cTk^ x, and^ x=1=20UT0x. We apply the gradient descent method tooptimize the objective function, with the gradient of Q[C]given by:dQ[C]dC=C^CT1C^+^ x!T^ x, (19)where!= (!1;;!K1)T,!k=(^yk)0(^yk)cTkC^CT1ck,k= 1;;K1.WhenK0=K1(orK0> K 1), the objective function Q[C]can be reduced to a simpler form,and its gradient is also easy to compute (see Appendix A.4.1). However, when K0< K 1, it iscomputationally expensive to update Cby applying the gradient of Q[C]directly, since it requiresmatrix inversion for every ^ x. We use another objective function ^Q[C](see Eq. A.118) which is anapproximation to Q[C], but its gradient is easier to compute (see Appendix A.4.2). The function5Published as a conference paper at ICLR 2017^Q[C]is the approximation of Q[C], ideally they have the same optimal solution for the parameterC.Usually, for optimizing the objective in Eq. 17, the orthogonality constraint (Eq. 18) is unnecessary.However, this orthogonality constraint can accelerate the convergence rate if we employ it for theinitial iteration to update C(see Appendix A.5).3 E XPERIMENTAL RESULTSWe have applied our methods to the natural images from Olshausen’s image dataset (Olshausen &Field, 1996) and the images of handwritten digits from MNIST dataset (LeCun et al., 1998) usingMatlab 2016a on a computer with 12 Intel CPU cores (2.4 GHz). The gray level of each raw imagewas normalized to the range of 0to1.Mimage patches with size ww=Kfor training wererandomly sampled from the images. 
We used the Poisson neuron model with a modified sigmoidaltuning function ~f(y;~) =14(1+exp(yb))2, withg(y) = 2q~f(y;~) =11+exp(yb), where~= (;b)T. We obtained the initial values (see Appendix A.3.2): b0= 0and01:81qK1K10.For our experiments, we set = 0:50for iteration epoch t= 1;;t0and=0fort=t0+ 1;;tmax, wheret0= 50 .Firstly, we tested the case of K=K0=K1= 144 and randomly sampled M= 105image patcheswith size 1212from the Olshausen’s natural images, assuming that N= 106neurons were dividedintoK1= 144 classes and= 1(see Eq. A.52 in Appendix). The input patches were preprocessedby the ZCA whitening filters (see Eq. A.68). To test our algorithms, we chose the batch size to beequal to the number of training samples M, although we could also choose a smaller batch size. Weupdated the matrix Cfrom a random start, and set parameters tmax= 300 ,v1= 0:4, and= 0:8for all experiments.In this case, the optimal solution Clooked similar to the optimal solution of IICA (Bell & Sejnowski,1997). We also compared with the fast ICA algorithm (FICA) (Hyv ̈arinen, 1999), which is fasterthan IICA. We also tested the restricted Boltzmann machine (RBM) (Hinton et al., 2006) for aunsupervised learning of representations, and found that it could not easily learn Gabor-like filtersfrom Olshausen’s image dataset as trained by contrastive divergence. However, an improved methodby adding a sparsity constraint on the output units, e.g., sparse RBM (SRBM) (Lee et al., 2008) orsparse autoencoder (Hinton, 2010), could attain Gabor-like filters from this dataset. Similar resultswith Gabor-like filters were also reproduced by the denoising autoencoders (Vincent et al., 2010),which method requires a careful choice of parameters, such as noise level, learning rate, and batchsize.In order to compare our methods, i.e. Algorithm 1 (Alg.1, see Appendix A.4.1) and Algorithm2 (Alg.2, see Appendix A.4.2), with other methods, i.e. IICA, FICA and SRBM, we implementedthese algorithms using the same initial weights and the same training data set (i.e. 105image patchespreprocessed by the ZCA whitening filters). To get a good result by IICA, we must carefully selectthe parameters; we set the batch size as 50, the initial learning rate as 0:01, and final learning rateas0:0001 , with an exponential decay with the epoch of iterations. IICA tends to have a fasterconvergence rate for a bigger batch size but it may become harder to escape local minima. ForFICA, we chose the nonlinearity function f(u) = log cosh( u)as contrast function, and for SRBM,we set the sparseness control constant pas0:01and0:03. The number of epoches for iterations wasset to 300for all algorithms. Figure 2 shows the filters learned by our methods and other methods.Each filter in Figure 2(a) corresponds to a column vector of matrix C(see Eq. A.69), where eachvector for display is normalized by ck ck=max(jc1;kj;;jcK;kj),k= 1;;K1. The resultsin Figures 2(a), 2(b) and 2(c) look very similar to one another, and slightly different from the resultsin Figure 2(d) and 2(e). There are no Gabor-like filters in Figure 2(f), which corresponds to SRBMwithp= 0:03.Figure 3 shows how the coefficient entropy (CFE) (see Eq. A.122) and the conditional entropy(CDE) (see Eq. A.125) varied with training time. We calculated CFE and CDE by sampling onceevery 10epoches from a total of 300epoches. These results show that our algorithms had a fastconvergence rate towards stable solutions while having CFE and CDE values similar to the algorithmof IICA, which converged much more slowly. 
Here the values of CFE and CDE should be as small6Published as a conference paper at ICLR 2017(a) (b) (c)(d) (e) (f)Figure 2: Comparison of filters obtained from 105natural image patches of size 12 12 by ourmethods (Alg.1 and Alg.2) and other methods. The number of output filters was K1= 144 . (a):Alg.1. ( b): Alg.2. ( c): IICA. ( d): FICA. ( e): SRBM (p= 0:01). (f): SRBM (p= 0:03).100101102time (seconds)1.81.851.91.952coefficient entropy (bits)Alg.1Alg.2IICAFICASRBM (p = 0.01)SRBM (p = 0.03)(a)100101102time (seconds)-400-350-300-250-200-150conditional entropy (bits)Alg.1Alg.2IICA (b)100101102time (seconds)-200-1000100200300conditional entropy (bits)SRBM (p = 0.01)SRBM (p = 0.03)SRBM (p = 0.05)SRBM (p = 0.10) (c)Figure 3: Comparison of quantization effects and convergence rate by coefficient entropy (seeA.122) and conditional entropy (see A.125) corresponding to training results (filters) shown in Fig-ure 2. The coefficient entropy (panel a) and conditional entropy (panel bandc) are shown as afunction of training time on a logarithmic scale. All experiments run on the same machine usingMatlab. Here we sampled once every 10epoches out of a total of 300epoches. We set epoch numbert0= 50 for Alg.1 and Alg.2 and the start time to 1second.as possible for a good representation learned from the same data set. Here we set epoch numbert0= 50 in our algorithms (see Alg.1 and Alg.2), and the start time was set to 1second. Thisexplains the step seen in Figure 3 (b) for Alg.1 and Alg.2 since the parameter was updated whenepoch number t=t0. FICA had a convergence rate close to our algorithms but had a big CFE,which is reflected by the quality of the filter results in Figure 2. The convergence rate and CFE forSRBM were close to IICA, but SRBM had a much bigger CDE than IICA, which implies that theinformation had a greater loss when passing through the system optimized by SRBM than by IICAor our methods.7Published as a conference paper at ICLR 2017From Figure 3(c) we see that the CDE (or MI I(X;R), see Eq. A.124 and A.125) decreases (orincreases) with the increase of the value of the sparseness control constant p. Note that a smallerpmeans sparser outputs. Hence, in this sense, increasing sparsity may result in sacrificing someinformation. On the other hand, a weak sparsity constraint may lead to failure of learning Gabor-like filters (see Figure 2(f)), and increasing sparsity has an advantage in reducing the impact ofnoise in many practical cases. Similar situation also occurs in sparse coding (Olshausen & Field,1997), which provides a class of algorithms for learning overcomplete dictionary representations ofthe input signals. However, its training is time consuming due to its expensive computational cost,although many new training algorithms have emerged (e.g. Aharon et al., 2006; Elad & Aharon,2006; Lee et al., 2006; Mairal et al., 2010). See Appendix A.5 for additional experimental results.4 C ONCLUSIONSIn this paper, we have presented a framework for unsupervised learning of representations via in-formation maximization for neural populations. Information theory is a powerful tool for machinelearning and it also provides a benchmark of optimization principle for neural information pro-cessing in nervous systems. Our framework is based on an asymptotic approximation to MI for alarge-scale neural population. To optimize the infomax objective, we first use hierarchical infomaxto obtain a good approximation to the global optimal solution. 
Analytical solutions of the hierarchi-cal infomax are further improved by a fast convergence algorithm based on gradient descent. Thismethod allows us to optimize highly nonlinear neural networks via hierarchical optimization usinginfomax principle.From the viewpoint of information theory, the unsupervised pre-training for deep learning (Hinton &Salakhutdinov, 2006; Bengio et al., 2007) may be reinterpreted as a process of hierarchical infomax,which might help explain why unsupervised pre-training helps deep learning (Erhan et al., 2010). Inour framework, a pre-whitening step can emerge naturally by the hierarchical infomax, which mightalso explain why a pre-whitening step is useful for training in many learning algorithms (Coateset al., 2011; Bengio, 2012).Our model naturally incorporates a considerable degree of biological realism. It allows the opti-mization of a large-scale neural population with noisy spiking neurons while taking into account ofmultiple biological constraints, such as membrane noise, limited energy, and bounded connectionweights. We employ a technique to attain a low-rank weight matrix for optimization, so as to reducethe influence of noise and discourage overfitting during training. In our model, many parametersare optimized, including the population density of parameters, filter weight vectors, and parametersfor nonlinear tuning functions. Optimizing all these model parameters could not be easily done bymany other methods.Our experimental results suggest that our method for unsupervised learning of representations hasobvious advantages in its training speed and robustness over the main existing methods. Our modelhas a nonlinear feedforward structure and is convenient for fast learning and inference. This simpleand flexible framework for unsupervised learning of presentations should be readily extended totraining deep structure networks. In future work, it would interesting to use our method to train deepstructure networks with either unsupervised or supervised learning.ACKNOWLEDGMENTSWe thank Prof. Honglak Lee for sharing Matlab code for algorithm comparison, Prof. Shan Tan fordiscussions and comments and Kai Liu for helping draw Figure 1. Supported by grant NIH-NIDCDR01 DC013698.REFERENCESAharon, M., Elad, M., & Bruckstein, A. (2006). K-SVD: An algorithm for designing overcompletedictionaries for sparse representation. Signal Processing, IEEE Transactions on , 54(11), 4311–4322.8Published as a conference paper at ICLR 2017Amari, S. (1999). Natural gradient learning for over- and under-complete bases in ica. NeuralComput. , 11(8), 1875–1883.Atick, J. J. (1992). Could information theory provide an ecological theory of sensory processing?Network: Comp. Neural. , 3(2), 213–251.Barlow, H. B. (1961). Possible principles underlying the transformation of sensory messages. Sen-sory Communication , (pp. 217–234).Bell, A. J. & Sejnowski, T. J. (1995). An information-maximization approach to blind separationand blind deconvolution. Neural Comput. , 7(6), 1129–1159.Bell, A. J. & Sejnowski, T. J. (1997). The ”independent components” of natural scenes are edgefilters. Vision Res. , 37(23), 3327–3338.Bengio, Y . (2012). Deep learning of representations for unsupervised and transfer learning. Unsu-pervised and Transfer Learning Challenges in Machine Learning , 7, 19.Bengio, Y ., Courville, A., & Vincent, P. (2013). Representation learning: A review and new per-spectives. 
Pattern Analysis and Machine Intelligence, IEEE Transactions on , 35(8), 1798–1828.Bengio, Y ., Lamblin, P., Popovici, D., Larochelle, H., et al. (2007). Greedy layer-wise training ofdeep networks. Advances in neural information processing systems , 19, 153.Borst, A. & Theunissen, F. E. (1999). Information theory and neural coding. Nature neuroscience ,2(11), 947–957.Carlo, C. N. & Stevens, C. F. (2013). Structural uniformity of neocortex, revisited. Proceedings ofthe National Academy of Sciences , 110(4), 1488–1493.Coates, A., Ng, A. Y ., & Lee, H. (2011). An analysis of single-layer networks in unsupervisedfeature learning. In International conference on artificial intelligence and statistics (pp. 215–223).Cortes, C. & Vapnik, V . (1995). Support-vector networks. Machine learning , 20(3), 273–297.Cover, T. M. & Thomas, J. A. (2006). Elements of Information, 2nd Edition . New York: Wiley-Interscience.Edelman, A., Arias, T. A., & Smith, S. T. (1998). The geometry of algorithms with orthogonalityconstraints. SIAM J. Matrix Anal. Appl. , 20(2), 303–353.Elad, M. & Aharon, M. (2006). Image denoising via sparse and redundant representations overlearned dictionaries. Image Processing, IEEE Transactions on , 15(12), 3736–3745.Erhan, D., Bengio, Y ., Courville, A., Manzagol, P.-A., Vincent, P., & Bengio, S. (2010). Why doesunsupervised pre-training help deep learning? The Journal of Machine Learning Research , 11,625–660.Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., &Bengio, Y . (2014). Generative adversarial nets. In Advances in Neural Information ProcessingSystems (pp. 2672–2680).Hinton, G. (2010). A practical guide to training restricted boltzmann machines. Momentum , 9(1),926.Hinton, G., Osindero, S., & Teh, Y .-W. (2006). A fast learning algorithm for deep belief nets. Neuralcomputation , 18(7), 1527–1554.Hinton, G. E. & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neuralnetworks. Science , 313(5786), 504–507.Huang, W. & Zhang, K. (2017). Information-theoretic bounds and approximations in neural popu-lation coding. Neural Comput, submitted, URL https://arxiv.org/abs/1611.01414 .9Published as a conference paper at ICLR 2017Hubel, D. H. & Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional archi-tecture in the cat’s visual cortex. The Journal of physiology , 160(1), 106–154.Hyv ̈arinen, A. (1999). Fast and robust fixed-point algorithms for independent component analysis.Neural Networks, IEEE Transactions on , 10(3), 626–634.Karklin, Y . & Simoncelli, E. P. (2011). Efficient coding of natural images with a population of noisylinear-nonlinear neurons. In Advances in neural information processing systems , volume 24 (pp.999–1007).Konstantinides, K. & Yao, K. (1988). Statistical analysis of effective singular values in matrix rankdetermination. Acoustics, Speech and Signal Processing, IEEE Transactions on , 36(5), 757–763.Kreutz-Delgado, K., Murray, J. F., Rao, B. D., Engan, K., Lee, T. S., & Sejnowski, T. J. (2003).Dictionary learning algorithms for sparse representation. Neural computation , 15(2), 349–396.LeCun, Y ., Bottou, L., Bengio, Y ., & Haffner, P. (1998). Gradient-based learning applied to docu-ment recognition. Proceedings of the IEEE , 86(11), 2278–2324.Lee, H., Battle, A., Raina, R., & Ng, A. Y . (2006). Efficient sparse coding algorithms. In Advancesin neural information processing systems (pp. 801–808).Lee, H., Ekanadham, C., & Ng, A. Y . (2008). 
Sparse deep belief net model for visual area v2. InAdvances in neural information processing systems (pp. 873–880).Lewicki, M. S. & Olshausen, B. A. (1999). Probabilistic framework for the adaptation and compar-ison of image codes. JOSA A , 16(7), 1587–1601.Lewicki, M. S. & Sejnowski, T. J. (2000). Learning overcomplete representations. Neural compu-tation , 12(2), 337–365.Linsker, R. (1988). Self-Organization in a perceptual network. Computer , 21(3), 105–117.Mairal, J., Bach, F., Ponce, J., & Sapiro, G. (2009). Online dictionary learning for sparse coding.InProceedings of the 26th annual international conference on machine learning (pp. 689–696).:ACM.Mairal, J., Bach, F., Ponce, J., & Sapiro, G. (2010). Online learning for matrix factorization andsparse coding. The Journal of Machine Learning Research , 11, 19–60.Montufar, G. F., Pascanu, R., Cho, K., & Bengio, Y . (2014). On the number of linear regions of deepneural networks. In Advances in Neural Information Processing Systems (pp. 2924–2932).Nair, V . & Hinton, G. E. (2010). Rectified linear units improve restricted boltzmann machines. InProceedings of the 27th International Conference on Machine Learning (ICML-10) (pp. 807–814).Olshausen, B. A. & Field, D. J. (1996). Emergence of simple-cell receptive field properties bylearning a sparse code for natural images. Nature , 381(6583), 607–609.Olshausen, B. A. & Field, D. J. (1997). Sparse coding with an overcomplete basis set: A strategyemployed by v1? Vision Res. , 37(23), 3311–3325.Rao, C. R. (1945). Information and accuracy attainable in the estimation of statistical parameters.Bulletin of the Calcutta Mathematical Society , 37(3), 81–91.Shannon, C. (1948). A mathematical theory of communications. Bell System Technical Journal , 27,379–423 and 623–656.Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout:A simple way to prevent neural networks from overfitting. The Journal of Machine LearningResearch , 15(1), 1929–1958.10Published as a conference paper at ICLR 2017Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y ., & Manzagol, P.-A. (2010). Stacked denoisingautoencoders: Learning useful representations in a deep network with a local denoising criterion.The Journal of Machine Learning Research , 11, 3371–3408.Yarrow, S., Challis, E., & Series, P. (2012). Fisher and shannon information in finite neural popula-tions. Neural computation , 24(7), 1740–1780.APPENDIXA.1 F ORMULAS FOR APPROXIMATION OF MUTUAL INFORMATIONIt follows from I(X;R) =Dlnp(xjr)p(x)Er;xand Eq. (1) that the conditional entropy should read:H(XjR) =hlnp(xjr)ir;x'12lndetG(x)2ex. (A.1)The Fisher information matrix J(x)(see Eq. 3), which is symmetric and positive semidefinite, canbe written also asJ(x) =@lnp(rjx)@x@lnp(rjx)@xTrjx. (A.2)If we suppose p(rjx)is conditional independent, namely, p(rjx) =QNn=1p(rnjx;n), then wehave (see Huang & Zhang, 2017)J(x) =NZp()S(x;)d, (A.3)S(x;) =@lnp(rjx;)@x@lnp(rjx;)@xTrjx, (A.4)wherep()is the population density function of parameter ,p() =1NNXn=1(n), (A.5)and()denotes the Dirac delta function. It can be proved that the approximation function of MIIG[p()](Eq. 1) is concave about p()(Huang & Zhang, 2017). In Eq. (A.3), we can approximatethe continuous integral by a discrete summation for numerical computation,J(x)NK1Xk=1kS(x;k), (A.6)wherePK1k=1k= 1,k>0,k= 1;;K1,1K1N.For Poisson neuron model, by Eq. 
(A.4) we have (see Huang & Zhang, 2017)p(rjx;) =f(x;)rr!exp (f(x;)), (A.7)S(x;) =1f(x;)@f(x;)@x@f(x;)@xT=@g(x;)@x@g(x;)@xT, (A.8)wheref(x;)0is the activation function (mean response) of neuron andg(x;) = 2pf(x;). (A.9)11Published as a conference paper at ICLR 2017Similarly, for Gaussian noise model, we havep(rjx;) =1p2exp (rf(x;))222!, (A.10)S(x;) =12@f(x;)@x@f(x;)@xT, (A.11)where>0denotes the standard deviation of noise.Sometimes we do not know the specific form of p(x)and only know Msamples, x1,,xM,which are independent and identically distributed (i.i.d.) samples drawn from the distribution p(x).Then we can use the empirical average to approximate the integral in Eq. (1):IG12MXm=1ln (det ( G(xm))) +H(X). (A.12)A.2 P ROOF OF PROPOSITION 1Proof. It follows from the data-processing inequality (Cover & Thomas, 2006) thatI(X;R)I(Y;R)I(Y;R)I(Y;R), (A.13)I(X;R)I(X;Y)I(X;Y)I(X;Y). (A.14)Sincep(ykjx) =p(yk1;;ykNkjx) =N(wTkx; N1k2),k= 1;;K1, (A.15)we havep( yjx) =p( yjx), (A.16)p( y) =p( y), (A.17)I(X;Y) =I(X;Y). (A.18)Hence, by (A.14) and (A.18), expression (12) holds.On the other hand, when Nkis large, from Eq. (10) we know that the distribution of Zk, namely,N0,N1k2, approaches a Dirac delta function (zk). Then by (7) and (9) we have p(rj y)'p(rjy) =p(rjx)andI(X;R) =I(Y;R)lnp(rjy)p(rjx)r;x=I(Y;R), (A.19)I(Y;R) =I(Y;R)lnp(rj y)p(rjy)r;y; y'I(Y;R), (A.20)I(Y;R) =I(Y;R)lnp(rj y)p(rjy)r;y; y'I(Y;R), (A.21)I(X;Y) =I(X;Y)lnp(xj y)p(xjy)x;y; y'I(X;Y). (A.22)It follows from (A.13) and (A.19) that (11) holds. Combining (11), (12) and (A.20)–(A.22), weimmediately get (13) and (14). This completes the proof of Proposition 1 . A.3 H IERARCHICAL OPTIMIZATION FOR MAXIMIZING I(X;R)In the following, we will discuss the optimization procedure for maximizing I(X;R)in two stages:maximizing I(X;Y)and maximizing I(Y;R).12Published as a conference paper at ICLR 2017A.3.1 T HE1STSTAGEIn the first stage, our goal is to maximize the MI I(X;Y)and get the optimal parameters wk(k= 1;;K1). Assume that the stimulus xhas zero mean (if not, let x xhxix) andcovariance matrix x. It follows from eigendecomposition thatx=xxTx1M1XXT=UUT, (A.23)where X= [x1,,xM],U= [u1;;uK]2RKKis an unitary orthogonal matrix and =diag21;;2Kis a positive diagonal matrix with 1K>0. Define~ x=1=2UTx, (A.24)~ wk=1=2UTwk, (A.25)yk=~ wTk~ x, (A.26)wherek= 1;;K1. The covariance matrix of ~ xis given by~ x=D~ x~ xTE~ xIK, (A.27)andIKis aKKidentity matrix. From (1) and (A.11) we have I(X;Y) =I(~X;Y)andI(~X;Y)'I0G=12ln det ~G2e!!+H(~X), (A.28)~GN2K1Xk=1k~ wk~ wTk+IK. (A.29)The following approximations are useful (see Huang & Zhang, 2017):p(~ x)N (0;IK), (A.30)P(~ x) =@2lnp(~ x)@~ x@~ xTIK. (A.31)By the central limit theorem, the distribution of random variable ~Xis closer to a normal distribu-tion than the distribution of the original random variable X. On the other hand, the PCA modelsassume multivariate gaussian data whereas the ICA models assume multivariate non-gaussian data.Hence by a PCA-like whitening transformation (A.24) we can use the approximation (A.31) withthe Laplace’s method of asymptotic expansion, which only requires that the peak be close to itsmean while random variable ~Xneeds not be exactly Gaussian.Without any constraints on the Gaussian channel of neural populations, especially the peak firingrates, the capacity of this channel may grow indefinitely: I(~X;Y)! 1 . The most commonconstraint on the neural populations is an energy or power constraint which can also be regarded asa signal-to-noise ratio (SNR) constraint. 
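Before the SNR constraint is made precise below, the PCA-like whitening step of (A.23)-(A.27) can be written out concretely. This is a minimal numpy sketch under our own naming, not the authors' Matlab implementation: it eigendecomposes the sample covariance and maps x to x_tilde = Lambda^{-1/2} U^T x so that the whitened covariance is approximately the identity.

```python
import numpy as np

def whiten(X, eps=1e-8):
    """PCA-like whitening of K x M data, cf. Eqs. (A.23)-(A.24): x_tilde = Lambda^{-1/2} U^T x."""
    Xc = X - X.mean(axis=1, keepdims=True)       # zero-mean stimuli
    M = Xc.shape[1]
    Sigma = Xc @ Xc.T / (M - 1)                  # sample covariance, Eq. (A.23)
    lam, U = np.linalg.eigh(Sigma)               # eigenvalues returned in ascending order
    order = np.argsort(lam)[::-1]                # sort descending as in the text
    lam, U = lam[order], U[:, order]
    W_white = np.diag(1.0 / np.sqrt(lam + eps)) @ U.T
    return W_white @ Xc, U, lam                  # whitened data has covariance ~ I_K

# quick check on synthetic patches: the whitened covariance is close to the identity
X = np.random.randn(144, 10000) * np.linspace(0.2, 2.0, 144)[:, None]
X_tilde, U, lam = whiten(X)
print(np.allclose(np.cov(X_tilde), np.eye(144), atol=1e-2))
```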
The SNR for the output ynof the n-th neuron is given bySNRn=12DwTnx2Ex12~ wTn~ wn,n= 1;;N. (A.32)We require that1NNXn=1SNRn12K1Xk=1k~ wTk~ wk, (A.33)whereis a positive constant. Then by Eq. (A.28), (A.29) and (A.33), we have the followingoptimization problem:minimizeQ0G[^W] =12lndetN2^W^WT+IK, (A.34)subject toh= Tr^W^WTE0, (A.35)13Published as a conference paper at ICLR 2017where Tr ()denotes matrix trace and^W=~WA1=2=1=2UTWA1=2= [^ w1;;^ wK1], (A.36)A= diag (1;;K1), (A.37)W= [w1;;wK1], (A.38)~W= [~ w1;;~ wK1], (A.39)E=2. (A.40)HereEis a constant that does not affect the final optimal solution so we set E= 1. Then we obtainan optimal solution as follows:W=aU01=20VT0, (A.41)A=K11IK1, (A.42)a=qEK1K10=qK1K10, (A.43)0= diag21;;2K0, (A.44)U0=U(:;1:K0)2RKK0, (A.45)V0=V(:;1:K0)2RK1K0, (A.46)where V= [v1;;vK1]is anK1K1unitary orthogonal matrix, parameter K0represents thesize of the reduced dimension ( 1K0K), and its value will be determined below. Now theoptimal parameters wn(n= 1;;N) are clustered into K1classes (see Eq. A.6) and obey anuniform discrete distribution (see also Eq. A.60 in Appendix A.3.2).WhenK=K0=K1, the optimal solution of Win Eq. (A.41) is a whitening-like filter. WhenV=IK, the optimal matrix Wis the principal component analysis (PCA) whitening filters. In thesymmetrical case with V=U, the optimal matrix Wbecomes a zero component analysis (ZCA)whitening filter. If K <K 1, this case leads to an overcomplete solution, whereas when K0K1<K, the undercomplete solution arises. Since K0K1andK0K,Q0Gachieves its minimumwhenK0=K. However, in practice other factors may prevent it from reaching this minimum. Forexample, consider the average of squared weights,&=K1Xk=1kkwkk2= TrWAWT=EK0K0Xk=12k, (A.47)wherekkdenotes the Frobenius norm. The value of &is extremely large when any kbecomesvanishingly small. For real neurons these weights of connection are not allowed to be too large.Hence we impose a limitation on the weights: &E1, whereE1is a positive constant. This yieldsanother constraint on the objective function,~h=EK0K0Xk=12kE10. (A.48)From (A.35) and (A.48) we get the optimal K0= arg max ~K0E~K10P~K0k=12k. By this con-straint, small values of 2kwill often result in K0<K and a low-rank matrix W(Eq. A.41).On the other hand, the low-rank matrix Wcan filter out the noise of stimulus x. Consider thetransformation Y=WTXwithX= [x1,,xM]andY= [y1,,yM]forMsamples. Itfollows from the singular value decomposition (SVD) of XthatX=US~VT, (A.49)where Uis given in (A.23), ~Vis aMMunitary orthogonal matrix, Sis aKMdiagonal matrixwith non-negative real numbers on the diagonal, Sk;k=pM1k(k= 1;;K,KM), andSST= (M1). LetX=pM1U01=20~VT0X, (A.50)14Published as a conference paper at ICLR 2017where ~V0=~V(:;1:K0)2RMK0,0andU0are given in (A.44) and (A.45), respectively. ThenY=WTX=aV01=20UT0US~VT=WTX=apM1V0~VT0, (A.51)where Xcan be regarded as a denoised version of X. The determination of the effective rankK0Kof the matrix Xby using singular values is based on various criteria (Konstantinides &Yao, 1988). Here we choose K0as follows:K0= arg minK000@vuutPK00k=12kPKk=12k1A, (A.52)whereis a positive constant ( 0<1).Another advantage of a low-rank matrix Wis that it can significantly reduce overfitting for learningneural population parameters. In practice, the constraint (A.47) is equivalent to a weight-decay reg-ularization term used in many other optimization problems (Cortes & Vapnik, 1995; Hinton, 2010),which can reduce overfitting to the training data. 
To prevent the neural networks from overfitting,Srivastava et al. (2014) presented a technique to randomly drop units from the neural network dur-ing training, which may in fact be regarded as an attempt to reduce the rank of the weight matrixbecause the dropout can result in a sparser weights (lower rank matrix). This means that the updateis only concerned with keeping the more important components, which is similar to first performinga denoising process by the SVD low rank approximation.In this stage, we have obtained the optimal parameter W(see A.41). The optimal value of matrixV0can also be determined, as shown in Appendix A.3.3.A.3.2 T HE2NDSTAGEFor this stage, our goal is to maximize the MI I(Y;R)and get the optimal parameters ~k,k= 1;;K1. Here the input is y= (y1;;yK1)Tand the output r= (r1;;rN)Tisalso clustered into K1classes. The responses of Nkneurons in the k-th subpopulation obey a Pois-son distribution with mean ~f(eTky;~k), where ekis a unit vector with 1in thek-th element andyk=eTky. By (A.24) and (A.26), we havehykiyk= 0, (A.53)2yk=y2kyk=k~ wkk2. (A.54)Then for large N, by (1)–(4) and (A.30) we can use the following approximation,I(Y;R)'IF=12*ln det J(y)2e!!+y+H(Y), (A.55)whereJ(y) = diagN1jg01(y1)j2;;NK1g0K1(yK1)2, (A.56)g0k(yk) =@gk(yk)@yk,k= 1;;K1, (A.57)gk(yk) = 2q~f(yk;~k),k= 1;;K1. (A.58)It is easy to get thatIF=12K1Xk=1*ln Nkjg0k(yk)j22e!+y+H(Y)12K1Xk=1*ln jg0k(yk)j22e!+yK12lnK1N+H(Y), (A.59)15Published as a conference paper at ICLR 2017where the equality holds if and only ifk=1K1;k= 1;;K1, (A.60)which is consistent with Eq. (A.42).On the other hand, it follows from the Jensen’s inequality thatIF=*ln0@p(y)1det J(y)2e!1=21A+ylnZdet J(y)2e!1=2dy, (A.61)where the equality holds if and only if p(y)1detJ(y)1=2is a constant, which implies thatp(y) =detJ(y)1=2RdetJ(y)1=2dy=QK1k=1jg0k(yk)jRQK1k=1jg0k(yk)jdy. (A.62)From (A.61) and (A.62), maximizing ~IFyieldsp(yk) =jg0k(yk)jRjg0k(yk)jdyk,k= 1;;K1. (A.63)We assume that (A.63) holds, at least approximately. Hence we can let the peak of g0k(yk)be atyk=hykiyk= 0andy2kyk=2yk=k~ wkk2. Then combining (A.57), (A.61) and (A.63) we findthe optimal parameters ~kfor the nonlinear functions ~f(yk;~k),k= 1;;K1.A.3.3 T HEFINAL OBJECTIVE FUNCTIONIn the preceding sections we have obtained the initial optimal solutions by maximizing IX;YandI(Y;R). In this section, we will discuss how to find the final optimal V0and other parametersby maximizing I(X;R)from the initial optimal solutions.First, we havey=~WT~ x=a^ y, (A.64)whereais given in (A.43) and^ y= (^y1;;^yK1)T=CT^ x=CT x, (A.65)^ x=1=20UT0x, (A.66)C=VT02RK0K1, (A.67) x=U01=20UT0x=U0^ x, (A.68)C=U0C= [ c1;; cK1]. (A.69)It follows thatI(X;R) =I~X;R'~IG=12lndetG(^ x)2e^ x+H(~X), (A.70)G(^ x) =N^W^^WT+IK, (A.71)^W=1=2UTWA1=2=aqK11IKK0C=qK10IKK0C, (A.72)16Published as a conference paper at ICLR 2017where IKK0is aKK0diagonal matrix with value 1on the diagonal and^=2, (A.73)= diag ((^y1);;(^yK1)), (A.74)(^yk) =a1@gk(^yk)@^yk, (A.75)gk(^yk) = 2q~f(^yk;~k), (A.76)^yk=a1yk=cTk^ x,k= 1;;K1. (A.77)Then we havedet (G(^ x)) = detNK10C^CT+IK0. (A.78)For largeNandK0=N!0, we havedet (G(^ x))det (J(^ x)) = detNK10C^CT, (A.79)~IG~IF=QK2ln (2e)K02ln (") +H(~X), (A.80)Q=12DlndetC^CTE^ x, (A.81)"=K0N. (A.82)Hence we can state the optimization problem as:minimizeQ[C] =12DlndetC^CTE^ x, (A.83)subject to CCT=IK0. (A.84)The gradient from (A.83) is given by:dQ[C]dC=C^CT1C^+^ x!T^ x, (A.85)where C= [c1;;cK1],!= (!1;;!K1)T, and!k=(^yk)0(^yk)cTkC^CT1ck,k= 1;;K1. 
(A.86)In the following we will discuss how to get the optimal solution of Cfor two specific cases.A.4 A LGORITHMS FOR OPTIMIZATION OBJECTIVE FUNCTIONA.4.1 A LGORITHM 1:K0=K1NowCCT=CTC=IK1, then by Eq. (A.83) we haveQ1[C] =*K1Xk=1ln ((^yk))+^ x, (A.87)dQ1[C]dC=^ x!T^ x, (A.88)!k=0(^yk)(^yk),k= 1;;K1. (A.89)Under the orthogonality constraints (Eq. A.84), we can use the following update rule for learning C(Edelman et al., 1998; Amari, 1999):Ct+1=Ct+tdCtdt, (A.90)dCtdt=dQ1[Ct]dCt+CtdQ1[Ct]dCtTCt, (A.91)17Published as a conference paper at ICLR 2017where the learning rate parameter tchanges with the iteration count t,t= 1;;tmax. Here wecan use the empirical average to approximate the integral in (A.88) (see Eq. A.12). We can alsoapply stochastic gradient descent (SGD) method for online updating of Ct+1in (A.90).The orthogonality constraint (Eq. A.84) can accelerate the convergence rate. In practice, the orthog-onal constraint (A.84) for objective function (A.83) is not strictly necessary in this case. We cancompletely discard this constraint condition and considerminimizeQ2[C] =*K1Xk=1ln ((^yk))+^ x12lndetCTC, (A.92)where we assume rank ( C) =K1=K0. If we letdCdt=CCTdQ2[C]dC, (A.93)thenTrdQ2[C]dCdCTdt=TrCTdQ2[C]dCdQ2[C]dCTC0. (A.94)Therefore we can use an update rule similar to Eq. A.90 for learning C. In fact, the method can alsobe extended to the case K0>K 1by using the same objective function (A.92).The learning rate parameter t(see A.90) is updated adaptively, as follows. First, calculatet=vtt,t= 1;;tmax, (A.95)t=1K1K1Xk=1krCt(:;k)kkCt(:;k)k, (A.96)andCt+1by (A.90) and (A.91), then calculate the value Q1Ct+1. IfQ1Ct+1<Q 1[Ct], thenletvt+1 vt, continue for the next iteration; otherwise, let vt vt,t vt=tand recalculateCt+1andQ1Ct+1. Here 0< v1<1and0< < 1are set as constants. After getting Ct+1for each update, we employ a Gram–Schmidt orthonormalization process for matrix Ct+1, wherethe orthonormalization process can accelerate the convergence. However, we can discard the Gram–Schmidt orthonormalization process after iterative t0(>1) epochs for more accurate optimizationsolution C. In this case, the objective function is given by the Eq. (A.92). We can also furtheroptimize parameter bby gradient descent.WhenK0=K1, the objective function Q2[C]in Eq. (A.92) without constraint is the same as theobjective function of infomax ICA (IICA) (Bell & Sejnowski, 1995; 1997), and as a consequencewe should get the same optimal solution C. Hence, in this sense, the IICA may be regarded as aspecial case of our method. Our method has a wider range of applications and can handle moregeneric situations. Our model is derived by neural populations with a huge number of neurons and itis not restricted to additive noise model. Moreover, our method has a faster convergence rate duringtraining than IICA (see Section 3).A.4.2 A LGORITHM 2:K0K1In this case, it is computationally expensive to update Cby using the gradient of Q(see Eq. A.85),since it needs to compute the inverse matrix for every ^ x. Here we provide an alternative method forlearning the optimal C. First, we consider the following inequalities.18Published as a conference paper at ICLR 2017Proposition 2. The following inequations hold,12DlndetC^CTE^ x12lndetCD^E^ xCT, (A.97)lndetCCT^ xlndetChi^ xCT(A.98)12lndetChi2^ xCT(A.99)12lndetCD^E^ xCT, (A.100)lndetCCT12lndetC^CT, (A.101)where C2RK0K1,K0K1, andCCT=IK0.Proof. Functions lndetCD^E^ xCTandlndetChi^ xCTare concave functions aboutp(^ x)(see the proof of Proposition 5.2. 
in Huang & Zhang, 2017), which fact establishes inequalities(A.97) and (A.98).Next we will prove the inequality (A.101). By SVD, we haveC=UDVT, (A.102)where Uis aK0K0unitary orthogonal matrix, V= [ v1; v2;; vK1]is anK1K1unitaryorthogonal matrix, and Dis anK0K1rectangular diagonal matrix with K0positive real numberson the diagonal. By the matrix Hadamard’s inequality and Cauchy–Schwarz inequality we havedetCCTCCTdetC^CT1= detDVTCTCVDTDDT1= detVT1CTCV1= detCV12K0Yk=1CV12k;kK0Yk=1CCT2k;kVT1V12k;k= 1, (A.103)where V1= [ v1; v2;; vK0]2RK1K0. The last equality holds because of CCT=IK0andVT1V1=IK0. This establishes inequality (A.101) and the equality holds if and only if K0=K1orCV1=IK0.Similarly, we get inequality (A.99):lndetChi^ xCT12lndetChi2^ xCT. (A.104)By Jensen’s inequality, we haveh(^yk)i2^ xD(^yk)2E^ x,8k= 1;;K1. (A.105)Then it follows from (A.105) that inequality (A.100) holds:12lndetChi2^ xCT12lndetCD^E^ xCT. (A.106)19Published as a conference paper at ICLR 2017This completes the proof of Proposition 2 . ByProposition 2, ifK0=K1then we get12Dlndet^E^ x12lndetD^E^ x, (A.107)hln (det ( ))i^ xln (det (hi^ x)) (A.108)=12lndethi2^ x(A.109)12lndetD^E^ x, (A.110)ln (det ( )) =12lndet^. (A.111)On the other hand, it follows from (A.81) and Proposition 2 thatlndetCCT^ xQ12lndetCD^E^ xCT, (A.112)lndetCCT^ x^Q12lndetCD^E^ xCT. (A.113)Hence we can see that ^Qis close toQ(see A.81). Moreover, it follows from the Cauchy–Schwarzinequality thatD()k;kE^ x=h(^yk)i^ykZ(^yk)2d^ykZp(^yk)2d^yk1=2, (A.114)wherek= 1;;K1, the equality holds if and only if the following holds:p(^yk) =(^yk)R(^yk)d^yk,k= 1;;K1, (A.115)which is the similar to Eq. (A.63).SinceI(X;R) =I(Y;R)(seeProposition 1), by maximizing I(X;R)we hope the equality ininequality (A.61) and equality (A.63) hold, at least approximatively. On the other hand, letCopt= arg minCQ[C] = arg maxCDlndet(C^CT)E^ x, (A.116)^Copt= arg minC^Q[C] = arg maxClndetChi2^ xCT, (A.117)Coptand^Coptmake (A.63) and (A.115) to hold true, which implies that they are the same optimalsolution: Copt=^Copt.Therefore, we can use the following objective function ^Q[C]as a substitute for Q[C]and write theoptimization problem as:minimize ^Q[C] =12lndetChi2^ xCT, (A.118)subject to CCT=IK0. (A.119)The update rule (A.90) may also apply here and a modified algorithm similar to Algorithm 1 maybe used for parameter learning.A.5 S UPPLEMENTARY EXPERIMENTSA.5.1 Q UANTITATIVE METHODS FOR COMPARISONTo quantify the efficiency of learning representations by the above algorithms, we calculate the co-efficient entropy (CFE) for estimating coding cost as follows (Lewicki & Olshausen, 1999; Lewicki& Sejnowski, 2000):yk= wTk x,k= 1;;K1, (A.120)=K1PK1k=1k wkk, (A.121)20Published as a conference paper at ICLR 2017where xis defined by Eq. (A.68), and wkis the corresponding optimal filter. To estimate theprobability density of coefficients qk(yk)(k= 1;;K1) from theMtraining samples, we applythe kernel density estimation for qk(yk)and use a normal kernel with an adaptive optimal windowwidth. Then we define the CFE hash=1K1K1Xk=1Hk(Yk), (A.122)Hk(Yk) =Pnqk(n) log2qk(n), (A.123)whereqk(yk)is quantized as discrete qk(n)andis the step size.Methods such as IICA and SRBM as well as our methods have feedforward structures in whichinformation is transferred directly through a nonlinear function, e.g., the sigmoid function. 
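As a concrete reading of (A.120)-(A.123), the coefficient entropy can be sketched as follows. This is our simplified stand-in: it uses a plain histogram with step size delta in place of the adaptive-bandwidth kernel density estimate described in the text, and the rescaling by sigma in (A.121) follows our interpretation of the formula.

```python
import numpy as np

def coefficient_entropy(W_bar, X_bar, delta=0.25):
    """Approximate CFE of (A.120)-(A.123). W_bar: K x K1 optimal filters, X_bar: K x M samples."""
    # Eq. (A.121): rescale so that the filters have unit average norm (our reading)
    sigma = W_bar.shape[1] / np.linalg.norm(W_bar, axis=0).sum()
    Y = (sigma * W_bar).T @ X_bar                  # Eq. (A.120): coefficients y_k = w_bar_k^T x_bar
    entropies = []
    for y in Y:                                    # one discrete entropy per output coefficient
        edges = np.arange(y.min(), y.max() + 2 * delta, delta)   # quantization with step size delta
        counts, _ = np.histogram(y, bins=edges)
        q = counts[counts > 0] / counts.sum()      # discrete probabilities q_k(n)
        entropies.append(-(q * np.log2(q)).sum())  # Eq. (A.123)
    return float(np.mean(entropies))               # Eq. (A.122): average over the K1 outputs

# e.g. coefficient_entropy(W_bar, X_bar) with W_bar from Eq. (A.69) and X_bar from Eq. (A.68)
```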
Wecan use the amount of transmitted information to measure the results learned by these methods.Consider a neural population with Nneurons, which is a stochastic system with nonlinear transferfunctions. We chose a sigmoidal transfer function and Gaussian noise with standard deviation set to1as the system noise. In this case, from (1), (A.8) and (A.11), we see that the approximate MI IGisequivalent to the case of the Poisson neuron model. It follows from (A.70)–(A.82) thatI(X;R) =I~X;R=H(~X)H~XjR'~IG=H(~X)h1, (A.124)H~XjR'h1=12lndet12eNK10C^CT+IK0^ x, (A.125)where we set N= 106. A good representation should make the MI I(X;R)as big as possible.Equivalently, for the same inputs, a good representation should make the conditional entropy (CDE)H~XjR(orh1) as small as possible.(a) (b) (c)(d) (e) (f)Figure 4: Comparison of basis vectors obtained by our method and other methods. Panel ( a)–(e)correspond to panel ( a)–(e) in Figure 2, where the basis vectors are given by (A.130). The basisvectors in panel ( f) are learned by MBDL and given by (A.127).21Published as a conference paper at ICLR 2017A.5.2 C OMPARISON OF BASIS VECTORSWe compared our algorithm with an up-to-date sparse coding algorithm, the mini-batch dictionarylearning (MBDL) as given in (Mairal et al., 2009; 2010) and integrated in Python library, i.e. scikit-learn. The input data was the same as the above, i.e. 105nature image patches preprocessed by theZCA whitening filters.We denotes the optimal dictionary learned by MBDL as B2RKK1for which each columnrepresents a basis vector. Now we havexU1=2UTBy=~By, (A.126)~B=U1=2UTB, (A.127)where y= (y1;;yK1)Tis the coefficient vector.Similarly, we can obtain a dictionary from the filter matrix C. Suppose rank ( C) =K0K1, thenit follows from (A.64) that^ x=aCCT1Cy. (A.128)By (A.66) and (A.128), we getxBy=aBCT1=20UT0x, (A.129)B=a1U01=20CCT1C= [b1;;bK1], (A.130)where y=WTx=aCT1=20UT0x, the vectors b1;;bK1can be regarded as the basis vectorsand the strict equality holds when K0=K1=K. Recall that X= [x1,,xM] =US~VT(see Eq. A.49) and Y= [y1,,yM] =WTX=apM1CT~VT0, then we get X=BY =pM1U01=20~VT0X. Hence, Eq. (A.129) holds.The basis vectors shown in Figure 4(a)–4(e) correspond to filters in Figure 2(a)–2(e). And Fig-ure 4(f) illustrates the optimal dictionary ~Blearned by MBDL, where we set the regularization pa-rameter as= 1:2=pK, the batch size as 50and the total number of iterations to perform as 20000 ,which took about 3hours for training. From Figure 4 we see that these basis vectors obtained by theabove algorithms have local Gabor-like shapes except for those by SRBM. If rank( B) =K=K1,then the matrix BTcan be regarded as a filter matrix like matrix C(see Eq. A.69). However,from the column vector of matrix BTwe cannot find any local Gabor-like filter that resembles thefilters shown in Figure 2. Our algorithm has less computational cost and a much faster convergencerate than the sparse coding algorithm. Moreover, the sparse coding method involves a dynamicgenerative model that requires relaxation and is therefore unsuitable for fast inference, whereas thefeedforward framework of our model is easy for inference because it only requires evaluating thenonlinear tuning functions.A.5.3 L EARNING OVERCOMPLETE BASESWe have trained our model on the Olshausen’s nature image patches with a highly overcompletesetup by optimizing the objective (A.118) by Alg.2 and got Gabor-like filters. 
The results of 400typical filters chosen from 1024 output filters are displayed in Figure 5(a) and corresponding base(see Eq. A.130) are shown in Figure 5(b). Here the parameters are K1= 1024 ,tmax= 100 ,v1= 0:4,= 0:8, and= 0:98(see A.52), from which we got rank ( B) =K0= 82 . Comparedto the ICA-like results in Figure 2(a)–2(c), the average size of Gabor-like filters in Figure 5(a) isbigger, indicating that the small noise-like local structures in the images have been filtered out.We have also trained our model on 60,000 images of handwritten digits from MNIST dataset (LeCunet al., 1998) and the resultant 400typical optimal filters and bases are shown in Figure 5(c) andFigure 5(d), respectively. All parameters were the same as Figure 5(a) and Figure 5(b): K1= 1024 ,tmax= 100 ,v1= 0:4,= 0:8and= 0:98, from which we got rank ( B) =K0= 183 . Fromthese figures we can see that the salient features of the input images are reflected in these filters andbases. We could also get the similar overcomplete filters and bases by SRBM and MBDL. However,the results depended sensitively on the choice of parameters and the training took a long time.22Published as a conference paper at ICLR 2017(a) (b)(c) (d)Figure 5: Filters and bases obtained from Olshausen’s image dataset and MNIST dataset by Al-gorithm 2. ( a) and ( b):400typical filters and the corresponding bases obtained from Olshausen’simage dataset, where K0= 82 andK1= 1024 . (c) and ( d):400typical filters and the correspondingbases obtained from the MNIST dataset, where K0= 183 andK1= 1024 .Figure 6 shows that CFE as a function of training time for Alg.2, where Figure 6(a) corresponds toFigure 5(a)-5(b) for learning nature image patches and Figure 6(b) corresponds to Figure 5(c)-5(d)for learning MNIST dataset. We set parameters tmax= 100 and= 0:8for all experiments andvaried parameter v1for each experiment, with v1= 0:2,0:4,0:6or0:8. These results indicate a fastconvergence rate for training on different datasets. Generally, the convergence is insensitive to thechange of parameter v1.We have also performed additional tests on other image datasets and got similar results, confirmingthe speed and robustness of our learning method. Compared with other methods, e.g., IICA, FICA,MBDL, SRBM or sparse autoencoders etc., our method appeared to be more efficient and robust forunsupervised learning of representations. We also found that complete and overovercomplete filtersand bases learned by our methods had local Gabor-like shapes while the results by SRBM or MBDLdid not have this property.23Published as a conference paper at ICLR 2017100101102time (seconds)1.751.81.851.91.95coefficient entropy (bits)v1 = 0.2v1 = 0.4v1 = 0.6v1 = 0.8(a)100101102time (seconds)1.61.71.81.922.1coefficient entropy (bits)v1 = 0.2v1 = 0.4v1 = 0.6v1 = 0.8 (b)Figure 6: CFE as a function of training time for Alg.2, with v1= 0:2,0:4,0:6or0:8. In allexperiments parameters were set to tmax= 100 ,t0= 50 and= 0:8. (a): corresponding toFigure 5(a) or Figure 5(b). ( b): corresponding to Figure 5(c) or Figure 5(d).A.5.4 I MAGE DENOISINGSimilar to the sparse coding method applied to image denoising (Elad & Aharon, 2006), our method(see Eq. A.130) can also be applied to image denoising, as shown by an example in Figure 7. 
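For reference, the MBDL baseline used in the basis comparison of Sec. A.5.2 and in the dictionary-learning results of Figure 7 is available in scikit-learn. The sketch below uses the regularizer α = 1.2/√K and batch size 50 quoted in the text but leaves the iteration budget at the library default (the text reports 20,000 iterations); it is an approximate reconstruction of the setup, not the exact script used.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def mbdl_baseline(X_white, K1):
    """X_white: (M, K) whitened image patches, one patch per row."""
    K = X_white.shape[1]
    mbdl = MiniBatchDictionaryLearning(n_components=K1,
                                       alpha=1.2 / np.sqrt(K),   # regularizer from the text
                                       batch_size=50,            # batch size from the text
                                       transform_algorithm='omp')
    mbdl.fit(X_white)
    B = mbdl.components_.T            # (K, K1): columns are basis vectors
    codes = mbdl.transform(X_white)   # (M, K1): OMP sparse coefficients
    return B, codes
```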
The filters or bases were learned using 7×7 image patches sampled from the left half of the image, and subsequently used to reconstruct the right half of the image, which had been distorted by Gaussian noise. A common practice for evaluating image-denoising results is to look at the difference between the reconstruction and the original image: if the reconstruction is perfect, the difference should look like Gaussian noise. In Figures 7(c) and 7(d) a dictionary (100 bases) was learned by MBDL, and orthogonal matching pursuit was used to estimate the sparse solution.¹ For our method (shown in Figure 7(b)), we first obtain the optimal filter parameter W, a low-rank matrix (K0 < K); then from the distorted image patches x̄_m (m = 1, ..., M) we compute the filter outputs y_m = W^T x̄_m and the reconstructions x*_m = B y_m (parameters: η = 0.975 and K0 = K1 = 14). As can be seen from Figure 7, our method worked better than dictionary learning, even though it used only 14 bases compared with the 100 bases used by dictionary learning. Our method is also more efficient. We can obtain better optimal bases B with a generative model using our infomax approach (details not shown).
¹ Python source code is available at http://scikit-learn.org/stable/_downloads/plot_image_denoising.py
Figure 7: Image denoising. (a): the right half of the original image is distorted by Gaussian noise; the norm of the difference between the distorted image and the original image is 23.48. (b): image denoising by our method (Algorithm 1), with 14 bases used. (c) and (d): image denoising using dictionary learning, with 100 bases used.
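The denoising step described above reduces to two matrix products once the filters W and the bases B (Eq. A.130) are in hand. A minimal sketch; patch extraction, overlap averaging, and reassembly of the full image are omitted.

```python
import numpy as np

def denoise_patches(X_noisy, W, B):
    """X_noisy: (K, M) distorted patches, one per column;
    W: (K, K1) low-rank filter matrix; B: (K, K1) basis matrix."""
    Y = W.T @ X_noisy          # filter outputs  y_m = W^T x_m
    return B @ Y               # reconstructions x_m* = B y_m
```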
BkMfqzUNx
SkYbF1slg
ICLR.cc/2017/conference/-/paper549/official/review
{"title": "", "rating": "5: Marginally below acceptance threshold", "review": "This paper proposes a hierarchical infomax method. My comments are as follows: \n\n(1) First of all, this paper is 21 pages without appendix, and too long as a conference proceeding. Therefore, it is not easy for readers to follow the paper. The authors should make this paper as compact as possible while maintaining the important message. \n\n(2) One of the main contribution in this paper is to find a good initialization point by maximizing I(X;R). However, it is unclear why maximizing I(X;\\breve{Y}) is good for maximizing I(X;R) because Proposition 2.1 shows that I(X;\\breve{Y}) is an \u201cupper\u201d bound of I(X;R) (When it is difficult to directly maximize a function, people often maximize some tractable \u201clower\u201d bound of it).\n\nMinor comments:\n(1) If (2.11) is approximation of (2.8), \u201c\\approx\u201d should be used. \n\n(2) Why K_1 instead of N in Eq.(2.11)?\n\n(3) In Eq.(2.12), H(X) should disappear?\n\n(4) Can you divide Section 3 into subsections?\n\n", "confidence": "2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper"}
review
2017
ICLR.cc/2017/conference
An Information-Theoretic Framework for Fast and Robust Unsupervised Learning via Neural Population Infomax
["Wentao Huang", "Kechen Zhang"]
A framework is presented for unsupervised learning of representations based on infomax principle for large-scale neural populations. We use an asymptotic approximation to the Shannon's mutual information for a large neural population to demonstrate that a good initial approximation to the global information-theoretic optimum can be obtained by a hierarchical infomax method. Starting from the initial solution, an efficient algorithm based on gradient descent of the final objective function is proposed to learn representations from the input datasets, and the method works for complete, overcomplete, and undercomplete bases. As confirmed by numerical experiments, our method is robust and highly efficient for extracting salient features from input datasets. Compared with the main existing methods, our algorithm has a distinct advantage in both the training speed and the robustness of unsupervised representation learning. Furthermore, the proposed method is easily extended to the supervised or unsupervised model for training deep structure networks.
["Unsupervised Learning", "Theory", "Deep learning"]
https://openreview.net/forum?id=SkYbF1slg
https://openreview.net/pdf?id=SkYbF1slg
https://openreview.net/forum?id=SkYbF1slg&noteId=BkMfqzUNx
Published as a conference paper at ICLR 2017ANINFORMATION -THEORETIC FRAMEWORK FORFAST AND ROBUST UNSUPERVISED LEARNING VIANEURAL POPULATION INFOMAXWentao Huang & Kechen ZhangDepartment of Biomedical EngineeringJohns Hopkins University School of MedicineBaltimore, MD 21205, USAfwhuang21,kzhang4 g@jhmi.eduABSTRACTA framework is presented for unsupervised learning of representations based oninfomax principle for large-scale neural populations. We use an asymptotic ap-proximation to the Shannon’s mutual information for a large neural population todemonstrate that a good initial approximation to the global information-theoreticoptimum can be obtained by a hierarchical infomax method. Starting from theinitial solution, an efficient algorithm based on gradient descent of the final ob-jective function is proposed to learn representations from the input datasets, andthe method works for complete, overcomplete, and undercomplete bases. As con-firmed by numerical experiments, our method is robust and highly efficient forextracting salient features from input datasets. Compared with the main existingmethods, our algorithm has a distinct advantage in both the training speed and therobustness of unsupervised representation learning. Furthermore, the proposedmethod is easily extended to the supervised or unsupervised model for trainingdeep structure networks.1 I NTRODUCTIONHow to discover the unknown structures in data is a key task for machine learning. Learning goodrepresentations from observed data is important because a clearer description may help reveal theunderlying structures. Representation learning has drawn considerable attention in recent years(Bengio et al., 2013). One category of algorithms for unsupervised learning of representations isbased on probabilistic models (Lewicki & Sejnowski, 2000; Hinton & Salakhutdinov, 2006; Leeet al., 2008), such as maximum likelihood (ML) estimation, maximum a posteriori (MAP) probabil-ity estimation, and related methods. Another category of algorithms is based on reconstruction erroror generative criterion (Olshausen & Field, 1996; Aharon et al., 2006; Vincent et al., 2010; Mairalet al., 2010; Goodfellow et al., 2014), and the objective functions usually involve squared errors withadditional constraints. Sometimes the reconstruction error or generative criterion may also have aprobabilistic interpretation (Olshausen & Field, 1997; Vincent et al., 2010).Shannon’s information theory is a powerful tool for description of stochastic systems and couldbe utilized to provide a characterization for good representations (Vincent et al., 2010). However,computational difficulties associated with Shannon’s mutual information (MI) (Shannon, 1948) havehindered its wider applications. The Monte Carlo (MC) sampling (Yarrow et al., 2012) is a conver-gent method for estimating MI with arbitrary accuracy, but its computational inefficiency makes itunsuitable for difficult optimization problems especially in the cases of high-dimensional input stim-uli and large population networks. Bell and Sejnowski (Bell & Sejnowski, 1995; 1997) have directlyapplied the infomax approach (Linsker, 1988) to independent component analysis (ICA) of data withindependent non-Gaussian components assuming additive noise, but their method requires that thenumber of outputs be equal to the number of inputs. 
The extensions of ICA to overcomplete orundercomplete bases incur increased algorithm complexity and difficulty in learning of parameters(Lewicki & Sejnowski, 2000; Kreutz-Delgado et al., 2003; Karklin & Simoncelli, 2011).1Published as a conference paper at ICLR 2017Since Shannon MI is closely related to ML and MAP (Huang & Zhang, 2017), the algorithms ofrepresentation learning based on probabilistic models should be amenable to information-theoretictreatment. Representation learning based on reconstruction error could be accommodated also byinformation theory, because the inverse of Fisher information (FI) is the Cram ́er-Rao lower boundon the mean square decoding error of any unbiased decoder (Rao, 1945). Hence minimizing thereconstruction error potentially maximizes a lower bound on the MI (Vincent et al., 2010).Related problems arise also in neuroscience. It has long been suggested that the real nervous sys-tems might approach an information-theoretic optimum for neural coding and computation (Barlow,1961; Atick, 1992; Borst & Theunissen, 1999). However, in the cerebral cortex, the number of neu-rons is huge, with about 105neurons under a square millimeter of cortical surface (Carlo & Stevens,2013). It has often been computationally intractable to precisely characterize information codingand processing in large neural populations.To address all these issues, we present a framework for unsupervised learning of representationsin a large-scale nonlinear feedforward model based on infomax principle with realistic biologicalconstraints such as neuron models with Poisson spikes. First we adopt an objective function basedon an asymptotic formula in the large population limit for the MI between the stimuli and the neuralpopulation responses (Huang & Zhang, 2017). Since the objective function is usually nonconvex,choosing a good initial value is very important for its optimization. Starting from an initial value, weuse a hierarchical infomax approach to quickly find a tentative global optimal solution for each layerby analytic methods. Finally, a fast convergence learning rule is used for optimizing the final objec-tive function based on the tentative optimal solution. Our algorithm is robust and can learn complete,overcomplete or undercomplete basis vectors quickly from different datasets. Experimental resultsshowed that the convergence rate of our method was significantly faster than other existing methods,often by an order of magnitude. More importantly, the number of output units processed by ourmethod can be very large, much larger than the number of inputs. As far as we know, no existingmodel can easily deal with this situation.2 M ETHODS2.1 A PPROXIMATION OF MUTUAL INFORMATION FOR NEURAL POPULATIONSSuppose the input xis aK-dimensional vector, x= (x1;;xK)T, the outputs of Nneurons aredenoted by a vector, r= (r1;;rN)T, where we assume Nis large, generally NK. Wedenote random variables by upper case letters, e.g., random variables XandR, in contrast to theirvector values xandr. The MI between XandRis defined by I(X;R) =Dlnp(xjr)p(x)Er;x, wherehir;xdenotes the expectation with respect to the probability density function (PDF) p(r;x).Our goal is to maxmize MI I(X;R)by finding the optimal PDF p(rjx)under some constraintconditions, assuming that p(rjx)is characterized by a noise model and activation functions f(x;n)with parameters nfor then-th neuron (n= 1;;N). In other words, we optimize p(rjx)bysolving for the optimal parameters n. 
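As a concrete, purely illustrative instance of such a parametrized response model p(r|x), the sketch below draws Poisson spike counts through linear filters followed by the modified sigmoidal tuning curve used later in Section 3, f(y) = 1/4 · (1 + exp(-α(y − b)))^{-2}. The filter matrix W, gain α, offset b, and the sign convention inside the exponential are placeholders or assumptions here, to be fixed by learning.

```python
import numpy as np

def sample_poisson_responses(x, W, alpha, b, rng=None):
    """x: (K,) stimulus; W: (K, N) filter weights (one column per neuron);
    alpha, b: tuning-curve gain and offset (placeholders)."""
    rng = np.random.default_rng() if rng is None else rng
    y = W.T @ x                                          # membrane potentials y_n = w_n^T x
    rate = 0.25 / (1.0 + np.exp(-alpha * (y - b)))**2    # modified sigmoidal tuning, Sec. 3
    return rng.poisson(rate)                             # r_n ~ Poisson(f(x; theta_n))
```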
Unfortunately, it is intractable in most cases to solve for theoptimal parameters that maximizes I(X;R). However, if p(x)andp(rjx)are twice continuouslydifferentiable for almost every x2RK, then for large Nwe can use an asymptotic formula toapproximate the true value of I(X;R)with high accuracy (Huang & Zhang, 2017):I(X;R)'IG=12lndetG(x)2ex+H(X), (1)where det ()denotes the matrix determinant and H(X) =hlnp(x)ixis the stimulus entropy,G(x) =J(x) +P(x), (2)J(x) =@2lnp(rjx)@x@xTrjx, (3)P(x) =@2lnp(x)@x@xT. (4)Assuming independent noises in neuronal responses, we have p(rjx) =QNn=1p(rnjx;n),and the Fisher information matrix becomes J(x)NPK1k=1kS(x;k), where S(x;k) =2Published as a conference paper at ICLR 2017D@lnp(rjx;k)@x@lnp(rjx;k)@xTErjxandk>0(k= 1;;K1) is the population density of param-eterk, withPK1k=1k= 1, and 1K1N(see Appendix A.1 for details). Since the cerebralcortex usually forms functional column structures and each column is composed of neurons with thesame properties (Hubel & Wiesel, 1962), the positive integer K1can be regarded as the number ofdistinct classes in the neural population.Therefore, given the activation function f(x;k), our goal becomes to find the optimal popula-tion distribution density kof parameter vector kso that the MI between the stimulus xand theresponse ris maximized. By Eq. (1), our optimization problem can be stated as follows:minimizeQG[fkg] =12hln (det ( G(x)))ix, (5)subject toK1Xk=1k= 1,k>0,8k= 1;;K1. (6)SinceQG[fkg]is a convex function of fkg(Huang & Zhang, 2017), we can readily find theoptimal solution for small Kby efficient numerical methods. For large K, however, finding anoptimal solution by numerical methods becomes intractable. In the following we will propose analternative approach to this problem. Instead of directly solving for the density distribution fkg, weoptimize the parameters fkgandfkgsimultaneously under a hierarchical infomax framework.2.2 H IERARCHICAL INFOMAXFor clarity, we consider neuron model with Poisson spikes although our method is easily applicableto other noise models. The activation function f(x;n)is generally a nonlinear function, such assigmoid and rectified linear unit (ReLU) (Nair & Hinton, 2010). We assume that the nonlinearfunction for the n-th neuron has the following form: f(x;n) =~f(yn;~n), whereyn=wTnx. (7)withwnbeing aK-dimensional weights vector, ~f(yn;~n)is a nonlinear function, n= (wTn;~Tn)Tand~nare the parameter vectors ( n= 1;;N).In general, it is very difficult to find the optimal parameters, n,n= 1;;N, for the followingreasons. First, the number of output neurons Nis very large, usually NK. Second, the activationfunctionf(x;n)is a nonlinear function, which usually leads to a nonconvex optimization problem.For nonconvex optimization problems, the selection of initial values often has a great influence onthe final optimization results. Our approach meets these challenges by making better use of the largenumber of neurons and by finding good initial values by a hierarchical infomax method.We divide the nonlinear transformation into two stages, mapping first from xtoyn(n= 1;;N),and then from ynto~f(yn;~n), whereyncan be regarded as the membrane potential of the n-thneuron, and ~f(yn;~n)as its firing rate. As with the real neurons, we assume that the membranepotential is corrupted by noise:Yn=Yn+Zn, (8)whereZn N0,2is a normal distribution with mean 0and variance 2. Then the meanmembrane potential of the k-th class subpopulation with Nk=Nkneurons is given byYk=1NkNkXn=1Ykn=Yk+Zk,k= 1;;K1, (9)ZkN(0; N1k2). 
(10)Define vectors y= (y1;;yN)T, y= (y1;;yK1)Tandy= (y1;;yK1)T, whereyk=wTkx(k= 1;;K1). Notice that yn(n= 1;;N) is also divided into K1classes, the sameas forrn. If we assume f(x;k) = ~f(yk;~k), i.e. assuming an additive Gaussian noise for yn(see Eq. 9), then the random variables X,Y,Y,YandRform a Markov chain, denoted byX!Y!Y!Y!R(see Figure 1), and we have the following proposition (see AppendixA.2).3Published as a conference paper at ICLR 2017X Y R Y YW X Y + Z( T1/N k f( )Yxxxy-y-y-ym1(ymNk(ymNkK1rmNkymi(rN1 yN1(yN(rN yNyni(yn1(yn1yniymiym1yN1ri yiy1yi(y1(rnirn1rmirm1r11kKk1Figure 1: A neural network interpretaton for random variables X,Y,Y,Y,R.Proposition 1. With the random variables X,Y,Y,Y,Rand Markov chain X!Y!Y!Y!R, the following equations hold,I(X;R) =I(Y;R)I(Y;R)I(Y;R), (11)I(X;R)I(X;Y) =I(X;Y)I(X;Y), (12)and for large Nk(k= 1;;K1),I(Y;R)'I(Y;R)'I(Y;R) =I(X;R), (13)I(X;Y)'I(X;Y) =I(X;Y). (14)A major advantage of incorporating membrane noise is that it facilitates finding the optimal solutionby using the infomax principle. Moreover, the optimal solution obtained this way is more robust;that is, it discourages overfitting and has a strong ability to resist distortion. With vanishing noise2!0, we have Yk!Yk,~f(yk;~k)'~f(yk;~k) =f(x;k), so that Eqs. (13) and (14) hold asin the case of large Nk.To optimize MI I(Y;R), the probability distribution of random variable Y,p(y), needs to be de-termined, i.e. maximizing I(Y;R)aboutp(y)under some constraints should yield an optimaldistribution: p(y) = arg max p(y)I(Y;R). LetC= maxp(y)I(Y;R)be the channel capacity ofneural population coding, and we always have I(X;R)C (Huang & Zhang, 2017). To find asuitable linear transformation from XtoYthat is compatible with this distribution p(y), a reason-able choice is to maximize I(X;Y) (I(X;Y)), where Yis a noise-corrupted version of Y. Thisimplies minimum information loss in the first transformation step. However, there may exist manytransformations from XtoYthat maximize I(X;Y)(see Appendix A.3.1). Ideally, if we can finda transformation that maximizes both I(X;Y)andI(Y;R)simultaneously, then I(X;R)reachesits maximum value: I(X;R) = maxp(y)I(Y;R) =C.From the discussion above we see that maximizing I(X;R)can be divided into two steps,namely, maximizing I(X;Y)and maximizing I(Y;R). The optimal solutions of maxI(X;Y)andmaxI(Y;R)will provide a good initial approximation that tend to be very close to the optimalsolution of maxI(X;R).Similarly, we can extend this method to multilayer neural population networks. For example, a two-layer network with outputs R(1)andR(2)form a Markov chain, X!~R(1)!R(1)!R(1)!4Published as a conference paper at ICLR 2017R(2), where random variable ~R(1)is similar to Y, random variable R(1)is similar to Y, and R(1)is similar to Yin the above. Then we can show that the optimal solution of maxI(X;R(2))canbe approximated by the solutions of maxI(X;R(1))andmaxI(~R(1);R(2)), withI(~R(1);R(2))'I(R(1);R(2)).More generally, consider a highly nonlinear feedforward neural network that maps the input xtooutput z, with z=F(x;) =hLh1(x), wherehl(l= 1;;L) is a linear or nonlinearfunction (Montufar et al., 2014). We aim to find the optimal parameter by maximizing I(X;Z). 
Itis usually difficult to solve the optimization problem when there are many local extrema for F(x;).However, if each function hlis easy to optimize, then we can use the hierarchical infomax methoddescribed above to get a good initial approximation to its global optimization solution, and go fromthere to find the final optimal solution. This information-theoretic consideration from the neuralpopulation coding point of view may help explain why deep structure networks with unsupervisedpre-training have a powerful ability for learning representations.2.3 T HEOBJECTIVE FUNCTIONThe optimization processes for maximizing I(X;Y)and maximizing I(Y;R)are discussed in detailin Appendix A.3. First, by maximizing I(X;Y)(see Appendix A.3.1 for details), we can get theoptimal weight parameter wk(k= 1;;K1, see Eq. 7) and its population density k(see Eq. 6)which satisfyW= [w1;;wK1] =aU01=20C, (15)1==K1=K11, (16)wherea=qK1K10,C= [c1;;cK1]2RK0K1,CCT=IK0,IK0is aK0K0identitymatrix with integer K02[1;K], the diagonal matrix 02RK0K0and matrix U02RKK0aregiven in (A.44) and (A.45), with K0given by Eq. (A.52). Matrices 0andU0can be obtainedbyandUwithUT0U0=IK0andU00UT0UUTxxTx(see Eq. A.23). Theoptimal weight parameter wk(15) means that the input variable xmust first undergo a whitening-like transformation ^ x=1=20UT0x, and then goes through the transformation y=aCT^ x, withmatrix Cto be optimized below. Note that weight matrix Wsatisfies rank(W) = min(K0;K1),which is a low rank matrix, and its low dimensionality helps reduce overfitting during training (seeAppendix A.3.1).By maximizing I(Y;R)(see Appendix A.3.2), we further solve the the optimal parameters ~kforthe nonlinear functions ~f(yk;~k),k= 1;;K1. Finally, the objective function for our optimiza-tion problem (Eqs. 5 and 6) turns into (see Appendix A.3.3 for details):minimizeQ[C] =12DlndetC^CTE^ x, (17)subject to CCT=IK0, (18)where ^= diag(^y1)2;;(^yK1)2,(^yk) =a1j@gk(^yk)=@^ykj(k= 1;;K1),gk(^yk) =2q~f(^yk;~k),^yk=a1yk=cTk^ x, and^ x=1=20UT0x. We apply the gradient descent method tooptimize the objective function, with the gradient of Q[C]given by:dQ[C]dC=C^CT1C^+^ x!T^ x, (19)where!= (!1;;!K1)T,!k=(^yk)0(^yk)cTkC^CT1ck,k= 1;;K1.WhenK0=K1(orK0> K 1), the objective function Q[C]can be reduced to a simpler form,and its gradient is also easy to compute (see Appendix A.4.1). However, when K0< K 1, it iscomputationally expensive to update Cby applying the gradient of Q[C]directly, since it requiresmatrix inversion for every ^ x. We use another objective function ^Q[C](see Eq. A.118) which is anapproximation to Q[C], but its gradient is easier to compute (see Appendix A.4.2). The function5Published as a conference paper at ICLR 2017^Q[C]is the approximation of Q[C], ideally they have the same optimal solution for the parameterC.Usually, for optimizing the objective in Eq. 17, the orthogonality constraint (Eq. 18) is unnecessary.However, this orthogonality constraint can accelerate the convergence rate if we employ it for theinitial iteration to update C(see Appendix A.5).3 E XPERIMENTAL RESULTSWe have applied our methods to the natural images from Olshausen’s image dataset (Olshausen &Field, 1996) and the images of handwritten digits from MNIST dataset (LeCun et al., 1998) usingMatlab 2016a on a computer with 12 Intel CPU cores (2.4 GHz). The gray level of each raw imagewas normalized to the range of 0to1.Mimage patches with size ww=Kfor training wererandomly sampled from the images. 
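The learning step behind Algorithms 1–2 above is the constrained update of Eqs. (A.90)–(A.91) applied to the gradient in Eq. (19). A condensed sketch, in which the caller supplies `grad_Q` (the gradient dQ/dC), the sign convention is chosen for minimizing Q, and a QR factorization stands in for the Gram–Schmidt re-orthonormalization mentioned in the text.

```python
import numpy as np

def update_filters(C, grad_Q, lr):
    """One constrained gradient step.  C: (K0, K1) with C C^T = I_K0;
    grad_Q: (K0, K1) gradient dQ/dC; lr: learning rate eta_t."""
    dC = -grad_Q + C @ grad_Q.T @ C        # tangent-space direction (cf. Eq. A.91)
    C_new = C + lr * dC                    # Eq. (A.90)
    Q, _ = np.linalg.qr(C_new.T)           # re-orthonormalize the rows of C
    return Q.T                             # (K0, K1) with orthonormal rows again
```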
We used the Poisson neuron model with a modified sigmoidaltuning function ~f(y;~) =14(1+exp(yb))2, withg(y) = 2q~f(y;~) =11+exp(yb), where~= (;b)T. We obtained the initial values (see Appendix A.3.2): b0= 0and01:81qK1K10.For our experiments, we set = 0:50for iteration epoch t= 1;;t0and=0fort=t0+ 1;;tmax, wheret0= 50 .Firstly, we tested the case of K=K0=K1= 144 and randomly sampled M= 105image patcheswith size 1212from the Olshausen’s natural images, assuming that N= 106neurons were dividedintoK1= 144 classes and= 1(see Eq. A.52 in Appendix). The input patches were preprocessedby the ZCA whitening filters (see Eq. A.68). To test our algorithms, we chose the batch size to beequal to the number of training samples M, although we could also choose a smaller batch size. Weupdated the matrix Cfrom a random start, and set parameters tmax= 300 ,v1= 0:4, and= 0:8for all experiments.In this case, the optimal solution Clooked similar to the optimal solution of IICA (Bell & Sejnowski,1997). We also compared with the fast ICA algorithm (FICA) (Hyv ̈arinen, 1999), which is fasterthan IICA. We also tested the restricted Boltzmann machine (RBM) (Hinton et al., 2006) for aunsupervised learning of representations, and found that it could not easily learn Gabor-like filtersfrom Olshausen’s image dataset as trained by contrastive divergence. However, an improved methodby adding a sparsity constraint on the output units, e.g., sparse RBM (SRBM) (Lee et al., 2008) orsparse autoencoder (Hinton, 2010), could attain Gabor-like filters from this dataset. Similar resultswith Gabor-like filters were also reproduced by the denoising autoencoders (Vincent et al., 2010),which method requires a careful choice of parameters, such as noise level, learning rate, and batchsize.In order to compare our methods, i.e. Algorithm 1 (Alg.1, see Appendix A.4.1) and Algorithm2 (Alg.2, see Appendix A.4.2), with other methods, i.e. IICA, FICA and SRBM, we implementedthese algorithms using the same initial weights and the same training data set (i.e. 105image patchespreprocessed by the ZCA whitening filters). To get a good result by IICA, we must carefully selectthe parameters; we set the batch size as 50, the initial learning rate as 0:01, and final learning rateas0:0001 , with an exponential decay with the epoch of iterations. IICA tends to have a fasterconvergence rate for a bigger batch size but it may become harder to escape local minima. ForFICA, we chose the nonlinearity function f(u) = log cosh( u)as contrast function, and for SRBM,we set the sparseness control constant pas0:01and0:03. The number of epoches for iterations wasset to 300for all algorithms. Figure 2 shows the filters learned by our methods and other methods.Each filter in Figure 2(a) corresponds to a column vector of matrix C(see Eq. A.69), where eachvector for display is normalized by ck ck=max(jc1;kj;;jcK;kj),k= 1;;K1. The resultsin Figures 2(a), 2(b) and 2(c) look very similar to one another, and slightly different from the resultsin Figure 2(d) and 2(e). There are no Gabor-like filters in Figure 2(f), which corresponds to SRBMwithp= 0:03.Figure 3 shows how the coefficient entropy (CFE) (see Eq. A.122) and the conditional entropy(CDE) (see Eq. A.125) varied with training time. We calculated CFE and CDE by sampling onceevery 10epoches from a total of 300epoches. These results show that our algorithms had a fastconvergence rate towards stable solutions while having CFE and CDE values similar to the algorithmof IICA, which converged much more slowly. 
Here the values of CFE and CDE should be as small6Published as a conference paper at ICLR 2017(a) (b) (c)(d) (e) (f)Figure 2: Comparison of filters obtained from 105natural image patches of size 12 12 by ourmethods (Alg.1 and Alg.2) and other methods. The number of output filters was K1= 144 . (a):Alg.1. ( b): Alg.2. ( c): IICA. ( d): FICA. ( e): SRBM (p= 0:01). (f): SRBM (p= 0:03).100101102time (seconds)1.81.851.91.952coefficient entropy (bits)Alg.1Alg.2IICAFICASRBM (p = 0.01)SRBM (p = 0.03)(a)100101102time (seconds)-400-350-300-250-200-150conditional entropy (bits)Alg.1Alg.2IICA (b)100101102time (seconds)-200-1000100200300conditional entropy (bits)SRBM (p = 0.01)SRBM (p = 0.03)SRBM (p = 0.05)SRBM (p = 0.10) (c)Figure 3: Comparison of quantization effects and convergence rate by coefficient entropy (seeA.122) and conditional entropy (see A.125) corresponding to training results (filters) shown in Fig-ure 2. The coefficient entropy (panel a) and conditional entropy (panel bandc) are shown as afunction of training time on a logarithmic scale. All experiments run on the same machine usingMatlab. Here we sampled once every 10epoches out of a total of 300epoches. We set epoch numbert0= 50 for Alg.1 and Alg.2 and the start time to 1second.as possible for a good representation learned from the same data set. Here we set epoch numbert0= 50 in our algorithms (see Alg.1 and Alg.2), and the start time was set to 1second. Thisexplains the step seen in Figure 3 (b) for Alg.1 and Alg.2 since the parameter was updated whenepoch number t=t0. FICA had a convergence rate close to our algorithms but had a big CFE,which is reflected by the quality of the filter results in Figure 2. The convergence rate and CFE forSRBM were close to IICA, but SRBM had a much bigger CDE than IICA, which implies that theinformation had a greater loss when passing through the system optimized by SRBM than by IICAor our methods.7Published as a conference paper at ICLR 2017From Figure 3(c) we see that the CDE (or MI I(X;R), see Eq. A.124 and A.125) decreases (orincreases) with the increase of the value of the sparseness control constant p. Note that a smallerpmeans sparser outputs. Hence, in this sense, increasing sparsity may result in sacrificing someinformation. On the other hand, a weak sparsity constraint may lead to failure of learning Gabor-like filters (see Figure 2(f)), and increasing sparsity has an advantage in reducing the impact ofnoise in many practical cases. Similar situation also occurs in sparse coding (Olshausen & Field,1997), which provides a class of algorithms for learning overcomplete dictionary representations ofthe input signals. However, its training is time consuming due to its expensive computational cost,although many new training algorithms have emerged (e.g. Aharon et al., 2006; Elad & Aharon,2006; Lee et al., 2006; Mairal et al., 2010). See Appendix A.5 for additional experimental results.4 C ONCLUSIONSIn this paper, we have presented a framework for unsupervised learning of representations via in-formation maximization for neural populations. Information theory is a powerful tool for machinelearning and it also provides a benchmark of optimization principle for neural information pro-cessing in nervous systems. Our framework is based on an asymptotic approximation to MI for alarge-scale neural population. To optimize the infomax objective, we first use hierarchical infomaxto obtain a good approximation to the global optimal solution. 
Analytical solutions of the hierarchi-cal infomax are further improved by a fast convergence algorithm based on gradient descent. Thismethod allows us to optimize highly nonlinear neural networks via hierarchical optimization usinginfomax principle.From the viewpoint of information theory, the unsupervised pre-training for deep learning (Hinton &Salakhutdinov, 2006; Bengio et al., 2007) may be reinterpreted as a process of hierarchical infomax,which might help explain why unsupervised pre-training helps deep learning (Erhan et al., 2010). Inour framework, a pre-whitening step can emerge naturally by the hierarchical infomax, which mightalso explain why a pre-whitening step is useful for training in many learning algorithms (Coateset al., 2011; Bengio, 2012).Our model naturally incorporates a considerable degree of biological realism. It allows the opti-mization of a large-scale neural population with noisy spiking neurons while taking into account ofmultiple biological constraints, such as membrane noise, limited energy, and bounded connectionweights. We employ a technique to attain a low-rank weight matrix for optimization, so as to reducethe influence of noise and discourage overfitting during training. In our model, many parametersare optimized, including the population density of parameters, filter weight vectors, and parametersfor nonlinear tuning functions. Optimizing all these model parameters could not be easily done bymany other methods.Our experimental results suggest that our method for unsupervised learning of representations hasobvious advantages in its training speed and robustness over the main existing methods. Our modelhas a nonlinear feedforward structure and is convenient for fast learning and inference. This simpleand flexible framework for unsupervised learning of presentations should be readily extended totraining deep structure networks. In future work, it would interesting to use our method to train deepstructure networks with either unsupervised or supervised learning.ACKNOWLEDGMENTSWe thank Prof. Honglak Lee for sharing Matlab code for algorithm comparison, Prof. Shan Tan fordiscussions and comments and Kai Liu for helping draw Figure 1. Supported by grant NIH-NIDCDR01 DC013698.REFERENCESAharon, M., Elad, M., & Bruckstein, A. (2006). K-SVD: An algorithm for designing overcompletedictionaries for sparse representation. Signal Processing, IEEE Transactions on , 54(11), 4311–4322.8Published as a conference paper at ICLR 2017Amari, S. (1999). Natural gradient learning for over- and under-complete bases in ica. NeuralComput. , 11(8), 1875–1883.Atick, J. J. (1992). Could information theory provide an ecological theory of sensory processing?Network: Comp. Neural. , 3(2), 213–251.Barlow, H. B. (1961). Possible principles underlying the transformation of sensory messages. Sen-sory Communication , (pp. 217–234).Bell, A. J. & Sejnowski, T. J. (1995). An information-maximization approach to blind separationand blind deconvolution. Neural Comput. , 7(6), 1129–1159.Bell, A. J. & Sejnowski, T. J. (1997). The ”independent components” of natural scenes are edgefilters. Vision Res. , 37(23), 3327–3338.Bengio, Y . (2012). Deep learning of representations for unsupervised and transfer learning. Unsu-pervised and Transfer Learning Challenges in Machine Learning , 7, 19.Bengio, Y ., Courville, A., & Vincent, P. (2013). Representation learning: A review and new per-spectives. 
Pattern Analysis and Machine Intelligence, IEEE Transactions on , 35(8), 1798–1828.Bengio, Y ., Lamblin, P., Popovici, D., Larochelle, H., et al. (2007). Greedy layer-wise training ofdeep networks. Advances in neural information processing systems , 19, 153.Borst, A. & Theunissen, F. E. (1999). Information theory and neural coding. Nature neuroscience ,2(11), 947–957.Carlo, C. N. & Stevens, C. F. (2013). Structural uniformity of neocortex, revisited. Proceedings ofthe National Academy of Sciences , 110(4), 1488–1493.Coates, A., Ng, A. Y ., & Lee, H. (2011). An analysis of single-layer networks in unsupervisedfeature learning. In International conference on artificial intelligence and statistics (pp. 215–223).Cortes, C. & Vapnik, V . (1995). Support-vector networks. Machine learning , 20(3), 273–297.Cover, T. M. & Thomas, J. A. (2006). Elements of Information, 2nd Edition . New York: Wiley-Interscience.Edelman, A., Arias, T. A., & Smith, S. T. (1998). The geometry of algorithms with orthogonalityconstraints. SIAM J. Matrix Anal. Appl. , 20(2), 303–353.Elad, M. & Aharon, M. (2006). Image denoising via sparse and redundant representations overlearned dictionaries. Image Processing, IEEE Transactions on , 15(12), 3736–3745.Erhan, D., Bengio, Y ., Courville, A., Manzagol, P.-A., Vincent, P., & Bengio, S. (2010). Why doesunsupervised pre-training help deep learning? The Journal of Machine Learning Research , 11,625–660.Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., &Bengio, Y . (2014). Generative adversarial nets. In Advances in Neural Information ProcessingSystems (pp. 2672–2680).Hinton, G. (2010). A practical guide to training restricted boltzmann machines. Momentum , 9(1),926.Hinton, G., Osindero, S., & Teh, Y .-W. (2006). A fast learning algorithm for deep belief nets. Neuralcomputation , 18(7), 1527–1554.Hinton, G. E. & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neuralnetworks. Science , 313(5786), 504–507.Huang, W. & Zhang, K. (2017). Information-theoretic bounds and approximations in neural popu-lation coding. Neural Comput, submitted, URL https://arxiv.org/abs/1611.01414 .9Published as a conference paper at ICLR 2017Hubel, D. H. & Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional archi-tecture in the cat’s visual cortex. The Journal of physiology , 160(1), 106–154.Hyv ̈arinen, A. (1999). Fast and robust fixed-point algorithms for independent component analysis.Neural Networks, IEEE Transactions on , 10(3), 626–634.Karklin, Y . & Simoncelli, E. P. (2011). Efficient coding of natural images with a population of noisylinear-nonlinear neurons. In Advances in neural information processing systems , volume 24 (pp.999–1007).Konstantinides, K. & Yao, K. (1988). Statistical analysis of effective singular values in matrix rankdetermination. Acoustics, Speech and Signal Processing, IEEE Transactions on , 36(5), 757–763.Kreutz-Delgado, K., Murray, J. F., Rao, B. D., Engan, K., Lee, T. S., & Sejnowski, T. J. (2003).Dictionary learning algorithms for sparse representation. Neural computation , 15(2), 349–396.LeCun, Y ., Bottou, L., Bengio, Y ., & Haffner, P. (1998). Gradient-based learning applied to docu-ment recognition. Proceedings of the IEEE , 86(11), 2278–2324.Lee, H., Battle, A., Raina, R., & Ng, A. Y . (2006). Efficient sparse coding algorithms. In Advancesin neural information processing systems (pp. 801–808).Lee, H., Ekanadham, C., & Ng, A. Y . (2008). 
Sparse deep belief net model for visual area v2. InAdvances in neural information processing systems (pp. 873–880).Lewicki, M. S. & Olshausen, B. A. (1999). Probabilistic framework for the adaptation and compar-ison of image codes. JOSA A , 16(7), 1587–1601.Lewicki, M. S. & Sejnowski, T. J. (2000). Learning overcomplete representations. Neural compu-tation , 12(2), 337–365.Linsker, R. (1988). Self-Organization in a perceptual network. Computer , 21(3), 105–117.Mairal, J., Bach, F., Ponce, J., & Sapiro, G. (2009). Online dictionary learning for sparse coding.InProceedings of the 26th annual international conference on machine learning (pp. 689–696).:ACM.Mairal, J., Bach, F., Ponce, J., & Sapiro, G. (2010). Online learning for matrix factorization andsparse coding. The Journal of Machine Learning Research , 11, 19–60.Montufar, G. F., Pascanu, R., Cho, K., & Bengio, Y . (2014). On the number of linear regions of deepneural networks. In Advances in Neural Information Processing Systems (pp. 2924–2932).Nair, V . & Hinton, G. E. (2010). Rectified linear units improve restricted boltzmann machines. InProceedings of the 27th International Conference on Machine Learning (ICML-10) (pp. 807–814).Olshausen, B. A. & Field, D. J. (1996). Emergence of simple-cell receptive field properties bylearning a sparse code for natural images. Nature , 381(6583), 607–609.Olshausen, B. A. & Field, D. J. (1997). Sparse coding with an overcomplete basis set: A strategyemployed by v1? Vision Res. , 37(23), 3311–3325.Rao, C. R. (1945). Information and accuracy attainable in the estimation of statistical parameters.Bulletin of the Calcutta Mathematical Society , 37(3), 81–91.Shannon, C. (1948). A mathematical theory of communications. Bell System Technical Journal , 27,379–423 and 623–656.Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout:A simple way to prevent neural networks from overfitting. The Journal of Machine LearningResearch , 15(1), 1929–1958.10Published as a conference paper at ICLR 2017Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y ., & Manzagol, P.-A. (2010). Stacked denoisingautoencoders: Learning useful representations in a deep network with a local denoising criterion.The Journal of Machine Learning Research , 11, 3371–3408.Yarrow, S., Challis, E., & Series, P. (2012). Fisher and shannon information in finite neural popula-tions. Neural computation , 24(7), 1740–1780.APPENDIXA.1 F ORMULAS FOR APPROXIMATION OF MUTUAL INFORMATIONIt follows from I(X;R) =Dlnp(xjr)p(x)Er;xand Eq. (1) that the conditional entropy should read:H(XjR) =hlnp(xjr)ir;x'12lndetG(x)2ex. (A.1)The Fisher information matrix J(x)(see Eq. 3), which is symmetric and positive semidefinite, canbe written also asJ(x) =@lnp(rjx)@x@lnp(rjx)@xTrjx. (A.2)If we suppose p(rjx)is conditional independent, namely, p(rjx) =QNn=1p(rnjx;n), then wehave (see Huang & Zhang, 2017)J(x) =NZp()S(x;)d, (A.3)S(x;) =@lnp(rjx;)@x@lnp(rjx;)@xTrjx, (A.4)wherep()is the population density function of parameter ,p() =1NNXn=1(n), (A.5)and()denotes the Dirac delta function. It can be proved that the approximation function of MIIG[p()](Eq. 1) is concave about p()(Huang & Zhang, 2017). In Eq. (A.3), we can approximatethe continuous integral by a discrete summation for numerical computation,J(x)NK1Xk=1kS(x;k), (A.6)wherePK1k=1k= 1,k>0,k= 1;;K1,1K1N.For Poisson neuron model, by Eq. 
(A.4) we have (see Huang & Zhang, 2017)p(rjx;) =f(x;)rr!exp (f(x;)), (A.7)S(x;) =1f(x;)@f(x;)@x@f(x;)@xT=@g(x;)@x@g(x;)@xT, (A.8)wheref(x;)0is the activation function (mean response) of neuron andg(x;) = 2pf(x;). (A.9)11Published as a conference paper at ICLR 2017Similarly, for Gaussian noise model, we havep(rjx;) =1p2exp (rf(x;))222!, (A.10)S(x;) =12@f(x;)@x@f(x;)@xT, (A.11)where>0denotes the standard deviation of noise.Sometimes we do not know the specific form of p(x)and only know Msamples, x1,,xM,which are independent and identically distributed (i.i.d.) samples drawn from the distribution p(x).Then we can use the empirical average to approximate the integral in Eq. (1):IG12MXm=1ln (det ( G(xm))) +H(X). (A.12)A.2 P ROOF OF PROPOSITION 1Proof. It follows from the data-processing inequality (Cover & Thomas, 2006) thatI(X;R)I(Y;R)I(Y;R)I(Y;R), (A.13)I(X;R)I(X;Y)I(X;Y)I(X;Y). (A.14)Sincep(ykjx) =p(yk1;;ykNkjx) =N(wTkx; N1k2),k= 1;;K1, (A.15)we havep( yjx) =p( yjx), (A.16)p( y) =p( y), (A.17)I(X;Y) =I(X;Y). (A.18)Hence, by (A.14) and (A.18), expression (12) holds.On the other hand, when Nkis large, from Eq. (10) we know that the distribution of Zk, namely,N0,N1k2, approaches a Dirac delta function (zk). Then by (7) and (9) we have p(rj y)'p(rjy) =p(rjx)andI(X;R) =I(Y;R)lnp(rjy)p(rjx)r;x=I(Y;R), (A.19)I(Y;R) =I(Y;R)lnp(rj y)p(rjy)r;y; y'I(Y;R), (A.20)I(Y;R) =I(Y;R)lnp(rj y)p(rjy)r;y; y'I(Y;R), (A.21)I(X;Y) =I(X;Y)lnp(xj y)p(xjy)x;y; y'I(X;Y). (A.22)It follows from (A.13) and (A.19) that (11) holds. Combining (11), (12) and (A.20)–(A.22), weimmediately get (13) and (14). This completes the proof of Proposition 1 . A.3 H IERARCHICAL OPTIMIZATION FOR MAXIMIZING I(X;R)In the following, we will discuss the optimization procedure for maximizing I(X;R)in two stages:maximizing I(X;Y)and maximizing I(Y;R).12Published as a conference paper at ICLR 2017A.3.1 T HE1STSTAGEIn the first stage, our goal is to maximize the MI I(X;Y)and get the optimal parameters wk(k= 1;;K1). Assume that the stimulus xhas zero mean (if not, let x xhxix) andcovariance matrix x. It follows from eigendecomposition thatx=xxTx1M1XXT=UUT, (A.23)where X= [x1,,xM],U= [u1;;uK]2RKKis an unitary orthogonal matrix and =diag21;;2Kis a positive diagonal matrix with 1K>0. Define~ x=1=2UTx, (A.24)~ wk=1=2UTwk, (A.25)yk=~ wTk~ x, (A.26)wherek= 1;;K1. The covariance matrix of ~ xis given by~ x=D~ x~ xTE~ xIK, (A.27)andIKis aKKidentity matrix. From (1) and (A.11) we have I(X;Y) =I(~X;Y)andI(~X;Y)'I0G=12ln det ~G2e!!+H(~X), (A.28)~GN2K1Xk=1k~ wk~ wTk+IK. (A.29)The following approximations are useful (see Huang & Zhang, 2017):p(~ x)N (0;IK), (A.30)P(~ x) =@2lnp(~ x)@~ x@~ xTIK. (A.31)By the central limit theorem, the distribution of random variable ~Xis closer to a normal distribu-tion than the distribution of the original random variable X. On the other hand, the PCA modelsassume multivariate gaussian data whereas the ICA models assume multivariate non-gaussian data.Hence by a PCA-like whitening transformation (A.24) we can use the approximation (A.31) withthe Laplace’s method of asymptotic expansion, which only requires that the peak be close to itsmean while random variable ~Xneeds not be exactly Gaussian.Without any constraints on the Gaussian channel of neural populations, especially the peak firingrates, the capacity of this channel may grow indefinitely: I(~X;Y)! 1 . The most commonconstraint on the neural populations is an energy or power constraint which can also be regarded asa signal-to-noise ratio (SNR) constraint. 
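Given a batch of matrices G(x_m) (for example, Fisher information from the Poisson expression A.8 or the Gaussian one A.11, plus the prior term), the sample-average approximation of Eq. (A.12) can be evaluated directly. Below is a minimal sketch that treats the stimulus entropy H(X) as a known constant and keeps the 2πe normalization of Eq. (1); whether that constant is folded into (A.12) itself is an assumption of this sketch.

```python
import numpy as np

def approx_mutual_information(G_samples, H_X=0.0):
    """G_samples: list of (K, K) positive-definite matrices G(x_m); H_X: stimulus entropy."""
    K = G_samples[0].shape[0]
    logdets = [np.linalg.slogdet(G)[1] for G in G_samples]          # stable log-determinants
    return 0.5 * np.mean(logdets) - 0.5 * K * np.log(2 * np.pi * np.e) + H_X
```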
The SNR for the output ynof the n-th neuron is given bySNRn=12DwTnx2Ex12~ wTn~ wn,n= 1;;N. (A.32)We require that1NNXn=1SNRn12K1Xk=1k~ wTk~ wk, (A.33)whereis a positive constant. Then by Eq. (A.28), (A.29) and (A.33), we have the followingoptimization problem:minimizeQ0G[^W] =12lndetN2^W^WT+IK, (A.34)subject toh= Tr^W^WTE0, (A.35)13Published as a conference paper at ICLR 2017where Tr ()denotes matrix trace and^W=~WA1=2=1=2UTWA1=2= [^ w1;;^ wK1], (A.36)A= diag (1;;K1), (A.37)W= [w1;;wK1], (A.38)~W= [~ w1;;~ wK1], (A.39)E=2. (A.40)HereEis a constant that does not affect the final optimal solution so we set E= 1. Then we obtainan optimal solution as follows:W=aU01=20VT0, (A.41)A=K11IK1, (A.42)a=qEK1K10=qK1K10, (A.43)0= diag21;;2K0, (A.44)U0=U(:;1:K0)2RKK0, (A.45)V0=V(:;1:K0)2RK1K0, (A.46)where V= [v1;;vK1]is anK1K1unitary orthogonal matrix, parameter K0represents thesize of the reduced dimension ( 1K0K), and its value will be determined below. Now theoptimal parameters wn(n= 1;;N) are clustered into K1classes (see Eq. A.6) and obey anuniform discrete distribution (see also Eq. A.60 in Appendix A.3.2).WhenK=K0=K1, the optimal solution of Win Eq. (A.41) is a whitening-like filter. WhenV=IK, the optimal matrix Wis the principal component analysis (PCA) whitening filters. In thesymmetrical case with V=U, the optimal matrix Wbecomes a zero component analysis (ZCA)whitening filter. If K <K 1, this case leads to an overcomplete solution, whereas when K0K1<K, the undercomplete solution arises. Since K0K1andK0K,Q0Gachieves its minimumwhenK0=K. However, in practice other factors may prevent it from reaching this minimum. Forexample, consider the average of squared weights,&=K1Xk=1kkwkk2= TrWAWT=EK0K0Xk=12k, (A.47)wherekkdenotes the Frobenius norm. The value of &is extremely large when any kbecomesvanishingly small. For real neurons these weights of connection are not allowed to be too large.Hence we impose a limitation on the weights: &E1, whereE1is a positive constant. This yieldsanother constraint on the objective function,~h=EK0K0Xk=12kE10. (A.48)From (A.35) and (A.48) we get the optimal K0= arg max ~K0E~K10P~K0k=12k. By this con-straint, small values of 2kwill often result in K0<K and a low-rank matrix W(Eq. A.41).On the other hand, the low-rank matrix Wcan filter out the noise of stimulus x. Consider thetransformation Y=WTXwithX= [x1,,xM]andY= [y1,,yM]forMsamples. Itfollows from the singular value decomposition (SVD) of XthatX=US~VT, (A.49)where Uis given in (A.23), ~Vis aMMunitary orthogonal matrix, Sis aKMdiagonal matrixwith non-negative real numbers on the diagonal, Sk;k=pM1k(k= 1;;K,KM), andSST= (M1). LetX=pM1U01=20~VT0X, (A.50)14Published as a conference paper at ICLR 2017where ~V0=~V(:;1:K0)2RMK0,0andU0are given in (A.44) and (A.45), respectively. ThenY=WTX=aV01=20UT0US~VT=WTX=apM1V0~VT0, (A.51)where Xcan be regarded as a denoised version of X. The determination of the effective rankK0Kof the matrix Xby using singular values is based on various criteria (Konstantinides &Yao, 1988). Here we choose K0as follows:K0= arg minK000@vuutPK00k=12kPKk=12k1A, (A.52)whereis a positive constant ( 0<1).Another advantage of a low-rank matrix Wis that it can significantly reduce overfitting for learningneural population parameters. In practice, the constraint (A.47) is equivalent to a weight-decay reg-ularization term used in many other optimization problems (Cortes & Vapnik, 1995; Hinton, 2010),which can reduce overfitting to the training data. 
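The first-stage preprocessing combines the eigendecomposition of Eq. (A.23), the reduced whitening map x̂ = Λ0^{-1/2} U0^T x of Eq. (A.66), and the effective-rank rule of Eq. (A.52). The sketch below follows one reading of that (OCR-damaged) rule — K0 is the smallest value for which √(Σ_{k≤K0} λ_k² / Σ_k λ_k²) ≥ η — with an assumed numerical guard `eps`.

```python
import numpy as np

def first_stage_whitening(X, eta=0.98, eps=1e-12):
    """X: (K, M) zero-mean data, one sample per column.  Returns the reduced
    whitened data x_hat, the leading eigenvectors/eigenvalues, and K0."""
    Sigma = X @ X.T / (X.shape[1] - 1)             # sample covariance, Eq. (A.23)
    lam, U = np.linalg.eigh(Sigma)
    lam, U = lam[::-1], U[:, ::-1]                 # sort eigenvalues in descending order
    ratio = np.sqrt(np.cumsum(lam) / np.sum(lam))
    K0 = int(np.searchsorted(ratio, eta) + 1)      # effective rank, Eq. (A.52) as read above
    U0, lam0 = U[:, :K0], lam[:K0]
    X_hat = np.diag(1.0 / np.sqrt(lam0 + eps)) @ U0.T @ X   # Eq. (A.66)
    return X_hat, U0, lam0, K0
```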
To prevent the neural networks from overfitting,Srivastava et al. (2014) presented a technique to randomly drop units from the neural network dur-ing training, which may in fact be regarded as an attempt to reduce the rank of the weight matrixbecause the dropout can result in a sparser weights (lower rank matrix). This means that the updateis only concerned with keeping the more important components, which is similar to first performinga denoising process by the SVD low rank approximation.In this stage, we have obtained the optimal parameter W(see A.41). The optimal value of matrixV0can also be determined, as shown in Appendix A.3.3.A.3.2 T HE2NDSTAGEFor this stage, our goal is to maximize the MI I(Y;R)and get the optimal parameters ~k,k= 1;;K1. Here the input is y= (y1;;yK1)Tand the output r= (r1;;rN)Tisalso clustered into K1classes. The responses of Nkneurons in the k-th subpopulation obey a Pois-son distribution with mean ~f(eTky;~k), where ekis a unit vector with 1in thek-th element andyk=eTky. By (A.24) and (A.26), we havehykiyk= 0, (A.53)2yk=y2kyk=k~ wkk2. (A.54)Then for large N, by (1)–(4) and (A.30) we can use the following approximation,I(Y;R)'IF=12*ln det J(y)2e!!+y+H(Y), (A.55)whereJ(y) = diagN1jg01(y1)j2;;NK1g0K1(yK1)2, (A.56)g0k(yk) =@gk(yk)@yk,k= 1;;K1, (A.57)gk(yk) = 2q~f(yk;~k),k= 1;;K1. (A.58)It is easy to get thatIF=12K1Xk=1*ln Nkjg0k(yk)j22e!+y+H(Y)12K1Xk=1*ln jg0k(yk)j22e!+yK12lnK1N+H(Y), (A.59)15Published as a conference paper at ICLR 2017where the equality holds if and only ifk=1K1;k= 1;;K1, (A.60)which is consistent with Eq. (A.42).On the other hand, it follows from the Jensen’s inequality thatIF=*ln0@p(y)1det J(y)2e!1=21A+ylnZdet J(y)2e!1=2dy, (A.61)where the equality holds if and only if p(y)1detJ(y)1=2is a constant, which implies thatp(y) =detJ(y)1=2RdetJ(y)1=2dy=QK1k=1jg0k(yk)jRQK1k=1jg0k(yk)jdy. (A.62)From (A.61) and (A.62), maximizing ~IFyieldsp(yk) =jg0k(yk)jRjg0k(yk)jdyk,k= 1;;K1. (A.63)We assume that (A.63) holds, at least approximately. Hence we can let the peak of g0k(yk)be atyk=hykiyk= 0andy2kyk=2yk=k~ wkk2. Then combining (A.57), (A.61) and (A.63) we findthe optimal parameters ~kfor the nonlinear functions ~f(yk;~k),k= 1;;K1.A.3.3 T HEFINAL OBJECTIVE FUNCTIONIn the preceding sections we have obtained the initial optimal solutions by maximizing IX;YandI(Y;R). In this section, we will discuss how to find the final optimal V0and other parametersby maximizing I(X;R)from the initial optimal solutions.First, we havey=~WT~ x=a^ y, (A.64)whereais given in (A.43) and^ y= (^y1;;^yK1)T=CT^ x=CT x, (A.65)^ x=1=20UT0x, (A.66)C=VT02RK0K1, (A.67) x=U01=20UT0x=U0^ x, (A.68)C=U0C= [ c1;; cK1]. (A.69)It follows thatI(X;R) =I~X;R'~IG=12lndetG(^ x)2e^ x+H(~X), (A.70)G(^ x) =N^W^^WT+IK, (A.71)^W=1=2UTWA1=2=aqK11IKK0C=qK10IKK0C, (A.72)16Published as a conference paper at ICLR 2017where IKK0is aKK0diagonal matrix with value 1on the diagonal and^=2, (A.73)= diag ((^y1);;(^yK1)), (A.74)(^yk) =a1@gk(^yk)@^yk, (A.75)gk(^yk) = 2q~f(^yk;~k), (A.76)^yk=a1yk=cTk^ x,k= 1;;K1. (A.77)Then we havedet (G(^ x)) = detNK10C^CT+IK0. (A.78)For largeNandK0=N!0, we havedet (G(^ x))det (J(^ x)) = detNK10C^CT, (A.79)~IG~IF=QK2ln (2e)K02ln (") +H(~X), (A.80)Q=12DlndetC^CTE^ x, (A.81)"=K0N. (A.82)Hence we can state the optimization problem as:minimizeQ[C] =12DlndetC^CTE^ x, (A.83)subject to CCT=IK0. (A.84)The gradient from (A.83) is given by:dQ[C]dC=C^CT1C^+^ x!T^ x, (A.85)where C= [c1;;cK1],!= (!1;;!K1)T, and!k=(^yk)0(^yk)cTkC^CT1ck,k= 1;;K1. 
(A.86)In the following we will discuss how to get the optimal solution of Cfor two specific cases.A.4 A LGORITHMS FOR OPTIMIZATION OBJECTIVE FUNCTIONA.4.1 A LGORITHM 1:K0=K1NowCCT=CTC=IK1, then by Eq. (A.83) we haveQ1[C] =*K1Xk=1ln ((^yk))+^ x, (A.87)dQ1[C]dC=^ x!T^ x, (A.88)!k=0(^yk)(^yk),k= 1;;K1. (A.89)Under the orthogonality constraints (Eq. A.84), we can use the following update rule for learning C(Edelman et al., 1998; Amari, 1999):Ct+1=Ct+tdCtdt, (A.90)dCtdt=dQ1[Ct]dCt+CtdQ1[Ct]dCtTCt, (A.91)17Published as a conference paper at ICLR 2017where the learning rate parameter tchanges with the iteration count t,t= 1;;tmax. Here wecan use the empirical average to approximate the integral in (A.88) (see Eq. A.12). We can alsoapply stochastic gradient descent (SGD) method for online updating of Ct+1in (A.90).The orthogonality constraint (Eq. A.84) can accelerate the convergence rate. In practice, the orthog-onal constraint (A.84) for objective function (A.83) is not strictly necessary in this case. We cancompletely discard this constraint condition and considerminimizeQ2[C] =*K1Xk=1ln ((^yk))+^ x12lndetCTC, (A.92)where we assume rank ( C) =K1=K0. If we letdCdt=CCTdQ2[C]dC, (A.93)thenTrdQ2[C]dCdCTdt=TrCTdQ2[C]dCdQ2[C]dCTC0. (A.94)Therefore we can use an update rule similar to Eq. A.90 for learning C. In fact, the method can alsobe extended to the case K0>K 1by using the same objective function (A.92).The learning rate parameter t(see A.90) is updated adaptively, as follows. First, calculatet=vtt,t= 1;;tmax, (A.95)t=1K1K1Xk=1krCt(:;k)kkCt(:;k)k, (A.96)andCt+1by (A.90) and (A.91), then calculate the value Q1Ct+1. IfQ1Ct+1<Q 1[Ct], thenletvt+1 vt, continue for the next iteration; otherwise, let vt vt,t vt=tand recalculateCt+1andQ1Ct+1. Here 0< v1<1and0< < 1are set as constants. After getting Ct+1for each update, we employ a Gram–Schmidt orthonormalization process for matrix Ct+1, wherethe orthonormalization process can accelerate the convergence. However, we can discard the Gram–Schmidt orthonormalization process after iterative t0(>1) epochs for more accurate optimizationsolution C. In this case, the objective function is given by the Eq. (A.92). We can also furtheroptimize parameter bby gradient descent.WhenK0=K1, the objective function Q2[C]in Eq. (A.92) without constraint is the same as theobjective function of infomax ICA (IICA) (Bell & Sejnowski, 1995; 1997), and as a consequencewe should get the same optimal solution C. Hence, in this sense, the IICA may be regarded as aspecial case of our method. Our method has a wider range of applications and can handle moregeneric situations. Our model is derived by neural populations with a huge number of neurons and itis not restricted to additive noise model. Moreover, our method has a faster convergence rate duringtraining than IICA (see Section 3).A.4.2 A LGORITHM 2:K0K1In this case, it is computationally expensive to update Cby using the gradient of Q(see Eq. A.85),since it needs to compute the inverse matrix for every ^ x. Here we provide an alternative method forlearning the optimal C. First, we consider the following inequalities.18Published as a conference paper at ICLR 2017Proposition 2. The following inequations hold,12DlndetC^CTE^ x12lndetCD^E^ xCT, (A.97)lndetCCT^ xlndetChi^ xCT(A.98)12lndetChi2^ xCT(A.99)12lndetCD^E^ xCT, (A.100)lndetCCT12lndetC^CT, (A.101)where C2RK0K1,K0K1, andCCT=IK0.Proof. Functions lndetCD^E^ xCTandlndetChi^ xCTare concave functions aboutp(^ x)(see the proof of Proposition 5.2. 
in Huang & Zhang, 2017), which fact establishes inequalities(A.97) and (A.98).Next we will prove the inequality (A.101). By SVD, we haveC=UDVT, (A.102)where Uis aK0K0unitary orthogonal matrix, V= [ v1; v2;; vK1]is anK1K1unitaryorthogonal matrix, and Dis anK0K1rectangular diagonal matrix with K0positive real numberson the diagonal. By the matrix Hadamard’s inequality and Cauchy–Schwarz inequality we havedetCCTCCTdetC^CT1= detDVTCTCVDTDDT1= detVT1CTCV1= detCV12K0Yk=1CV12k;kK0Yk=1CCT2k;kVT1V12k;k= 1, (A.103)where V1= [ v1; v2;; vK0]2RK1K0. The last equality holds because of CCT=IK0andVT1V1=IK0. This establishes inequality (A.101) and the equality holds if and only if K0=K1orCV1=IK0.Similarly, we get inequality (A.99):lndetChi^ xCT12lndetChi2^ xCT. (A.104)By Jensen’s inequality, we haveh(^yk)i2^ xD(^yk)2E^ x,8k= 1;;K1. (A.105)Then it follows from (A.105) that inequality (A.100) holds:12lndetChi2^ xCT12lndetCD^E^ xCT. (A.106)19Published as a conference paper at ICLR 2017This completes the proof of Proposition 2 . ByProposition 2, ifK0=K1then we get12Dlndet^E^ x12lndetD^E^ x, (A.107)hln (det ( ))i^ xln (det (hi^ x)) (A.108)=12lndethi2^ x(A.109)12lndetD^E^ x, (A.110)ln (det ( )) =12lndet^. (A.111)On the other hand, it follows from (A.81) and Proposition 2 thatlndetCCT^ xQ12lndetCD^E^ xCT, (A.112)lndetCCT^ x^Q12lndetCD^E^ xCT. (A.113)Hence we can see that ^Qis close toQ(see A.81). Moreover, it follows from the Cauchy–Schwarzinequality thatD()k;kE^ x=h(^yk)i^ykZ(^yk)2d^ykZp(^yk)2d^yk1=2, (A.114)wherek= 1;;K1, the equality holds if and only if the following holds:p(^yk) =(^yk)R(^yk)d^yk,k= 1;;K1, (A.115)which is the similar to Eq. (A.63).SinceI(X;R) =I(Y;R)(seeProposition 1), by maximizing I(X;R)we hope the equality ininequality (A.61) and equality (A.63) hold, at least approximatively. On the other hand, letCopt= arg minCQ[C] = arg maxCDlndet(C^CT)E^ x, (A.116)^Copt= arg minC^Q[C] = arg maxClndetChi2^ xCT, (A.117)Coptand^Coptmake (A.63) and (A.115) to hold true, which implies that they are the same optimalsolution: Copt=^Copt.Therefore, we can use the following objective function ^Q[C]as a substitute for Q[C]and write theoptimization problem as:minimize ^Q[C] =12lndetChi2^ xCT, (A.118)subject to CCT=IK0. (A.119)The update rule (A.90) may also apply here and a modified algorithm similar to Algorithm 1 maybe used for parameter learning.A.5 S UPPLEMENTARY EXPERIMENTSA.5.1 Q UANTITATIVE METHODS FOR COMPARISONTo quantify the efficiency of learning representations by the above algorithms, we calculate the co-efficient entropy (CFE) for estimating coding cost as follows (Lewicki & Olshausen, 1999; Lewicki& Sejnowski, 2000):yk= wTk x,k= 1;;K1, (A.120)=K1PK1k=1k wkk, (A.121)20Published as a conference paper at ICLR 2017where xis defined by Eq. (A.68), and wkis the corresponding optimal filter. To estimate theprobability density of coefficients qk(yk)(k= 1;;K1) from theMtraining samples, we applythe kernel density estimation for qk(yk)and use a normal kernel with an adaptive optimal windowwidth. Then we define the CFE hash=1K1K1Xk=1Hk(Yk), (A.122)Hk(Yk) =Pnqk(n) log2qk(n), (A.123)whereqk(yk)is quantized as discrete qk(n)andis the step size.Methods such as IICA and SRBM as well as our methods have feedforward structures in whichinformation is transferred directly through a nonlinear function, e.g., the sigmoid function. 
Wecan use the amount of transmitted information to measure the results learned by these methods.Consider a neural population with Nneurons, which is a stochastic system with nonlinear transferfunctions. We chose a sigmoidal transfer function and Gaussian noise with standard deviation set to1as the system noise. In this case, from (1), (A.8) and (A.11), we see that the approximate MI IGisequivalent to the case of the Poisson neuron model. It follows from (A.70)–(A.82) thatI(X;R) =I~X;R=H(~X)H~XjR'~IG=H(~X)h1, (A.124)H~XjR'h1=12lndet12eNK10C^CT+IK0^ x, (A.125)where we set N= 106. A good representation should make the MI I(X;R)as big as possible.Equivalently, for the same inputs, a good representation should make the conditional entropy (CDE)H~XjR(orh1) as small as possible.(a) (b) (c)(d) (e) (f)Figure 4: Comparison of basis vectors obtained by our method and other methods. Panel ( a)–(e)correspond to panel ( a)–(e) in Figure 2, where the basis vectors are given by (A.130). The basisvectors in panel ( f) are learned by MBDL and given by (A.127).21Published as a conference paper at ICLR 2017A.5.2 C OMPARISON OF BASIS VECTORSWe compared our algorithm with an up-to-date sparse coding algorithm, the mini-batch dictionarylearning (MBDL) as given in (Mairal et al., 2009; 2010) and integrated in Python library, i.e. scikit-learn. The input data was the same as the above, i.e. 105nature image patches preprocessed by theZCA whitening filters.We denotes the optimal dictionary learned by MBDL as B2RKK1for which each columnrepresents a basis vector. Now we havexU1=2UTBy=~By, (A.126)~B=U1=2UTB, (A.127)where y= (y1;;yK1)Tis the coefficient vector.Similarly, we can obtain a dictionary from the filter matrix C. Suppose rank ( C) =K0K1, thenit follows from (A.64) that^ x=aCCT1Cy. (A.128)By (A.66) and (A.128), we getxBy=aBCT1=20UT0x, (A.129)B=a1U01=20CCT1C= [b1;;bK1], (A.130)where y=WTx=aCT1=20UT0x, the vectors b1;;bK1can be regarded as the basis vectorsand the strict equality holds when K0=K1=K. Recall that X= [x1,,xM] =US~VT(see Eq. A.49) and Y= [y1,,yM] =WTX=apM1CT~VT0, then we get X=BY =pM1U01=20~VT0X. Hence, Eq. (A.129) holds.The basis vectors shown in Figure 4(a)–4(e) correspond to filters in Figure 2(a)–2(e). And Fig-ure 4(f) illustrates the optimal dictionary ~Blearned by MBDL, where we set the regularization pa-rameter as= 1:2=pK, the batch size as 50and the total number of iterations to perform as 20000 ,which took about 3hours for training. From Figure 4 we see that these basis vectors obtained by theabove algorithms have local Gabor-like shapes except for those by SRBM. If rank( B) =K=K1,then the matrix BTcan be regarded as a filter matrix like matrix C(see Eq. A.69). However,from the column vector of matrix BTwe cannot find any local Gabor-like filter that resembles thefilters shown in Figure 2. Our algorithm has less computational cost and a much faster convergencerate than the sparse coding algorithm. Moreover, the sparse coding method involves a dynamicgenerative model that requires relaxation and is therefore unsuitable for fast inference, whereas thefeedforward framework of our model is easy for inference because it only requires evaluating thenonlinear tuning functions.A.5.3 L EARNING OVERCOMPLETE BASESWe have trained our model on the Olshausen’s nature image patches with a highly overcompletesetup by optimizing the objective (A.118) by Alg.2 and got Gabor-like filters. 
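Eq. (A.130) maps the learned filter matrix C and the leading PCA factors U0, Lambda0 back into basis vectors B. A minimal NumPy sketch of that mapping follows; the shapes are those of Eqs. (A.44)-(A.45) and (A.65)-(A.69), the gain a is the scalar of Eq. (A.43), and the pseudo-inverse fallback is my own addition.

```python
import numpy as np

def bases_from_filters(C, U0, lam0, a):
    """Basis vectors B = a^(-1) U0 Lambda0^(1/2) (C C^T)^(-1) C  (Eq. A.130).

    C    : (K0, K1) filter matrix in the whitened space, rank K0 <= K1.
    U0   : (K, K0)  leading eigenvectors of the input covariance (Eq. A.45).
    lam0 : (K0,)    corresponding eigenvalues sigma_k^2 (diagonal of Lambda0, Eq. A.44).
    a    : scalar gain of Eq. (A.43).
    """
    CCt_inv = np.linalg.pinv(C @ C.T)   # (C C^T)^(-1); pinv is a guard against ill-conditioning
    return (1.0 / a) * U0 @ np.diag(np.sqrt(lam0)) @ CCt_inv @ C   # columns b_1, ..., b_K1
```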
The results of 400 typical filters chosen from the 1024 output filters are displayed in Figure 5(a), and the corresponding bases (see Eq. A.130) are shown in Figure 5(b). Here the parameters are K1 = 1024, tmax = 100, v1 = 0.4, tau = 0.8, and beta = 0.98 (see Eq. A.52), from which we got rank(B) = K0 = 82. Compared to the ICA-like results in Figures 2(a)-2(c), the average size of the Gabor-like filters in Figure 5(a) is bigger, indicating that the small noise-like local structures in the images have been filtered out. We have also trained our model on the 60,000 images of handwritten digits from the MNIST dataset (LeCun et al., 1998); the resulting 400 typical optimal filters and bases are shown in Figure 5(c) and Figure 5(d), respectively. All parameters were the same as for Figures 5(a) and 5(b): K1 = 1024, tmax = 100, v1 = 0.4, tau = 0.8 and beta = 0.98, from which we got rank(B) = K0 = 183. These figures show that the salient features of the input images are reflected in the filters and bases. Similar overcomplete filters and bases could also be obtained by SRBM and MBDL; however, those results depended sensitively on the choice of parameters and the training took a long time.

[Figure 5: Filters and bases obtained from Olshausen's image dataset and the MNIST dataset by Algorithm 2. (a) and (b): 400 typical filters and the corresponding bases obtained from Olshausen's image dataset, where K0 = 82 and K1 = 1024. (c) and (d): 400 typical filters and the corresponding bases obtained from the MNIST dataset, where K0 = 183 and K1 = 1024.]

Figure 6 shows the CFE as a function of training time for Alg.2, where Figure 6(a) corresponds to Figures 5(a)-5(b) for learning natural image patches and Figure 6(b) corresponds to Figures 5(c)-5(d) for learning the MNIST dataset. We set tmax = 100 and tau = 0.8 for all experiments and varied the parameter v1 for each experiment, with v1 = 0.2, 0.4, 0.6 or 0.8. These results indicate a fast convergence rate for training on different datasets. Generally, the convergence is insensitive to the change of parameter v1. We have also performed additional tests on other image datasets and obtained similar results, confirming the speed and robustness of our learning method. Compared with other methods, e.g., IICA, FICA, MBDL, SRBM or sparse autoencoders, our method appeared to be more efficient and robust for unsupervised learning of representations. We also found that the complete and overcomplete filters and bases learned by our method had local Gabor-like shapes, while the results by SRBM or MBDL did not have this property.

[Figure 6: coefficient entropy (bits) versus training time (seconds, log scale) for Alg.2, with v1 = 0.2, 0.4, 0.6 or 0.8. In all experiments the parameters were set to tmax = 100, t0 = 50 and tau = 0.8. (a): corresponding to Figure 5(a) or Figure 5(b). (b): corresponding to Figure 5(c) or Figure 5(d).]

A.5.4 IMAGE DENOISING
Similar to the sparse coding method applied to image denoising (Elad & Aharon, 2006), our method (see Eq. A.130) can also be applied to image denoising, as shown by an example in Figure 7.
The filters or bases were learned by using 7x7 image patches sampled from the left half of the image, and subsequently used to reconstruct the right half of the image which was distorted by Gaussian noise. A common practice for evaluating the results of image denoising is by looking at the difference between the reconstruction and the original image. If the reconstruction is perfect the difference should look like Gaussian noise. In Figures 7(c) and 7(d) a dictionary (100 bases) was learned by MBDL and orthogonal matching pursuit was used to estimate the sparse solution. [Footnote 1: Python source code is available at http://scikit-learn.org/stable/_downloads/plot_image_denoising.py] For our method (shown in Figure 7(b)), we first get the optimal filter parameter W, a low-rank matrix (K0 < K), then from the distorted image patches x_m (m = 1, ..., M) we get filter outputs y_m = W^T x_m and the reconstruction x_bar_m = B y_m (parameters: beta = 0.975 and K0 = K1 = 14). As can be seen from Figure 7, our method worked better than dictionary learning, although we only used 14 bases compared with 100 bases used by dictionary learning. Our method is also more efficient. We can get better optimal bases B by a generative model using our infomax approach (details not shown).

[Figure 7: Image denoising. (a): the right half of the original image is distorted by Gaussian noise and the norm of the difference between the distorted image and the original image is 23.48. (b): image denoising by our method (Algorithm 1), with 14 bases used. (c) and (d): image denoising using dictionary learning, with 100 bases used.]
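A rough sketch of the denoising procedure just described (filter outputs y_m = W^T x_m, reconstructions x_bar_m = B y_m): the unit stride, the removal of each patch's mean, and the averaging of overlapping reconstructions are my assumptions, since the text does not spell those details out.

```python
import numpy as np

def denoise_region(noisy, W, B, w=7):
    """Reconstruct a noisy image region from w x w patches with filters W and bases B.

    noisy : 2-D array, the distorted image region.
    W     : (w*w, K1) filter matrix, so that y = W.T @ x for a flattened patch x.
    B     : (w*w, K1) basis matrix, so that x_hat = B @ y.
    """
    H, Wd = noisy.shape
    recon = np.zeros_like(noisy, dtype=float)
    counts = np.zeros_like(noisy, dtype=float)
    for i in range(H - w + 1):
        for j in range(Wd - w + 1):
            patch = noisy[i:i + w, j:j + w].reshape(-1)
            mu = patch.mean()                      # remove the DC component (assumption)
            y = W.T @ (patch - mu)                 # filter outputs y_m = W^T x_m
            x_hat = B @ y + mu                     # reconstruction x_bar_m = B y_m
            recon[i:i + w, j:j + w] += x_hat.reshape(w, w)
            counts[i:i + w, j:j + w] += 1.0
    return recon / counts                          # average overlapping reconstructions
```

With K0 = K1 = 14 as in Figure 7(b), W and B have rank 14, so each 7x7 patch is effectively projected onto a 14-dimensional subspace before being re-synthesized.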
HypYOCbEe
SkYbF1slg
ICLR.cc/2017/conference/-/paper549/official/review
{"title": "Review of \"information theoretic framework\"", "rating": "7: Good paper, accept", "review": "This is an 18 page paper plus appendix which presents a mathematical derivation for infomax for an actual neural population with noise. The original Bell & Sejnowski infomax framework only considered the no noise case. Results are shown for natural image patches and the mnist dataset, which qualitatively resemble results obtained with other methods.\n\nThis seems like an interesting and potentially more general approach to unsupervised learning. However the paper is quite long and it was difficult for me to follow all the twists and turns. For example the introduction of the hierarchical model was confusing and it took several iterations to understand where this was going. 'Hierarchical' is probably not the right terminology here because it's not like a deep net hierarchy, it's just decomposing the tuning curve function into different parts. I would recommend that the authors try to condense the paper so that the central message and important steps are conveyed in short order, and then put the more complete mathematical development into a supplementary document.\n\nAlso, the authors should look at the work of Karklin & Simoncelli 2011 which is highly related. They also use an infomax framework for a noisy neural population to derive on and off cells in the retina, and they show the conditions under which orientation selectivity emerges.\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
An Information-Theoretic Framework for Fast and Robust Unsupervised Learning via Neural Population Infomax
["Wentao Huang", "Kechen Zhang"]
A framework is presented for unsupervised learning of representations based on infomax principle for large-scale neural populations. We use an asymptotic approximation to the Shannon's mutual information for a large neural population to demonstrate that a good initial approximation to the global information-theoretic optimum can be obtained by a hierarchical infomax method. Starting from the initial solution, an efficient algorithm based on gradient descent of the final objective function is proposed to learn representations from the input datasets, and the method works for complete, overcomplete, and undercomplete bases. As confirmed by numerical experiments, our method is robust and highly efficient for extracting salient features from input datasets. Compared with the main existing methods, our algorithm has a distinct advantage in both the training speed and the robustness of unsupervised representation learning. Furthermore, the proposed method is easily extended to the supervised or unsupervised model for training deep structure networks.
["Unsupervised Learning", "Theory", "Deep learning"]
https://openreview.net/forum?id=SkYbF1slg
https://openreview.net/pdf?id=SkYbF1slg
https://openreview.net/forum?id=SkYbF1slg&noteId=HypYOCbEe
Published as a conference paper at ICLR 2017ANINFORMATION -THEORETIC FRAMEWORK FORFAST AND ROBUST UNSUPERVISED LEARNING VIANEURAL POPULATION INFOMAXWentao Huang & Kechen ZhangDepartment of Biomedical EngineeringJohns Hopkins University School of MedicineBaltimore, MD 21205, USAfwhuang21,kzhang4 g@jhmi.eduABSTRACTA framework is presented for unsupervised learning of representations based oninfomax principle for large-scale neural populations. We use an asymptotic ap-proximation to the Shannon’s mutual information for a large neural population todemonstrate that a good initial approximation to the global information-theoreticoptimum can be obtained by a hierarchical infomax method. Starting from theinitial solution, an efficient algorithm based on gradient descent of the final ob-jective function is proposed to learn representations from the input datasets, andthe method works for complete, overcomplete, and undercomplete bases. As con-firmed by numerical experiments, our method is robust and highly efficient forextracting salient features from input datasets. Compared with the main existingmethods, our algorithm has a distinct advantage in both the training speed and therobustness of unsupervised representation learning. Furthermore, the proposedmethod is easily extended to the supervised or unsupervised model for trainingdeep structure networks.1 I NTRODUCTIONHow to discover the unknown structures in data is a key task for machine learning. Learning goodrepresentations from observed data is important because a clearer description may help reveal theunderlying structures. Representation learning has drawn considerable attention in recent years(Bengio et al., 2013). One category of algorithms for unsupervised learning of representations isbased on probabilistic models (Lewicki & Sejnowski, 2000; Hinton & Salakhutdinov, 2006; Leeet al., 2008), such as maximum likelihood (ML) estimation, maximum a posteriori (MAP) probabil-ity estimation, and related methods. Another category of algorithms is based on reconstruction erroror generative criterion (Olshausen & Field, 1996; Aharon et al., 2006; Vincent et al., 2010; Mairalet al., 2010; Goodfellow et al., 2014), and the objective functions usually involve squared errors withadditional constraints. Sometimes the reconstruction error or generative criterion may also have aprobabilistic interpretation (Olshausen & Field, 1997; Vincent et al., 2010).Shannon’s information theory is a powerful tool for description of stochastic systems and couldbe utilized to provide a characterization for good representations (Vincent et al., 2010). However,computational difficulties associated with Shannon’s mutual information (MI) (Shannon, 1948) havehindered its wider applications. The Monte Carlo (MC) sampling (Yarrow et al., 2012) is a conver-gent method for estimating MI with arbitrary accuracy, but its computational inefficiency makes itunsuitable for difficult optimization problems especially in the cases of high-dimensional input stim-uli and large population networks. Bell and Sejnowski (Bell & Sejnowski, 1995; 1997) have directlyapplied the infomax approach (Linsker, 1988) to independent component analysis (ICA) of data withindependent non-Gaussian components assuming additive noise, but their method requires that thenumber of outputs be equal to the number of inputs. 
The extensions of ICA to overcomplete orundercomplete bases incur increased algorithm complexity and difficulty in learning of parameters(Lewicki & Sejnowski, 2000; Kreutz-Delgado et al., 2003; Karklin & Simoncelli, 2011).1Published as a conference paper at ICLR 2017Since Shannon MI is closely related to ML and MAP (Huang & Zhang, 2017), the algorithms ofrepresentation learning based on probabilistic models should be amenable to information-theoretictreatment. Representation learning based on reconstruction error could be accommodated also byinformation theory, because the inverse of Fisher information (FI) is the Cram ́er-Rao lower boundon the mean square decoding error of any unbiased decoder (Rao, 1945). Hence minimizing thereconstruction error potentially maximizes a lower bound on the MI (Vincent et al., 2010).Related problems arise also in neuroscience. It has long been suggested that the real nervous sys-tems might approach an information-theoretic optimum for neural coding and computation (Barlow,1961; Atick, 1992; Borst & Theunissen, 1999). However, in the cerebral cortex, the number of neu-rons is huge, with about 105neurons under a square millimeter of cortical surface (Carlo & Stevens,2013). It has often been computationally intractable to precisely characterize information codingand processing in large neural populations.To address all these issues, we present a framework for unsupervised learning of representationsin a large-scale nonlinear feedforward model based on infomax principle with realistic biologicalconstraints such as neuron models with Poisson spikes. First we adopt an objective function basedon an asymptotic formula in the large population limit for the MI between the stimuli and the neuralpopulation responses (Huang & Zhang, 2017). Since the objective function is usually nonconvex,choosing a good initial value is very important for its optimization. Starting from an initial value, weuse a hierarchical infomax approach to quickly find a tentative global optimal solution for each layerby analytic methods. Finally, a fast convergence learning rule is used for optimizing the final objec-tive function based on the tentative optimal solution. Our algorithm is robust and can learn complete,overcomplete or undercomplete basis vectors quickly from different datasets. Experimental resultsshowed that the convergence rate of our method was significantly faster than other existing methods,often by an order of magnitude. More importantly, the number of output units processed by ourmethod can be very large, much larger than the number of inputs. As far as we know, no existingmodel can easily deal with this situation.2 M ETHODS2.1 A PPROXIMATION OF MUTUAL INFORMATION FOR NEURAL POPULATIONSSuppose the input xis aK-dimensional vector, x= (x1;;xK)T, the outputs of Nneurons aredenoted by a vector, r= (r1;;rN)T, where we assume Nis large, generally NK. Wedenote random variables by upper case letters, e.g., random variables XandR, in contrast to theirvector values xandr. The MI between XandRis defined by I(X;R) =Dlnp(xjr)p(x)Er;x, wherehir;xdenotes the expectation with respect to the probability density function (PDF) p(r;x).Our goal is to maxmize MI I(X;R)by finding the optimal PDF p(rjx)under some constraintconditions, assuming that p(rjx)is characterized by a noise model and activation functions f(x;n)with parameters nfor then-th neuron (n= 1;;N). In other words, we optimize p(rjx)bysolving for the optimal parameters n. 
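One concrete reading of this setup, using the Poisson spiking model the paper adopts later: given x, the neurons respond independently with counts r_n ~ Poisson(f(x; theta_n)). The sigmoidal placeholder tuning curve and all parameter values below are mine, chosen only to make the sketch runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

def tuning(x, w, b):
    """Placeholder tuning function f(x; theta) with theta = (w, b); any smooth f >= 0 would do."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def sample_population(x, thetas):
    """Draw one population response r = (r_1, ..., r_N) with r_n ~ Poisson(f(x; theta_n))."""
    rates = np.array([tuning(x, w, b) for (w, b) in thetas])
    return rng.poisson(rates)

# Example: K = 4 dimensional stimulus, N = 10 neurons (made-up sizes).
K, N = 4, 10
x = rng.standard_normal(K)
thetas = [(rng.standard_normal(K), 0.0) for _ in range(N)]
r = sample_population(x, thetas)
```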
Unfortunately, it is intractable in most cases to solve for theoptimal parameters that maximizes I(X;R). However, if p(x)andp(rjx)are twice continuouslydifferentiable for almost every x2RK, then for large Nwe can use an asymptotic formula toapproximate the true value of I(X;R)with high accuracy (Huang & Zhang, 2017):I(X;R)'IG=12lndetG(x)2ex+H(X), (1)where det ()denotes the matrix determinant and H(X) =hlnp(x)ixis the stimulus entropy,G(x) =J(x) +P(x), (2)J(x) =@2lnp(rjx)@x@xTrjx, (3)P(x) =@2lnp(x)@x@xT. (4)Assuming independent noises in neuronal responses, we have p(rjx) =QNn=1p(rnjx;n),and the Fisher information matrix becomes J(x)NPK1k=1kS(x;k), where S(x;k) =2Published as a conference paper at ICLR 2017D@lnp(rjx;k)@x@lnp(rjx;k)@xTErjxandk>0(k= 1;;K1) is the population density of param-eterk, withPK1k=1k= 1, and 1K1N(see Appendix A.1 for details). Since the cerebralcortex usually forms functional column structures and each column is composed of neurons with thesame properties (Hubel & Wiesel, 1962), the positive integer K1can be regarded as the number ofdistinct classes in the neural population.Therefore, given the activation function f(x;k), our goal becomes to find the optimal popula-tion distribution density kof parameter vector kso that the MI between the stimulus xand theresponse ris maximized. By Eq. (1), our optimization problem can be stated as follows:minimizeQG[fkg] =12hln (det ( G(x)))ix, (5)subject toK1Xk=1k= 1,k>0,8k= 1;;K1. (6)SinceQG[fkg]is a convex function of fkg(Huang & Zhang, 2017), we can readily find theoptimal solution for small Kby efficient numerical methods. For large K, however, finding anoptimal solution by numerical methods becomes intractable. In the following we will propose analternative approach to this problem. Instead of directly solving for the density distribution fkg, weoptimize the parameters fkgandfkgsimultaneously under a hierarchical infomax framework.2.2 H IERARCHICAL INFOMAXFor clarity, we consider neuron model with Poisson spikes although our method is easily applicableto other noise models. The activation function f(x;n)is generally a nonlinear function, such assigmoid and rectified linear unit (ReLU) (Nair & Hinton, 2010). We assume that the nonlinearfunction for the n-th neuron has the following form: f(x;n) =~f(yn;~n), whereyn=wTnx. (7)withwnbeing aK-dimensional weights vector, ~f(yn;~n)is a nonlinear function, n= (wTn;~Tn)Tand~nare the parameter vectors ( n= 1;;N).In general, it is very difficult to find the optimal parameters, n,n= 1;;N, for the followingreasons. First, the number of output neurons Nis very large, usually NK. Second, the activationfunctionf(x;n)is a nonlinear function, which usually leads to a nonconvex optimization problem.For nonconvex optimization problems, the selection of initial values often has a great influence onthe final optimization results. Our approach meets these challenges by making better use of the largenumber of neurons and by finding good initial values by a hierarchical infomax method.We divide the nonlinear transformation into two stages, mapping first from xtoyn(n= 1;;N),and then from ynto~f(yn;~n), whereyncan be regarded as the membrane potential of the n-thneuron, and ~f(yn;~n)as its firing rate. As with the real neurons, we assume that the membranepotential is corrupted by noise:Yn=Yn+Zn, (8)whereZn N0,2is a normal distribution with mean 0and variance 2. Then the meanmembrane potential of the k-th class subpopulation with Nk=Nkneurons is given byYk=1NkNkXn=1Ykn=Yk+Zk,k= 1;;K1, (9)ZkN(0; N1k2). 
(10)Define vectors y= (y1;;yN)T, y= (y1;;yK1)Tandy= (y1;;yK1)T, whereyk=wTkx(k= 1;;K1). Notice that yn(n= 1;;N) is also divided into K1classes, the sameas forrn. If we assume f(x;k) = ~f(yk;~k), i.e. assuming an additive Gaussian noise for yn(see Eq. 9), then the random variables X,Y,Y,YandRform a Markov chain, denoted byX!Y!Y!Y!R(see Figure 1), and we have the following proposition (see AppendixA.2).3Published as a conference paper at ICLR 2017X Y R Y YW X Y + Z( T1/N k f( )Yxxxy-y-y-ym1(ymNk(ymNkK1rmNkymi(rN1 yN1(yN(rN yNyni(yn1(yn1yniymiym1yN1ri yiy1yi(y1(rnirn1rmirm1r11kKk1Figure 1: A neural network interpretaton for random variables X,Y,Y,Y,R.Proposition 1. With the random variables X,Y,Y,Y,Rand Markov chain X!Y!Y!Y!R, the following equations hold,I(X;R) =I(Y;R)I(Y;R)I(Y;R), (11)I(X;R)I(X;Y) =I(X;Y)I(X;Y), (12)and for large Nk(k= 1;;K1),I(Y;R)'I(Y;R)'I(Y;R) =I(X;R), (13)I(X;Y)'I(X;Y) =I(X;Y). (14)A major advantage of incorporating membrane noise is that it facilitates finding the optimal solutionby using the infomax principle. Moreover, the optimal solution obtained this way is more robust;that is, it discourages overfitting and has a strong ability to resist distortion. With vanishing noise2!0, we have Yk!Yk,~f(yk;~k)'~f(yk;~k) =f(x;k), so that Eqs. (13) and (14) hold asin the case of large Nk.To optimize MI I(Y;R), the probability distribution of random variable Y,p(y), needs to be de-termined, i.e. maximizing I(Y;R)aboutp(y)under some constraints should yield an optimaldistribution: p(y) = arg max p(y)I(Y;R). LetC= maxp(y)I(Y;R)be the channel capacity ofneural population coding, and we always have I(X;R)C (Huang & Zhang, 2017). To find asuitable linear transformation from XtoYthat is compatible with this distribution p(y), a reason-able choice is to maximize I(X;Y) (I(X;Y)), where Yis a noise-corrupted version of Y. Thisimplies minimum information loss in the first transformation step. However, there may exist manytransformations from XtoYthat maximize I(X;Y)(see Appendix A.3.1). Ideally, if we can finda transformation that maximizes both I(X;Y)andI(Y;R)simultaneously, then I(X;R)reachesits maximum value: I(X;R) = maxp(y)I(Y;R) =C.From the discussion above we see that maximizing I(X;R)can be divided into two steps,namely, maximizing I(X;Y)and maximizing I(Y;R). The optimal solutions of maxI(X;Y)andmaxI(Y;R)will provide a good initial approximation that tend to be very close to the optimalsolution of maxI(X;R).Similarly, we can extend this method to multilayer neural population networks. For example, a two-layer network with outputs R(1)andR(2)form a Markov chain, X!~R(1)!R(1)!R(1)!4Published as a conference paper at ICLR 2017R(2), where random variable ~R(1)is similar to Y, random variable R(1)is similar to Y, and R(1)is similar to Yin the above. Then we can show that the optimal solution of maxI(X;R(2))canbe approximated by the solutions of maxI(X;R(1))andmaxI(~R(1);R(2)), withI(~R(1);R(2))'I(R(1);R(2)).More generally, consider a highly nonlinear feedforward neural network that maps the input xtooutput z, with z=F(x;) =hLh1(x), wherehl(l= 1;;L) is a linear or nonlinearfunction (Montufar et al., 2014). We aim to find the optimal parameter by maximizing I(X;Z). 
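Equations (8)-(10) above say that averaging the N_k noisy membrane potentials of the k-th subpopulation keeps the mean at y_k while shrinking the noise variance to sigma^2 / N_k. A tiny simulation (all numbers are placeholders) confirms the scaling:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, Nk, trials = 0.5, 1000, 20000
yk = 0.3                                           # noiseless membrane potential y_k = w_k^T x

# Eq. (8): each of the N_k copies is corrupted by independent N(0, sigma^2) noise.
noisy = yk + sigma * rng.standard_normal((trials, Nk))
ybar = noisy.mean(axis=1)                          # Eq. (9): subpopulation average

print(ybar.mean())                                 # close to y_k
print(ybar.var(), sigma**2 / Nk)                   # Eq. (10): variance ~ sigma^2 / N_k
```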
Itis usually difficult to solve the optimization problem when there are many local extrema for F(x;).However, if each function hlis easy to optimize, then we can use the hierarchical infomax methoddescribed above to get a good initial approximation to its global optimization solution, and go fromthere to find the final optimal solution. This information-theoretic consideration from the neuralpopulation coding point of view may help explain why deep structure networks with unsupervisedpre-training have a powerful ability for learning representations.2.3 T HEOBJECTIVE FUNCTIONThe optimization processes for maximizing I(X;Y)and maximizing I(Y;R)are discussed in detailin Appendix A.3. First, by maximizing I(X;Y)(see Appendix A.3.1 for details), we can get theoptimal weight parameter wk(k= 1;;K1, see Eq. 7) and its population density k(see Eq. 6)which satisfyW= [w1;;wK1] =aU01=20C, (15)1==K1=K11, (16)wherea=qK1K10,C= [c1;;cK1]2RK0K1,CCT=IK0,IK0is aK0K0identitymatrix with integer K02[1;K], the diagonal matrix 02RK0K0and matrix U02RKK0aregiven in (A.44) and (A.45), with K0given by Eq. (A.52). Matrices 0andU0can be obtainedbyandUwithUT0U0=IK0andU00UT0UUTxxTx(see Eq. A.23). Theoptimal weight parameter wk(15) means that the input variable xmust first undergo a whitening-like transformation ^ x=1=20UT0x, and then goes through the transformation y=aCT^ x, withmatrix Cto be optimized below. Note that weight matrix Wsatisfies rank(W) = min(K0;K1),which is a low rank matrix, and its low dimensionality helps reduce overfitting during training (seeAppendix A.3.1).By maximizing I(Y;R)(see Appendix A.3.2), we further solve the the optimal parameters ~kforthe nonlinear functions ~f(yk;~k),k= 1;;K1. Finally, the objective function for our optimiza-tion problem (Eqs. 5 and 6) turns into (see Appendix A.3.3 for details):minimizeQ[C] =12DlndetC^CTE^ x, (17)subject to CCT=IK0, (18)where ^= diag(^y1)2;;(^yK1)2,(^yk) =a1j@gk(^yk)=@^ykj(k= 1;;K1),gk(^yk) =2q~f(^yk;~k),^yk=a1yk=cTk^ x, and^ x=1=20UT0x. We apply the gradient descent method tooptimize the objective function, with the gradient of Q[C]given by:dQ[C]dC=C^CT1C^+^ x!T^ x, (19)where!= (!1;;!K1)T,!k=(^yk)0(^yk)cTkC^CT1ck,k= 1;;K1.WhenK0=K1(orK0> K 1), the objective function Q[C]can be reduced to a simpler form,and its gradient is also easy to compute (see Appendix A.4.1). However, when K0< K 1, it iscomputationally expensive to update Cby applying the gradient of Q[C]directly, since it requiresmatrix inversion for every ^ x. We use another objective function ^Q[C](see Eq. A.118) which is anapproximation to Q[C], but its gradient is easier to compute (see Appendix A.4.2). The function5Published as a conference paper at ICLR 2017^Q[C]is the approximation of Q[C], ideally they have the same optimal solution for the parameterC.Usually, for optimizing the objective in Eq. 17, the orthogonality constraint (Eq. 18) is unnecessary.However, this orthogonality constraint can accelerate the convergence rate if we employ it for theinitial iteration to update C(see Appendix A.5).3 E XPERIMENTAL RESULTSWe have applied our methods to the natural images from Olshausen’s image dataset (Olshausen &Field, 1996) and the images of handwritten digits from MNIST dataset (LeCun et al., 1998) usingMatlab 2016a on a computer with 12 Intel CPU cores (2.4 GHz). The gray level of each raw imagewas normalized to the range of 0to1.Mimage patches with size ww=Kfor training wererandomly sampled from the images. 
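The data preparation just described (gray levels normalized to [0, 1], M random patches of size w x w = K) might look as follows; the uniform sampling of patch positions and the per-image normalization are my assumptions.

```python
import numpy as np

def sample_patches(images, M, w, seed=0):
    """Randomly sample M patches of size w x w (K = w*w pixels) from a list of 2-D images."""
    rng = np.random.default_rng(seed)
    patches = np.empty((w * w, M))
    for m in range(M):
        img = images[rng.integers(len(images))].astype(float)
        img = (img - img.min()) / (img.max() - img.min() + 1e-12)   # normalize gray level to [0, 1]
        i = rng.integers(img.shape[0] - w + 1)
        j = rng.integers(img.shape[1] - w + 1)
        patches[:, m] = img[i:i + w, j:j + w].reshape(-1)           # one column per training sample
    return patches
```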
We used the Poisson neuron model with a modified sigmoidaltuning function ~f(y;~) =14(1+exp(yb))2, withg(y) = 2q~f(y;~) =11+exp(yb), where~= (;b)T. We obtained the initial values (see Appendix A.3.2): b0= 0and01:81qK1K10.For our experiments, we set = 0:50for iteration epoch t= 1;;t0and=0fort=t0+ 1;;tmax, wheret0= 50 .Firstly, we tested the case of K=K0=K1= 144 and randomly sampled M= 105image patcheswith size 1212from the Olshausen’s natural images, assuming that N= 106neurons were dividedintoK1= 144 classes and= 1(see Eq. A.52 in Appendix). The input patches were preprocessedby the ZCA whitening filters (see Eq. A.68). To test our algorithms, we chose the batch size to beequal to the number of training samples M, although we could also choose a smaller batch size. Weupdated the matrix Cfrom a random start, and set parameters tmax= 300 ,v1= 0:4, and= 0:8for all experiments.In this case, the optimal solution Clooked similar to the optimal solution of IICA (Bell & Sejnowski,1997). We also compared with the fast ICA algorithm (FICA) (Hyv ̈arinen, 1999), which is fasterthan IICA. We also tested the restricted Boltzmann machine (RBM) (Hinton et al., 2006) for aunsupervised learning of representations, and found that it could not easily learn Gabor-like filtersfrom Olshausen’s image dataset as trained by contrastive divergence. However, an improved methodby adding a sparsity constraint on the output units, e.g., sparse RBM (SRBM) (Lee et al., 2008) orsparse autoencoder (Hinton, 2010), could attain Gabor-like filters from this dataset. Similar resultswith Gabor-like filters were also reproduced by the denoising autoencoders (Vincent et al., 2010),which method requires a careful choice of parameters, such as noise level, learning rate, and batchsize.In order to compare our methods, i.e. Algorithm 1 (Alg.1, see Appendix A.4.1) and Algorithm2 (Alg.2, see Appendix A.4.2), with other methods, i.e. IICA, FICA and SRBM, we implementedthese algorithms using the same initial weights and the same training data set (i.e. 105image patchespreprocessed by the ZCA whitening filters). To get a good result by IICA, we must carefully selectthe parameters; we set the batch size as 50, the initial learning rate as 0:01, and final learning rateas0:0001 , with an exponential decay with the epoch of iterations. IICA tends to have a fasterconvergence rate for a bigger batch size but it may become harder to escape local minima. ForFICA, we chose the nonlinearity function f(u) = log cosh( u)as contrast function, and for SRBM,we set the sparseness control constant pas0:01and0:03. The number of epoches for iterations wasset to 300for all algorithms. Figure 2 shows the filters learned by our methods and other methods.Each filter in Figure 2(a) corresponds to a column vector of matrix C(see Eq. A.69), where eachvector for display is normalized by ck ck=max(jc1;kj;;jcK;kj),k= 1;;K1. The resultsin Figures 2(a), 2(b) and 2(c) look very similar to one another, and slightly different from the resultsin Figure 2(d) and 2(e). There are no Gabor-like filters in Figure 2(f), which corresponds to SRBMwithp= 0:03.Figure 3 shows how the coefficient entropy (CFE) (see Eq. A.122) and the conditional entropy(CDE) (see Eq. A.125) varied with training time. We calculated CFE and CDE by sampling onceevery 10epoches from a total of 300epoches. These results show that our algorithms had a fastconvergence rate towards stable solutions while having CFE and CDE values similar to the algorithmof IICA, which converged much more slowly. 
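The ZCA whitening used to preprocess the patches in these comparisons is the symmetric transform built from the eigendecomposition of the patch covariance (cf. Eqs. A.23 and A.66-A.68). A minimal sketch, with the small regularizer eps and the mean subtraction added by me:

```python
import numpy as np

def zca_whiten(X, eps=1e-8):
    """ZCA-whiten the columns of X (shape K x M); returns the whitened data and the filter."""
    X = X - X.mean(axis=1, keepdims=True)            # zero-mean each pixel across samples
    Sigma = X @ X.T / (X.shape[1] - 1)               # covariance Sigma_x (Eq. A.23)
    lam, U = np.linalg.eigh(Sigma)                   # Sigma = U Lambda U^T
    W_zca = U @ np.diag(1.0 / np.sqrt(lam + eps)) @ U.T
    return W_zca @ X, W_zca
```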
Here the values of CFE and CDE should be as small6Published as a conference paper at ICLR 2017(a) (b) (c)(d) (e) (f)Figure 2: Comparison of filters obtained from 105natural image patches of size 12 12 by ourmethods (Alg.1 and Alg.2) and other methods. The number of output filters was K1= 144 . (a):Alg.1. ( b): Alg.2. ( c): IICA. ( d): FICA. ( e): SRBM (p= 0:01). (f): SRBM (p= 0:03).100101102time (seconds)1.81.851.91.952coefficient entropy (bits)Alg.1Alg.2IICAFICASRBM (p = 0.01)SRBM (p = 0.03)(a)100101102time (seconds)-400-350-300-250-200-150conditional entropy (bits)Alg.1Alg.2IICA (b)100101102time (seconds)-200-1000100200300conditional entropy (bits)SRBM (p = 0.01)SRBM (p = 0.03)SRBM (p = 0.05)SRBM (p = 0.10) (c)Figure 3: Comparison of quantization effects and convergence rate by coefficient entropy (seeA.122) and conditional entropy (see A.125) corresponding to training results (filters) shown in Fig-ure 2. The coefficient entropy (panel a) and conditional entropy (panel bandc) are shown as afunction of training time on a logarithmic scale. All experiments run on the same machine usingMatlab. Here we sampled once every 10epoches out of a total of 300epoches. We set epoch numbert0= 50 for Alg.1 and Alg.2 and the start time to 1second.as possible for a good representation learned from the same data set. Here we set epoch numbert0= 50 in our algorithms (see Alg.1 and Alg.2), and the start time was set to 1second. Thisexplains the step seen in Figure 3 (b) for Alg.1 and Alg.2 since the parameter was updated whenepoch number t=t0. FICA had a convergence rate close to our algorithms but had a big CFE,which is reflected by the quality of the filter results in Figure 2. The convergence rate and CFE forSRBM were close to IICA, but SRBM had a much bigger CDE than IICA, which implies that theinformation had a greater loss when passing through the system optimized by SRBM than by IICAor our methods.7Published as a conference paper at ICLR 2017From Figure 3(c) we see that the CDE (or MI I(X;R), see Eq. A.124 and A.125) decreases (orincreases) with the increase of the value of the sparseness control constant p. Note that a smallerpmeans sparser outputs. Hence, in this sense, increasing sparsity may result in sacrificing someinformation. On the other hand, a weak sparsity constraint may lead to failure of learning Gabor-like filters (see Figure 2(f)), and increasing sparsity has an advantage in reducing the impact ofnoise in many practical cases. Similar situation also occurs in sparse coding (Olshausen & Field,1997), which provides a class of algorithms for learning overcomplete dictionary representations ofthe input signals. However, its training is time consuming due to its expensive computational cost,although many new training algorithms have emerged (e.g. Aharon et al., 2006; Elad & Aharon,2006; Lee et al., 2006; Mairal et al., 2010). See Appendix A.5 for additional experimental results.4 C ONCLUSIONSIn this paper, we have presented a framework for unsupervised learning of representations via in-formation maximization for neural populations. Information theory is a powerful tool for machinelearning and it also provides a benchmark of optimization principle for neural information pro-cessing in nervous systems. Our framework is based on an asymptotic approximation to MI for alarge-scale neural population. To optimize the infomax objective, we first use hierarchical infomaxto obtain a good approximation to the global optimal solution. 
Analytical solutions of the hierarchi-cal infomax are further improved by a fast convergence algorithm based on gradient descent. Thismethod allows us to optimize highly nonlinear neural networks via hierarchical optimization usinginfomax principle.From the viewpoint of information theory, the unsupervised pre-training for deep learning (Hinton &Salakhutdinov, 2006; Bengio et al., 2007) may be reinterpreted as a process of hierarchical infomax,which might help explain why unsupervised pre-training helps deep learning (Erhan et al., 2010). Inour framework, a pre-whitening step can emerge naturally by the hierarchical infomax, which mightalso explain why a pre-whitening step is useful for training in many learning algorithms (Coateset al., 2011; Bengio, 2012).Our model naturally incorporates a considerable degree of biological realism. It allows the opti-mization of a large-scale neural population with noisy spiking neurons while taking into account ofmultiple biological constraints, such as membrane noise, limited energy, and bounded connectionweights. We employ a technique to attain a low-rank weight matrix for optimization, so as to reducethe influence of noise and discourage overfitting during training. In our model, many parametersare optimized, including the population density of parameters, filter weight vectors, and parametersfor nonlinear tuning functions. Optimizing all these model parameters could not be easily done bymany other methods.Our experimental results suggest that our method for unsupervised learning of representations hasobvious advantages in its training speed and robustness over the main existing methods. Our modelhas a nonlinear feedforward structure and is convenient for fast learning and inference. This simpleand flexible framework for unsupervised learning of presentations should be readily extended totraining deep structure networks. In future work, it would interesting to use our method to train deepstructure networks with either unsupervised or supervised learning.ACKNOWLEDGMENTSWe thank Prof. Honglak Lee for sharing Matlab code for algorithm comparison, Prof. Shan Tan fordiscussions and comments and Kai Liu for helping draw Figure 1. Supported by grant NIH-NIDCDR01 DC013698.REFERENCESAharon, M., Elad, M., & Bruckstein, A. (2006). K-SVD: An algorithm for designing overcompletedictionaries for sparse representation. Signal Processing, IEEE Transactions on , 54(11), 4311–4322.8Published as a conference paper at ICLR 2017Amari, S. (1999). Natural gradient learning for over- and under-complete bases in ica. NeuralComput. , 11(8), 1875–1883.Atick, J. J. (1992). Could information theory provide an ecological theory of sensory processing?Network: Comp. Neural. , 3(2), 213–251.Barlow, H. B. (1961). Possible principles underlying the transformation of sensory messages. Sen-sory Communication , (pp. 217–234).Bell, A. J. & Sejnowski, T. J. (1995). An information-maximization approach to blind separationand blind deconvolution. Neural Comput. , 7(6), 1129–1159.Bell, A. J. & Sejnowski, T. J. (1997). The ”independent components” of natural scenes are edgefilters. Vision Res. , 37(23), 3327–3338.Bengio, Y . (2012). Deep learning of representations for unsupervised and transfer learning. Unsu-pervised and Transfer Learning Challenges in Machine Learning , 7, 19.Bengio, Y ., Courville, A., & Vincent, P. (2013). Representation learning: A review and new per-spectives. 
Pattern Analysis and Machine Intelligence, IEEE Transactions on , 35(8), 1798–1828.Bengio, Y ., Lamblin, P., Popovici, D., Larochelle, H., et al. (2007). Greedy layer-wise training ofdeep networks. Advances in neural information processing systems , 19, 153.Borst, A. & Theunissen, F. E. (1999). Information theory and neural coding. Nature neuroscience ,2(11), 947–957.Carlo, C. N. & Stevens, C. F. (2013). Structural uniformity of neocortex, revisited. Proceedings ofthe National Academy of Sciences , 110(4), 1488–1493.Coates, A., Ng, A. Y ., & Lee, H. (2011). An analysis of single-layer networks in unsupervisedfeature learning. In International conference on artificial intelligence and statistics (pp. 215–223).Cortes, C. & Vapnik, V . (1995). Support-vector networks. Machine learning , 20(3), 273–297.Cover, T. M. & Thomas, J. A. (2006). Elements of Information, 2nd Edition . New York: Wiley-Interscience.Edelman, A., Arias, T. A., & Smith, S. T. (1998). The geometry of algorithms with orthogonalityconstraints. SIAM J. Matrix Anal. Appl. , 20(2), 303–353.Elad, M. & Aharon, M. (2006). Image denoising via sparse and redundant representations overlearned dictionaries. Image Processing, IEEE Transactions on , 15(12), 3736–3745.Erhan, D., Bengio, Y ., Courville, A., Manzagol, P.-A., Vincent, P., & Bengio, S. (2010). Why doesunsupervised pre-training help deep learning? The Journal of Machine Learning Research , 11,625–660.Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., &Bengio, Y . (2014). Generative adversarial nets. In Advances in Neural Information ProcessingSystems (pp. 2672–2680).Hinton, G. (2010). A practical guide to training restricted boltzmann machines. Momentum , 9(1),926.Hinton, G., Osindero, S., & Teh, Y .-W. (2006). A fast learning algorithm for deep belief nets. Neuralcomputation , 18(7), 1527–1554.Hinton, G. E. & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neuralnetworks. Science , 313(5786), 504–507.Huang, W. & Zhang, K. (2017). Information-theoretic bounds and approximations in neural popu-lation coding. Neural Comput, submitted, URL https://arxiv.org/abs/1611.01414 .9Published as a conference paper at ICLR 2017Hubel, D. H. & Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional archi-tecture in the cat’s visual cortex. The Journal of physiology , 160(1), 106–154.Hyv ̈arinen, A. (1999). Fast and robust fixed-point algorithms for independent component analysis.Neural Networks, IEEE Transactions on , 10(3), 626–634.Karklin, Y . & Simoncelli, E. P. (2011). Efficient coding of natural images with a population of noisylinear-nonlinear neurons. In Advances in neural information processing systems , volume 24 (pp.999–1007).Konstantinides, K. & Yao, K. (1988). Statistical analysis of effective singular values in matrix rankdetermination. Acoustics, Speech and Signal Processing, IEEE Transactions on , 36(5), 757–763.Kreutz-Delgado, K., Murray, J. F., Rao, B. D., Engan, K., Lee, T. S., & Sejnowski, T. J. (2003).Dictionary learning algorithms for sparse representation. Neural computation , 15(2), 349–396.LeCun, Y ., Bottou, L., Bengio, Y ., & Haffner, P. (1998). Gradient-based learning applied to docu-ment recognition. Proceedings of the IEEE , 86(11), 2278–2324.Lee, H., Battle, A., Raina, R., & Ng, A. Y . (2006). Efficient sparse coding algorithms. In Advancesin neural information processing systems (pp. 801–808).Lee, H., Ekanadham, C., & Ng, A. Y . (2008). 
Sparse deep belief net model for visual area v2. InAdvances in neural information processing systems (pp. 873–880).Lewicki, M. S. & Olshausen, B. A. (1999). Probabilistic framework for the adaptation and compar-ison of image codes. JOSA A , 16(7), 1587–1601.Lewicki, M. S. & Sejnowski, T. J. (2000). Learning overcomplete representations. Neural compu-tation , 12(2), 337–365.Linsker, R. (1988). Self-Organization in a perceptual network. Computer , 21(3), 105–117.Mairal, J., Bach, F., Ponce, J., & Sapiro, G. (2009). Online dictionary learning for sparse coding.InProceedings of the 26th annual international conference on machine learning (pp. 689–696).:ACM.Mairal, J., Bach, F., Ponce, J., & Sapiro, G. (2010). Online learning for matrix factorization andsparse coding. The Journal of Machine Learning Research , 11, 19–60.Montufar, G. F., Pascanu, R., Cho, K., & Bengio, Y . (2014). On the number of linear regions of deepneural networks. In Advances in Neural Information Processing Systems (pp. 2924–2932).Nair, V . & Hinton, G. E. (2010). Rectified linear units improve restricted boltzmann machines. InProceedings of the 27th International Conference on Machine Learning (ICML-10) (pp. 807–814).Olshausen, B. A. & Field, D. J. (1996). Emergence of simple-cell receptive field properties bylearning a sparse code for natural images. Nature , 381(6583), 607–609.Olshausen, B. A. & Field, D. J. (1997). Sparse coding with an overcomplete basis set: A strategyemployed by v1? Vision Res. , 37(23), 3311–3325.Rao, C. R. (1945). Information and accuracy attainable in the estimation of statistical parameters.Bulletin of the Calcutta Mathematical Society , 37(3), 81–91.Shannon, C. (1948). A mathematical theory of communications. Bell System Technical Journal , 27,379–423 and 623–656.Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout:A simple way to prevent neural networks from overfitting. The Journal of Machine LearningResearch , 15(1), 1929–1958.10Published as a conference paper at ICLR 2017Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y ., & Manzagol, P.-A. (2010). Stacked denoisingautoencoders: Learning useful representations in a deep network with a local denoising criterion.The Journal of Machine Learning Research , 11, 3371–3408.Yarrow, S., Challis, E., & Series, P. (2012). Fisher and shannon information in finite neural popula-tions. Neural computation , 24(7), 1740–1780.APPENDIXA.1 F ORMULAS FOR APPROXIMATION OF MUTUAL INFORMATIONIt follows from I(X;R) =Dlnp(xjr)p(x)Er;xand Eq. (1) that the conditional entropy should read:H(XjR) =hlnp(xjr)ir;x'12lndetG(x)2ex. (A.1)The Fisher information matrix J(x)(see Eq. 3), which is symmetric and positive semidefinite, canbe written also asJ(x) =@lnp(rjx)@x@lnp(rjx)@xTrjx. (A.2)If we suppose p(rjx)is conditional independent, namely, p(rjx) =QNn=1p(rnjx;n), then wehave (see Huang & Zhang, 2017)J(x) =NZp()S(x;)d, (A.3)S(x;) =@lnp(rjx;)@x@lnp(rjx;)@xTrjx, (A.4)wherep()is the population density function of parameter ,p() =1NNXn=1(n), (A.5)and()denotes the Dirac delta function. It can be proved that the approximation function of MIIG[p()](Eq. 1) is concave about p()(Huang & Zhang, 2017). In Eq. (A.3), we can approximatethe continuous integral by a discrete summation for numerical computation,J(x)NK1Xk=1kS(x;k), (A.6)wherePK1k=1k= 1,k>0,k= 1;;K1,1K1N.For Poisson neuron model, by Eq. 
(A.4) we have (see Huang & Zhang, 2017)p(rjx;) =f(x;)rr!exp (f(x;)), (A.7)S(x;) =1f(x;)@f(x;)@x@f(x;)@xT=@g(x;)@x@g(x;)@xT, (A.8)wheref(x;)0is the activation function (mean response) of neuron andg(x;) = 2pf(x;). (A.9)11Published as a conference paper at ICLR 2017Similarly, for Gaussian noise model, we havep(rjx;) =1p2exp (rf(x;))222!, (A.10)S(x;) =12@f(x;)@x@f(x;)@xT, (A.11)where>0denotes the standard deviation of noise.Sometimes we do not know the specific form of p(x)and only know Msamples, x1,,xM,which are independent and identically distributed (i.i.d.) samples drawn from the distribution p(x).Then we can use the empirical average to approximate the integral in Eq. (1):IG12MXm=1ln (det ( G(xm))) +H(X). (A.12)A.2 P ROOF OF PROPOSITION 1Proof. It follows from the data-processing inequality (Cover & Thomas, 2006) thatI(X;R)I(Y;R)I(Y;R)I(Y;R), (A.13)I(X;R)I(X;Y)I(X;Y)I(X;Y). (A.14)Sincep(ykjx) =p(yk1;;ykNkjx) =N(wTkx; N1k2),k= 1;;K1, (A.15)we havep( yjx) =p( yjx), (A.16)p( y) =p( y), (A.17)I(X;Y) =I(X;Y). (A.18)Hence, by (A.14) and (A.18), expression (12) holds.On the other hand, when Nkis large, from Eq. (10) we know that the distribution of Zk, namely,N0,N1k2, approaches a Dirac delta function (zk). Then by (7) and (9) we have p(rj y)'p(rjy) =p(rjx)andI(X;R) =I(Y;R)lnp(rjy)p(rjx)r;x=I(Y;R), (A.19)I(Y;R) =I(Y;R)lnp(rj y)p(rjy)r;y; y'I(Y;R), (A.20)I(Y;R) =I(Y;R)lnp(rj y)p(rjy)r;y; y'I(Y;R), (A.21)I(X;Y) =I(X;Y)lnp(xj y)p(xjy)x;y; y'I(X;Y). (A.22)It follows from (A.13) and (A.19) that (11) holds. Combining (11), (12) and (A.20)–(A.22), weimmediately get (13) and (14). This completes the proof of Proposition 1 . A.3 H IERARCHICAL OPTIMIZATION FOR MAXIMIZING I(X;R)In the following, we will discuss the optimization procedure for maximizing I(X;R)in two stages:maximizing I(X;Y)and maximizing I(Y;R).12Published as a conference paper at ICLR 2017A.3.1 T HE1STSTAGEIn the first stage, our goal is to maximize the MI I(X;Y)and get the optimal parameters wk(k= 1;;K1). Assume that the stimulus xhas zero mean (if not, let x xhxix) andcovariance matrix x. It follows from eigendecomposition thatx=xxTx1M1XXT=UUT, (A.23)where X= [x1,,xM],U= [u1;;uK]2RKKis an unitary orthogonal matrix and =diag21;;2Kis a positive diagonal matrix with 1K>0. Define~ x=1=2UTx, (A.24)~ wk=1=2UTwk, (A.25)yk=~ wTk~ x, (A.26)wherek= 1;;K1. The covariance matrix of ~ xis given by~ x=D~ x~ xTE~ xIK, (A.27)andIKis aKKidentity matrix. From (1) and (A.11) we have I(X;Y) =I(~X;Y)andI(~X;Y)'I0G=12ln det ~G2e!!+H(~X), (A.28)~GN2K1Xk=1k~ wk~ wTk+IK. (A.29)The following approximations are useful (see Huang & Zhang, 2017):p(~ x)N (0;IK), (A.30)P(~ x) =@2lnp(~ x)@~ x@~ xTIK. (A.31)By the central limit theorem, the distribution of random variable ~Xis closer to a normal distribu-tion than the distribution of the original random variable X. On the other hand, the PCA modelsassume multivariate gaussian data whereas the ICA models assume multivariate non-gaussian data.Hence by a PCA-like whitening transformation (A.24) we can use the approximation (A.31) withthe Laplace’s method of asymptotic expansion, which only requires that the peak be close to itsmean while random variable ~Xneeds not be exactly Gaussian.Without any constraints on the Gaussian channel of neural populations, especially the peak firingrates, the capacity of this channel may grow indefinitely: I(~X;Y)! 1 . The most commonconstraint on the neural populations is an energy or power constraint which can also be regarded asa signal-to-noise ratio (SNR) constraint. 
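To make Eqs. (A.6)-(A.9) and the empirical average (A.12) concrete, the sketch below evaluates the approximate MI I_G for a Poisson population with tuning f = g^2/4, g = sigma(w_k^T x + b), so that S(x; theta_k) = grad g_k grad g_k^T. The Gaussian stimulus prior, the random weights, and the sizes are placeholders of mine; H(X) uses the Gaussian closed form, and I read Eq. (A.12) as carrying the same 2*pi*e factor as Eq. (1).

```python
import numpy as np

rng = np.random.default_rng(2)
K, K1, N, M = 2, 8, 10**4, 2000

W = rng.standard_normal((K, K1))          # one parameter vector w_k per neuron class (placeholder)
b = np.zeros(K1)
alpha = np.ones(K1) / K1                  # uniform population density (cf. Eq. A.60)

Sigma_x = np.eye(K)                       # stimulus prior x ~ N(0, Sigma_x)  (placeholder)
X = rng.multivariate_normal(np.zeros(K), Sigma_x, size=M)

def approx_mutual_information(X):
    H_X = 0.5 * np.log(np.linalg.det(2 * np.pi * np.e * Sigma_x))   # Gaussian stimulus entropy
    P = np.linalg.inv(Sigma_x)                                      # -d^2 ln p(x) / dx dx^T (Eq. 4)
    total = 0.0
    for x in X:
        g = 1.0 / (1.0 + np.exp(-(W.T @ x + b)))                    # g_k = sigma(w_k^T x + b)
        dg = g * (1.0 - g)                                          # derivative of g_k w.r.t. its argument
        # S(x; theta_k) = (dg_k)^2 w_k w_k^T ;  J = N * sum_k alpha_k S_k  (Eqs. A.6, A.8)
        J = N * (W * (alpha * dg**2)) @ W.T
        G = J + P                                                   # Eq. (2)
        total += 0.5 * np.log(np.linalg.det(G / (2 * np.pi * np.e)))
    return total / M + H_X                                          # Eq. (A.12)

print(approx_mutual_information(X))
```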
The SNR for the output ynof the n-th neuron is given bySNRn=12DwTnx2Ex12~ wTn~ wn,n= 1;;N. (A.32)We require that1NNXn=1SNRn12K1Xk=1k~ wTk~ wk, (A.33)whereis a positive constant. Then by Eq. (A.28), (A.29) and (A.33), we have the followingoptimization problem:minimizeQ0G[^W] =12lndetN2^W^WT+IK, (A.34)subject toh= Tr^W^WTE0, (A.35)13Published as a conference paper at ICLR 2017where Tr ()denotes matrix trace and^W=~WA1=2=1=2UTWA1=2= [^ w1;;^ wK1], (A.36)A= diag (1;;K1), (A.37)W= [w1;;wK1], (A.38)~W= [~ w1;;~ wK1], (A.39)E=2. (A.40)HereEis a constant that does not affect the final optimal solution so we set E= 1. Then we obtainan optimal solution as follows:W=aU01=20VT0, (A.41)A=K11IK1, (A.42)a=qEK1K10=qK1K10, (A.43)0= diag21;;2K0, (A.44)U0=U(:;1:K0)2RKK0, (A.45)V0=V(:;1:K0)2RK1K0, (A.46)where V= [v1;;vK1]is anK1K1unitary orthogonal matrix, parameter K0represents thesize of the reduced dimension ( 1K0K), and its value will be determined below. Now theoptimal parameters wn(n= 1;;N) are clustered into K1classes (see Eq. A.6) and obey anuniform discrete distribution (see also Eq. A.60 in Appendix A.3.2).WhenK=K0=K1, the optimal solution of Win Eq. (A.41) is a whitening-like filter. WhenV=IK, the optimal matrix Wis the principal component analysis (PCA) whitening filters. In thesymmetrical case with V=U, the optimal matrix Wbecomes a zero component analysis (ZCA)whitening filter. If K <K 1, this case leads to an overcomplete solution, whereas when K0K1<K, the undercomplete solution arises. Since K0K1andK0K,Q0Gachieves its minimumwhenK0=K. However, in practice other factors may prevent it from reaching this minimum. Forexample, consider the average of squared weights,&=K1Xk=1kkwkk2= TrWAWT=EK0K0Xk=12k, (A.47)wherekkdenotes the Frobenius norm. The value of &is extremely large when any kbecomesvanishingly small. For real neurons these weights of connection are not allowed to be too large.Hence we impose a limitation on the weights: &E1, whereE1is a positive constant. This yieldsanother constraint on the objective function,~h=EK0K0Xk=12kE10. (A.48)From (A.35) and (A.48) we get the optimal K0= arg max ~K0E~K10P~K0k=12k. By this con-straint, small values of 2kwill often result in K0<K and a low-rank matrix W(Eq. A.41).On the other hand, the low-rank matrix Wcan filter out the noise of stimulus x. Consider thetransformation Y=WTXwithX= [x1,,xM]andY= [y1,,yM]forMsamples. Itfollows from the singular value decomposition (SVD) of XthatX=US~VT, (A.49)where Uis given in (A.23), ~Vis aMMunitary orthogonal matrix, Sis aKMdiagonal matrixwith non-negative real numbers on the diagonal, Sk;k=pM1k(k= 1;;K,KM), andSST= (M1). LetX=pM1U01=20~VT0X, (A.50)14Published as a conference paper at ICLR 2017where ~V0=~V(:;1:K0)2RMK0,0andU0are given in (A.44) and (A.45), respectively. ThenY=WTX=aV01=20UT0US~VT=WTX=apM1V0~VT0, (A.51)where Xcan be regarded as a denoised version of X. The determination of the effective rankK0Kof the matrix Xby using singular values is based on various criteria (Konstantinides &Yao, 1988). Here we choose K0as follows:K0= arg minK000@vuutPK00k=12kPKk=12k1A, (A.52)whereis a positive constant ( 0<1).Another advantage of a low-rank matrix Wis that it can significantly reduce overfitting for learningneural population parameters. In practice, the constraint (A.47) is equivalent to a weight-decay reg-ularization term used in many other optimization problems (Cortes & Vapnik, 1995; Hinton, 2010),which can reduce overfitting to the training data. 
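The effective-rank choice of Eq. (A.52) and the resulting low-rank filters W = a U0 Lambda0^(-1/2) V0^T of Eq. (A.41) can be sketched as below. I read Eq. (A.52) as "the smallest K0 whose leading eigenvalues reach a fraction beta of the spectrum, in the square-root sense written there"; the random orthonormal V0 is only a placeholder, since in the paper V0 (equivalently C) is what gets optimized afterwards.

```python
import numpy as np

def choose_K0(eigvals, beta=0.98):
    """Pick the effective rank K0 per my reading of Eq. (A.52).

    eigvals : eigenvalues sigma_k^2 of the input covariance, sorted in descending order.
    """
    frac = np.sqrt(np.cumsum(eigvals) / np.sum(eigvals))
    return int(np.searchsorted(frac, beta) + 1)      # smallest K0 with frac[K0-1] >= beta

def build_W(U, eigvals, K0, K1, a=1.0, V0=None):
    """Low-rank optimal filters W = a * U0 * Lambda0^(-1/2) * V0^T  (Eq. A.41).

    U       : (K, K) eigenvectors of the input covariance, columns ordered like eigvals.
    eigvals : (K,) eigenvalues sigma_k^2, descending.
    V0      : (K1, K0) matrix with orthonormal columns; a random one is used if not given.
    """
    U0 = U[:, :K0]
    Lam0_inv_sqrt = np.diag(1.0 / np.sqrt(eigvals[:K0]))
    if V0 is None:                                   # placeholder; the paper learns V0 (i.e. C) later
        V0, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((K1, K0)))
    return a * U0 @ Lam0_inv_sqrt @ V0.T             # W has shape (K, K1) and rank min(K0, K1)
```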
To prevent the neural networks from overfitting,Srivastava et al. (2014) presented a technique to randomly drop units from the neural network dur-ing training, which may in fact be regarded as an attempt to reduce the rank of the weight matrixbecause the dropout can result in a sparser weights (lower rank matrix). This means that the updateis only concerned with keeping the more important components, which is similar to first performinga denoising process by the SVD low rank approximation.In this stage, we have obtained the optimal parameter W(see A.41). The optimal value of matrixV0can also be determined, as shown in Appendix A.3.3.A.3.2 T HE2NDSTAGEFor this stage, our goal is to maximize the MI I(Y;R)and get the optimal parameters ~k,k= 1;;K1. Here the input is y= (y1;;yK1)Tand the output r= (r1;;rN)Tisalso clustered into K1classes. The responses of Nkneurons in the k-th subpopulation obey a Pois-son distribution with mean ~f(eTky;~k), where ekis a unit vector with 1in thek-th element andyk=eTky. By (A.24) and (A.26), we havehykiyk= 0, (A.53)2yk=y2kyk=k~ wkk2. (A.54)Then for large N, by (1)–(4) and (A.30) we can use the following approximation,I(Y;R)'IF=12*ln det J(y)2e!!+y+H(Y), (A.55)whereJ(y) = diagN1jg01(y1)j2;;NK1g0K1(yK1)2, (A.56)g0k(yk) =@gk(yk)@yk,k= 1;;K1, (A.57)gk(yk) = 2q~f(yk;~k),k= 1;;K1. (A.58)It is easy to get thatIF=12K1Xk=1*ln Nkjg0k(yk)j22e!+y+H(Y)12K1Xk=1*ln jg0k(yk)j22e!+yK12lnK1N+H(Y), (A.59)15Published as a conference paper at ICLR 2017where the equality holds if and only ifk=1K1;k= 1;;K1, (A.60)which is consistent with Eq. (A.42).On the other hand, it follows from the Jensen’s inequality thatIF=*ln0@p(y)1det J(y)2e!1=21A+ylnZdet J(y)2e!1=2dy, (A.61)where the equality holds if and only if p(y)1detJ(y)1=2is a constant, which implies thatp(y) =detJ(y)1=2RdetJ(y)1=2dy=QK1k=1jg0k(yk)jRQK1k=1jg0k(yk)jdy. (A.62)From (A.61) and (A.62), maximizing ~IFyieldsp(yk) =jg0k(yk)jRjg0k(yk)jdyk,k= 1;;K1. (A.63)We assume that (A.63) holds, at least approximately. Hence we can let the peak of g0k(yk)be atyk=hykiyk= 0andy2kyk=2yk=k~ wkk2. Then combining (A.57), (A.61) and (A.63) we findthe optimal parameters ~kfor the nonlinear functions ~f(yk;~k),k= 1;;K1.A.3.3 T HEFINAL OBJECTIVE FUNCTIONIn the preceding sections we have obtained the initial optimal solutions by maximizing IX;YandI(Y;R). In this section, we will discuss how to find the final optimal V0and other parametersby maximizing I(X;R)from the initial optimal solutions.First, we havey=~WT~ x=a^ y, (A.64)whereais given in (A.43) and^ y= (^y1;;^yK1)T=CT^ x=CT x, (A.65)^ x=1=20UT0x, (A.66)C=VT02RK0K1, (A.67) x=U01=20UT0x=U0^ x, (A.68)C=U0C= [ c1;; cK1]. (A.69)It follows thatI(X;R) =I~X;R'~IG=12lndetG(^ x)2e^ x+H(~X), (A.70)G(^ x) =N^W^^WT+IK, (A.71)^W=1=2UTWA1=2=aqK11IKK0C=qK10IKK0C, (A.72)16Published as a conference paper at ICLR 2017where IKK0is aKK0diagonal matrix with value 1on the diagonal and^=2, (A.73)= diag ((^y1);;(^yK1)), (A.74)(^yk) =a1@gk(^yk)@^yk, (A.75)gk(^yk) = 2q~f(^yk;~k), (A.76)^yk=a1yk=cTk^ x,k= 1;;K1. (A.77)Then we havedet (G(^ x)) = detNK10C^CT+IK0. (A.78)For largeNandK0=N!0, we havedet (G(^ x))det (J(^ x)) = detNK10C^CT, (A.79)~IG~IF=QK2ln (2e)K02ln (") +H(~X), (A.80)Q=12DlndetC^CTE^ x, (A.81)"=K0N. (A.82)Hence we can state the optimization problem as:minimizeQ[C] =12DlndetC^CTE^ x, (A.83)subject to CCT=IK0. (A.84)The gradient from (A.83) is given by:dQ[C]dC=C^CT1C^+^ x!T^ x, (A.85)where C= [c1;;cK1],!= (!1;;!K1)T, and!k=(^yk)0(^yk)cTkC^CT1ck,k= 1;;K1. 
(A.86)In the following we will discuss how to get the optimal solution of Cfor two specific cases.A.4 A LGORITHMS FOR OPTIMIZATION OBJECTIVE FUNCTIONA.4.1 A LGORITHM 1:K0=K1NowCCT=CTC=IK1, then by Eq. (A.83) we haveQ1[C] =*K1Xk=1ln ((^yk))+^ x, (A.87)dQ1[C]dC=^ x!T^ x, (A.88)!k=0(^yk)(^yk),k= 1;;K1. (A.89)Under the orthogonality constraints (Eq. A.84), we can use the following update rule for learning C(Edelman et al., 1998; Amari, 1999):Ct+1=Ct+tdCtdt, (A.90)dCtdt=dQ1[Ct]dCt+CtdQ1[Ct]dCtTCt, (A.91)17Published as a conference paper at ICLR 2017where the learning rate parameter tchanges with the iteration count t,t= 1;;tmax. Here wecan use the empirical average to approximate the integral in (A.88) (see Eq. A.12). We can alsoapply stochastic gradient descent (SGD) method for online updating of Ct+1in (A.90).The orthogonality constraint (Eq. A.84) can accelerate the convergence rate. In practice, the orthog-onal constraint (A.84) for objective function (A.83) is not strictly necessary in this case. We cancompletely discard this constraint condition and considerminimizeQ2[C] =*K1Xk=1ln ((^yk))+^ x12lndetCTC, (A.92)where we assume rank ( C) =K1=K0. If we letdCdt=CCTdQ2[C]dC, (A.93)thenTrdQ2[C]dCdCTdt=TrCTdQ2[C]dCdQ2[C]dCTC0. (A.94)Therefore we can use an update rule similar to Eq. A.90 for learning C. In fact, the method can alsobe extended to the case K0>K 1by using the same objective function (A.92).The learning rate parameter t(see A.90) is updated adaptively, as follows. First, calculatet=vtt,t= 1;;tmax, (A.95)t=1K1K1Xk=1krCt(:;k)kkCt(:;k)k, (A.96)andCt+1by (A.90) and (A.91), then calculate the value Q1Ct+1. IfQ1Ct+1<Q 1[Ct], thenletvt+1 vt, continue for the next iteration; otherwise, let vt vt,t vt=tand recalculateCt+1andQ1Ct+1. Here 0< v1<1and0< < 1are set as constants. After getting Ct+1for each update, we employ a Gram–Schmidt orthonormalization process for matrix Ct+1, wherethe orthonormalization process can accelerate the convergence. However, we can discard the Gram–Schmidt orthonormalization process after iterative t0(>1) epochs for more accurate optimizationsolution C. In this case, the objective function is given by the Eq. (A.92). We can also furtheroptimize parameter bby gradient descent.WhenK0=K1, the objective function Q2[C]in Eq. (A.92) without constraint is the same as theobjective function of infomax ICA (IICA) (Bell & Sejnowski, 1995; 1997), and as a consequencewe should get the same optimal solution C. Hence, in this sense, the IICA may be regarded as aspecial case of our method. Our method has a wider range of applications and can handle moregeneric situations. Our model is derived by neural populations with a huge number of neurons and itis not restricted to additive noise model. Moreover, our method has a faster convergence rate duringtraining than IICA (see Section 3).A.4.2 A LGORITHM 2:K0K1In this case, it is computationally expensive to update Cby using the gradient of Q(see Eq. A.85),since it needs to compute the inverse matrix for every ^ x. Here we provide an alternative method forlearning the optimal C. First, we consider the following inequalities.18Published as a conference paper at ICLR 2017Proposition 2. The following inequations hold,12DlndetC^CTE^ x12lndetCD^E^ xCT, (A.97)lndetCCT^ xlndetChi^ xCT(A.98)12lndetChi2^ xCT(A.99)12lndetCD^E^ xCT, (A.100)lndetCCT12lndetC^CT, (A.101)where C2RK0K1,K0K1, andCCT=IK0.Proof. Functions lndetCD^E^ xCTandlndetChi^ xCTare concave functions aboutp(^ x)(see the proof of Proposition 5.2. 
in Huang & Zhang, 2017), which fact establishes inequalities(A.97) and (A.98).Next we will prove the inequality (A.101). By SVD, we haveC=UDVT, (A.102)where Uis aK0K0unitary orthogonal matrix, V= [ v1; v2;; vK1]is anK1K1unitaryorthogonal matrix, and Dis anK0K1rectangular diagonal matrix with K0positive real numberson the diagonal. By the matrix Hadamard’s inequality and Cauchy–Schwarz inequality we havedetCCTCCTdetC^CT1= detDVTCTCVDTDDT1= detVT1CTCV1= detCV12K0Yk=1CV12k;kK0Yk=1CCT2k;kVT1V12k;k= 1, (A.103)where V1= [ v1; v2;; vK0]2RK1K0. The last equality holds because of CCT=IK0andVT1V1=IK0. This establishes inequality (A.101) and the equality holds if and only if K0=K1orCV1=IK0.Similarly, we get inequality (A.99):lndetChi^ xCT12lndetChi2^ xCT. (A.104)By Jensen’s inequality, we haveh(^yk)i2^ xD(^yk)2E^ x,8k= 1;;K1. (A.105)Then it follows from (A.105) that inequality (A.100) holds:12lndetChi2^ xCT12lndetCD^E^ xCT. (A.106)19Published as a conference paper at ICLR 2017This completes the proof of Proposition 2 . ByProposition 2, ifK0=K1then we get12Dlndet^E^ x12lndetD^E^ x, (A.107)hln (det ( ))i^ xln (det (hi^ x)) (A.108)=12lndethi2^ x(A.109)12lndetD^E^ x, (A.110)ln (det ( )) =12lndet^. (A.111)On the other hand, it follows from (A.81) and Proposition 2 thatlndetCCT^ xQ12lndetCD^E^ xCT, (A.112)lndetCCT^ x^Q12lndetCD^E^ xCT. (A.113)Hence we can see that ^Qis close toQ(see A.81). Moreover, it follows from the Cauchy–Schwarzinequality thatD()k;kE^ x=h(^yk)i^ykZ(^yk)2d^ykZp(^yk)2d^yk1=2, (A.114)wherek= 1;;K1, the equality holds if and only if the following holds:p(^yk) =(^yk)R(^yk)d^yk,k= 1;;K1, (A.115)which is the similar to Eq. (A.63).SinceI(X;R) =I(Y;R)(seeProposition 1), by maximizing I(X;R)we hope the equality ininequality (A.61) and equality (A.63) hold, at least approximatively. On the other hand, letCopt= arg minCQ[C] = arg maxCDlndet(C^CT)E^ x, (A.116)^Copt= arg minC^Q[C] = arg maxClndetChi2^ xCT, (A.117)Coptand^Coptmake (A.63) and (A.115) to hold true, which implies that they are the same optimalsolution: Copt=^Copt.Therefore, we can use the following objective function ^Q[C]as a substitute for Q[C]and write theoptimization problem as:minimize ^Q[C] =12lndetChi2^ xCT, (A.118)subject to CCT=IK0. (A.119)The update rule (A.90) may also apply here and a modified algorithm similar to Algorithm 1 maybe used for parameter learning.A.5 S UPPLEMENTARY EXPERIMENTSA.5.1 Q UANTITATIVE METHODS FOR COMPARISONTo quantify the efficiency of learning representations by the above algorithms, we calculate the co-efficient entropy (CFE) for estimating coding cost as follows (Lewicki & Olshausen, 1999; Lewicki& Sejnowski, 2000):yk= wTk x,k= 1;;K1, (A.120)=K1PK1k=1k wkk, (A.121)20Published as a conference paper at ICLR 2017where xis defined by Eq. (A.68), and wkis the corresponding optimal filter. To estimate theprobability density of coefficients qk(yk)(k= 1;;K1) from theMtraining samples, we applythe kernel density estimation for qk(yk)and use a normal kernel with an adaptive optimal windowwidth. Then we define the CFE hash=1K1K1Xk=1Hk(Yk), (A.122)Hk(Yk) =Pnqk(n) log2qk(n), (A.123)whereqk(yk)is quantized as discrete qk(n)andis the step size.Methods such as IICA and SRBM as well as our methods have feedforward structures in whichinformation is transferred directly through a nonlinear function, e.g., the sigmoid function. 
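As a concrete illustration of the coefficient-entropy (CFE) estimate in Eqs. (A.120)-(A.123), the sketch below computes the coefficients, estimates each marginal density with a normal-kernel KDE, and sums the quantized entropies. This is only a sketch under stated assumptions: the filter matrix W and the whitened samples are taken as given, SciPy's default (Scott's rule) bandwidth stands in for the "adaptive optimal window width", the rescaling of Eq. (A.121) is read as one global factor applied to all coefficients, and the quantization step delta is a free parameter not fixed by the text.

```python
# Sketch of the CFE estimate of Eqs. (A.120)-(A.123); bandwidth and quantization
# choices here are assumptions, not taken from the paper.
import numpy as np
from scipy.stats import gaussian_kde

def coefficient_entropy(W, X_bar, delta=0.05):
    """W: (K, K1) filter matrix (columns w_k); X_bar: (K, M) whitened samples."""
    K1 = W.shape[1]
    sigma = K1 / np.linalg.norm(W, axis=0).sum()   # global rescaling, Eq. (A.121)
    Y = sigma * (W.T @ X_bar)                      # coefficients y_k, Eq. (A.120)
    H = []
    for k in range(K1):
        kde = gaussian_kde(Y[k])                   # normal-kernel density estimate
        grid = np.arange(Y[k].min(), Y[k].max(), delta)
        q = np.clip(kde(grid), 1e-12, None)        # quantized density q_k(n * delta)
        H.append(-np.sum(q * np.log2(q) * delta))  # H_k(Y_k), Eq. (A.123)
    return float(np.mean(H))                       # CFE h, Eq. (A.122)
```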
Wecan use the amount of transmitted information to measure the results learned by these methods.Consider a neural population with Nneurons, which is a stochastic system with nonlinear transferfunctions. We chose a sigmoidal transfer function and Gaussian noise with standard deviation set to1as the system noise. In this case, from (1), (A.8) and (A.11), we see that the approximate MI IGisequivalent to the case of the Poisson neuron model. It follows from (A.70)–(A.82) thatI(X;R) =I~X;R=H(~X)H~XjR'~IG=H(~X)h1, (A.124)H~XjR'h1=12lndet12eNK10C^CT+IK0^ x, (A.125)where we set N= 106. A good representation should make the MI I(X;R)as big as possible.Equivalently, for the same inputs, a good representation should make the conditional entropy (CDE)H~XjR(orh1) as small as possible.(a) (b) (c)(d) (e) (f)Figure 4: Comparison of basis vectors obtained by our method and other methods. Panel ( a)–(e)correspond to panel ( a)–(e) in Figure 2, where the basis vectors are given by (A.130). The basisvectors in panel ( f) are learned by MBDL and given by (A.127).21Published as a conference paper at ICLR 2017A.5.2 C OMPARISON OF BASIS VECTORSWe compared our algorithm with an up-to-date sparse coding algorithm, the mini-batch dictionarylearning (MBDL) as given in (Mairal et al., 2009; 2010) and integrated in Python library, i.e. scikit-learn. The input data was the same as the above, i.e. 105nature image patches preprocessed by theZCA whitening filters.We denotes the optimal dictionary learned by MBDL as B2RKK1for which each columnrepresents a basis vector. Now we havexU1=2UTBy=~By, (A.126)~B=U1=2UTB, (A.127)where y= (y1;;yK1)Tis the coefficient vector.Similarly, we can obtain a dictionary from the filter matrix C. Suppose rank ( C) =K0K1, thenit follows from (A.64) that^ x=aCCT1Cy. (A.128)By (A.66) and (A.128), we getxBy=aBCT1=20UT0x, (A.129)B=a1U01=20CCT1C= [b1;;bK1], (A.130)where y=WTx=aCT1=20UT0x, the vectors b1;;bK1can be regarded as the basis vectorsand the strict equality holds when K0=K1=K. Recall that X= [x1,,xM] =US~VT(see Eq. A.49) and Y= [y1,,yM] =WTX=apM1CT~VT0, then we get X=BY =pM1U01=20~VT0X. Hence, Eq. (A.129) holds.The basis vectors shown in Figure 4(a)–4(e) correspond to filters in Figure 2(a)–2(e). And Fig-ure 4(f) illustrates the optimal dictionary ~Blearned by MBDL, where we set the regularization pa-rameter as= 1:2=pK, the batch size as 50and the total number of iterations to perform as 20000 ,which took about 3hours for training. From Figure 4 we see that these basis vectors obtained by theabove algorithms have local Gabor-like shapes except for those by SRBM. If rank( B) =K=K1,then the matrix BTcan be regarded as a filter matrix like matrix C(see Eq. A.69). However,from the column vector of matrix BTwe cannot find any local Gabor-like filter that resembles thefilters shown in Figure 2. Our algorithm has less computational cost and a much faster convergencerate than the sparse coding algorithm. Moreover, the sparse coding method involves a dynamicgenerative model that requires relaxation and is therefore unsuitable for fast inference, whereas thefeedforward framework of our model is easy for inference because it only requires evaluating thenonlinear tuning functions.A.5.3 L EARNING OVERCOMPLETE BASESWe have trained our model on the Olshausen’s nature image patches with a highly overcompletesetup by optimizing the objective (A.118) by Alg.2 and got Gabor-like filters. 
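As a concrete illustration of Eq. (A.130), the basis vectors plotted in Figures 4 and 5 could be assembled from the learned filter matrix C as sketched below. The whitening quantities U0 and Lambda0 and the scalar gain a of Eq. (A.43) are assumed to be available from the earlier stages, and C C^T is assumed invertible (K0 <= K1 with full row rank); none of these inputs are produced by the snippet itself.

```python
# Sketch of the basis construction of Eq. (A.130): B = a^{-1} U0 Lambda0^{1/2} (C C^T)^{-1} C.
import numpy as np

def basis_from_filters(C, U0, lam0, a):
    """C: (K0, K1) filters; U0: (K, K0) eigenvectors; lam0: (K0,) eigenvalues; a: scalar gain."""
    gram_inv = np.linalg.inv(C @ C.T)              # (C C^T)^{-1}, K0 x K0
    B = (U0 * np.sqrt(lam0)) @ gram_inv @ C / a    # columns b_1, ..., b_K1 are the basis vectors
    return B
```

Each column of B can then be reshaped to the patch size for display, as in Figures 4 and 5.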
The results of 400typical filters chosen from 1024 output filters are displayed in Figure 5(a) and corresponding base(see Eq. A.130) are shown in Figure 5(b). Here the parameters are K1= 1024 ,tmax= 100 ,v1= 0:4,= 0:8, and= 0:98(see A.52), from which we got rank ( B) =K0= 82 . Comparedto the ICA-like results in Figure 2(a)–2(c), the average size of Gabor-like filters in Figure 5(a) isbigger, indicating that the small noise-like local structures in the images have been filtered out.We have also trained our model on 60,000 images of handwritten digits from MNIST dataset (LeCunet al., 1998) and the resultant 400typical optimal filters and bases are shown in Figure 5(c) andFigure 5(d), respectively. All parameters were the same as Figure 5(a) and Figure 5(b): K1= 1024 ,tmax= 100 ,v1= 0:4,= 0:8and= 0:98, from which we got rank ( B) =K0= 183 . Fromthese figures we can see that the salient features of the input images are reflected in these filters andbases. We could also get the similar overcomplete filters and bases by SRBM and MBDL. However,the results depended sensitively on the choice of parameters and the training took a long time.22Published as a conference paper at ICLR 2017(a) (b)(c) (d)Figure 5: Filters and bases obtained from Olshausen’s image dataset and MNIST dataset by Al-gorithm 2. ( a) and ( b):400typical filters and the corresponding bases obtained from Olshausen’simage dataset, where K0= 82 andK1= 1024 . (c) and ( d):400typical filters and the correspondingbases obtained from the MNIST dataset, where K0= 183 andK1= 1024 .Figure 6 shows that CFE as a function of training time for Alg.2, where Figure 6(a) corresponds toFigure 5(a)-5(b) for learning nature image patches and Figure 6(b) corresponds to Figure 5(c)-5(d)for learning MNIST dataset. We set parameters tmax= 100 and= 0:8for all experiments andvaried parameter v1for each experiment, with v1= 0:2,0:4,0:6or0:8. These results indicate a fastconvergence rate for training on different datasets. Generally, the convergence is insensitive to thechange of parameter v1.We have also performed additional tests on other image datasets and got similar results, confirmingthe speed and robustness of our learning method. Compared with other methods, e.g., IICA, FICA,MBDL, SRBM or sparse autoencoders etc., our method appeared to be more efficient and robust forunsupervised learning of representations. We also found that complete and overovercomplete filtersand bases learned by our methods had local Gabor-like shapes while the results by SRBM or MBDLdid not have this property.23Published as a conference paper at ICLR 2017100101102time (seconds)1.751.81.851.91.95coefficient entropy (bits)v1 = 0.2v1 = 0.4v1 = 0.6v1 = 0.8(a)100101102time (seconds)1.61.71.81.922.1coefficient entropy (bits)v1 = 0.2v1 = 0.4v1 = 0.6v1 = 0.8 (b)Figure 6: CFE as a function of training time for Alg.2, with v1= 0:2,0:4,0:6or0:8. In allexperiments parameters were set to tmax= 100 ,t0= 50 and= 0:8. (a): corresponding toFigure 5(a) or Figure 5(b). ( b): corresponding to Figure 5(c) or Figure 5(d).A.5.4 I MAGE DENOISINGSimilar to the sparse coding method applied to image denoising (Elad & Aharon, 2006), our method(see Eq. A.130) can also be applied to image denoising, as shown by an example in Figure 7. 
Thefilters or bases were learned by using 77image patches sampled from the left half of the image, andsubsequently used to reconstruct the right half of the image which was distorted by Gaussian noise.A common practice for evaluating the results of image denoising is by looking at the differencebetween the reconstruction and the original image. If the reconstruction is perfect the differenceshould look like Gaussian noise. In Figure 7(c) and 7(d) a dictionary ( 100bases) was learned byMBDL and orthogonal matching pursuit was used to estimate the sparse solution.1For our method(shown in Figure 7(b)), we first get the optimal filters parameter W, a low rank matrix ( K0<K ),then from the distorted image patches xm(m= 1;;M) we get filter outputs ym=WTxmand the reconstruction xm=Bym(parameters: = 0:975andK0=K1= 14 ). As can be seenfrom Figure 7, our method worked better than dictionary learning, although we only used 14basescompared with 100bases used by dictionary learning. Our method is also more efficient. We canget better optimal bases Bby a generative model using our infomax approach (details not shown).1Python source code is available at http://scikit-learn.org/stable/ downloads/plot image denoising.py24Published as a conference paper at ICLR 2017(a) (b)(c) (d)Figure 7: Image denoising. ( a): the right half of the original image is distorted by Gaussian noiseand the norm of the difference between the distorted image and the original image is 23:48. (b):image denoising by our method (Algorithm 1), with 14bases used. ( c) and ( d): image denoisingusing dictionary learning, with 100bases used.25
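To make the patch-wise reconstruction step of the denoising example above concrete, the following is a minimal sketch. The filter matrix W (patch dimension by K1, low rank) and the basis matrix B of Eq. (A.130) are assumed to have been learned from the clean half of the image beforehand; dense stride-1 patch extraction and plain averaging of overlapping reconstructions are implementation choices not fixed by the text.

```python
# Sketch of the denoising step: for each vectorized 7x7 patch x_m of the noisy image,
# compute y_m = W^T x_m and the reconstruction B y_m, then average overlapping patches.
import numpy as np

def denoise(noisy, W, B, p=7):
    """noisy: (H, Wd) image; W, B: (p*p, K1) learned filter and basis matrices."""
    H, Wd = noisy.shape
    out = np.zeros_like(noisy, dtype=float)
    cnt = np.zeros_like(noisy, dtype=float)
    for i in range(H - p + 1):
        for j in range(Wd - p + 1):
            x = noisy[i:i + p, j:j + p].reshape(-1)   # vectorized patch x_m
            rec = (B @ (W.T @ x)).reshape(p, p)       # x_m ~ B y_m with y_m = W^T x_m
            out[i:i + p, j:j + p] += rec
            cnt[i:i + p, j:j + p] += 1.0
    return out / cnt
```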
Sy2vE3xNl
ry3iBFqgl
ICLR.cc/2017/conference/-/paper489/official/review
{"title": "Need better human evaluation and comparison with SQuAD", "rating": "6: Marginally above acceptance threshold", "review": "Summary: The paper proposes a novel machine comprehension dataset called NEWSQA. The dataset consists of over 100,000 question answer pairs based on over 10,000 news articles from CNN. The paper analyzes the different types of answers and the different types of reasoning required to answer questions in the dataset. The paper evaluates human performance and the performance of two baselines on the dataset and compares them with the performance on SQuAD dataset. \n\nStrengths:\n\n1. The paper presents a large scale dataset for machine comprehension. \n\n2. The question collection method seems reasonable to collect exploratory questions. Having an answer validation step is desirable.\n\n3. The paper proposes a novel (computationally more efficient) implementation of the match-LSTM model.\n\nWeaknesses:\n\n1. The human evaluation presented in the paper is not satisfactory because the human performance is reported on a very small subset (200 questions). So, it seems unlikely that these 200 questions will provide a reliable measure of the human performance on the entire dataset (which consists of thousands of questions).\n\n2. NEWSQA dataset is very similar to SQuAD dataset in terms of the size of the dataset, the type of dataset -- natural language questions posed by crowdworkers, answers comprising of spans of text from related paragraphs. The paper presents two empirical ways to show that NEWSQA is more challenging than SQuAD -- 1) the gap between human and machine performance in NEWSQA is larger than that in SQuAD. However, since the human performance numbers are reported on very small subset, these trends might not carry over when human performance is computed on all of the dataset.\n2) the sentence-level accuracy on SQuAD is higher than that in NEWSQA. However, as the paper mentions, the differences in accuracies could likely be due to different lengths of documents in the two datasets. So, even this measure does not truly reflect that SQuAD is less challenging than NEWSQA.\nSo, it is not clear if NEWSQA is truly more challenging than SQuAD.\n\n3. Authors mention that BARB is computationally more efficient and faster compared to match-LSTM. However, the paper does not report how much faster BARB is compared to match-LSTM.\n\n4. On page 7, under \"Boundary pointing\" paragraph, the paper should clarify what \"s\" in \"n_s\" refers to.\n\nReview summary: While the dataset collection method seems interesting and promising, I would be more convinced after I see the following --\n1. Human performance on all (or significant percentage of the dataset).\n2. An empirical study that fairly shows that NEWSQA is more challenging (or better in some other way) than SQuAD.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
NEWSQA: A MACHINE COMPREHENSION DATASET
["Adam Trischler", "Tong Wang", "Xingdi Yuan", "Justin Harris", "Alessandro Sordoni", "Philip Bachman", "Kaheer Suleman"]
We present NewsQA, a challenging machine comprehension dataset of over 100,000 question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting in spans of text from the corresponding articles. We collect this dataset through a four- stage process designed to solicit exploratory questions that require reasoning. A thorough analysis confirms that NewsQA demands abilities beyond simple word matching and recognizing entailment. We measure human performance on the dataset and compare it to several strong neural models. The performance gap between humans and machines (25.3% F1) indicates that significant progress can be made on NewsQA through future research. The dataset is freely available at datasets.maluuba.com/NewsQA.
["Natural language processing", "Deep learning"]
https://openreview.net/forum?id=ry3iBFqgl
https://openreview.net/pdf?id=ry3iBFqgl
https://openreview.net/forum?id=ry3iBFqgl&noteId=Sy2vE3xNl
Under review as a conference paper at ICLR 2017NEWSQA: A M ACHINE COMPREHENSION DATASETAdam TrischlerTong WangXingdi YuanJustin HarrisAlessandro Sordoni Philip Bachman Kaheer Suleman{adam.trischler, tong.wang, eric.yuan, justin.harris,alessandro.sordoni, phil.bachman, k.suleman}@maluuba.comMaluuba ResearchMontréal, Québec, CanadaABSTRACTWe present NewsQA , a challenging machine comprehension dataset of over 100,000question-answer pairs. Crowdworkers supply questions and answers based on aset of over 10,000 news articles from CNN, with answers consisting in spansof text from the corresponding articles. We collect this dataset through a four-stage process designed to solicit exploratory questions that require reasoning. Athorough analysis confirms that NewsQA demands abilities beyond simple wordmatching and recognizing entailment. We measure human performance on thedataset and compare it to several strong neural models. The performance gapbetween humans and machines (0.198 in F1) indicates that significant progress canbe made on NewsQA through future research. The dataset is freely available atdatasets.maluuba.com/NewsQA .1 I NTRODUCTIONAlmost all human knowledge is recorded in the language of text. As such, comprehension of writtenlanguage by machines, at a near-human level, would enable a broad class of artificial intelligenceapplications. In human students we evaluate reading comprehension by posing questions basedon a text passage and then assessing a student’s answers. Such comprehension tests are appealingbecause they are objectively gradable and may measure a range of important abilities, from basicunderstanding to causal reasoning to inference (Richardson et al., 2013). To teach literacy to machines,the research community has taken a similar approach with machine comprehension (MC).Recent years have seen the release of a host of MC datasets. Generally, these consist of (document,question, answer) triples to be used in a supervised learning framework. Existing datasets vary in size,difficulty, and collection methodology; however, as pointed out by Rajpurkar et al. (2016), most sufferfrom one of two shortcomings: those that are designed explicitly to test comprehension (Richardsonet al., 2013) are too small for training data-intensive deep learning models, while those that aresufficiently large for deep learning (Hermann et al., 2015; Hill et al., 2016; Bajgar et al., 2016) aregenerated synthetically, yielding questions that are not posed in natural language and that may nottest comprehension directly (Chen et al., 2016). More recently, Rajpurkar et al. (2016) sought toovercome these deficiencies with their crowdsourced dataset, SQuAD .Here we present a challenging new largescale dataset for machine comprehension: NewsQA .NewsQAcontains 119,633 natural language questions posed by crowdworkers on 12,744 news articles fromCNN. Answers to these questions consist in spans of text within the corresponding article highlightedby a distinct set of crowdworkers. To build NewsQA we utilized a four-stage collection processdesigned to encourage exploratory, curiosity-based questions that reflect human information seeking.CNN articles were chosen as the source material because they have been used in the past (Hermannet al., 2015) and, in our view, machine comprehension systems are particularly suited to high-volume,rapidly changing information sources like news.These three authors contributed equally.1Under review as a conference paper at ICLR 2017As Trischler et al. (2016a), Chen et al. 
(2016), and others have argued, it is important for datasetsto be sufficiently challenging to teach models the abilities we wish them to learn. Thus, in linewith Richardson et al. (2013), our goal with NewsQA was to construct a corpus of questions thatnecessitates reasoning mechanisms, such as synthesis of information across different parts of anarticle. We designed our collection methodology explicitly to capture such questions.The challenging characteristics of NewsQA that distinguish it from most previous comprehensiontasks are as follows:1. Answers are spans of arbitrary length within an article, rather than single words or entities.2. Some questions have no answer in the corresponding article (the nullspan).3. There are no candidate answers from which to choose.4.Our collection process encourages lexical and syntactic divergence between questions andanswers.5.A significant proportion of questions requires reasoning beyond simple word- and context-matching (as shown in our analysis).In this paper we describe the collection methodology for NewsQA , provide a variety of statistics tocharacterize it and contrast it with previous datasets, and assess its difficulty. In particular, we measurehuman performance and compare it to that of two strong neural-network baselines. Unsurprisingly,humans significantly outperform the models we designed and assessed, achieving an F1 score of0.694 versus 0.496 for the best-performing machine. We hope that this corpus will spur furtheradvances on the challenging task of machine comprehension.2 R ELATED DATASETSNewsQA follows in the tradition of several recent comprehension datasets. These vary in size,difficulty, and collection methodology, and each has its own distinguishing characteristics. We agreewith Bajgar et al. (2016) who have said “models could certainly benefit from as diverse a collectionof datasets as possible.” We discuss this collection below.2.1 MCT ESTMCTest (Richardson et al., 2013) is a crowdsourced collection of 660 elementary-level children’sstories with associated questions and answers. The stories are fictional, to ensure that the answer mustbe found in the text itself, and carefully limited to what a young child can understand. Each questioncomes with a set of 4 candidate answers that range from single words to full explanatory sentences.The questions are designed to require rudimentary reasoning and synthesis of information acrosssentences, making the dataset quite challenging. This is compounded by the dataset’s size, whichlimits the training of expressive statistical models. Nevertheless, recent comprehension models haveperformed well on MCTest (Sachan et al., 2015; Wang et al., 2015), including a highly structuredneural model (Trischler et al., 2016a). These models all rely on access to the small set of candidateanswers, a crutch that NewsQA does not provide.2.2 CNN/D AILY MAILTheCNN/Daily Mail corpus (Hermann et al., 2015) consists of news articles scraped from thoseoutlets with corresponding cloze-style questions. Cloze questions are constructed syntheticallyby deleting a single entity from abstractive summary points that accompany each article (writtenpresumably by human authors). As such, determining the correct answer relies mostly on recognizingtextual entailment between the article and the question. 
The named entities within an article areidentified and anonymized in a preprocessing step and constitute the set of candidate answers; contrastthis with NewsQA in which answers often include longer phrases and no candidates are given.Because the cloze process is automatic, it is straightforward to collect a significant amount of datato support deep-learning approaches: CNN/Daily Mail contains about 1.4 million question-answerpairs. However, Chen et al. (2016) demonstrated that the task requires only limited reasoning and, in2Under review as a conference paper at ICLR 2017fact, performance of the strongest models (Kadlec et al., 2016; Trischler et al., 2016b; Sordoni et al.,2016) nearly matches that of humans.2.3 C HILDREN ’SBOOK TESTTheChildren’s Book Test (CBT ) (Hill et al., 2016) was collected using a process similar to that ofCNN/Daily Mail . Text passages are 20-sentence excerpts from children’s books available throughProject Gutenberg; questions are generated by deleting a single word in the next ( i.e., 21st) sentence.Consequently, CBT evaluates word prediction based on context. It is a comprehension task insofar ascomprehension is likely necessary for this prediction, but comprehension may be insufficient andother mechanisms may be more important.2.4 B OOK TESTBajgar et al. (2016) convincingly argue that, because existing datasets are not large enough, we haveyet to reach the full capacity of existing comprehension models. As a remedy they present BookTest .This is an extension to the named-entity and common-noun strata of CBT that increases their sizeby over 60 times. Bajgar et al. (2016) demonstrate that training on the augmented dataset yields amodel (Kadlec et al., 2016) that matches human performance on CBT. This is impressive and suggeststhat much is to be gained from more data, but we repeat our concerns about the relevance of storyprediction as a comprehension task. We also wish to encourage more efficient learning from less data.2.5 SQ UADThe comprehension dataset most closely related to NewsQA isSQuAD (Rajpurkar et al., 2016). Itconsists of natural language questions posed by crowdworkers on paragraphs from high-PageRankWikipedia articles. As in NewsQA , each answer consists of a span of text from the related paragraphand no candidates are provided. Despite the effort of manual labelling, SQuAD ’s size is significantand amenable to deep learning approaches: 107,785 question-answer pairs based on 536 articles.SQuAD is a challenging comprehension task in which humans far outperform machines. Theauthors measured human accuracy at 0.905 in F1 (we measured human F1 at 0.807 using a differentmethodology), whereas at the time of the writing, the strongest published model to date achieves only0.700 in F1 (Wang & Jiang, 2016b).3 C OLLECTION METHODOLOGYWe collected NewsQA through a four-stage process: article curation, question sourcing, answersourcing, and validation. We also applied a post-processing step with answer agreement consolidationand span merging to enhance the usability of the dataset.3.1 A RTICLE CURATIONWe retrieve articles from CNN using the script created by Hermann et al. (2015) for CNN/DailyMail . From the returned set of 90,266 articles, we select 12,744 uniformly at random. These cover awide range of topics that includes politics, economics, and current events. 
Articles are partitioned atrandom into a training set (90%), a development set (5%), and a test set (5%).3.2 Q UESTION SOURCINGIt was important to us to collect challenging questions that could not be answered using straightforwardword- or context-matching. Like Richardson et al. (2013) we want to encourage reasoning incomprehension models. We are also interested in questions that, in some sense, model humancuriosity and reflect actual human use-cases of information seeking. Along a similar line, we considerit an important (though as yet overlooked) capacity of a comprehension model to recognize whengiven information is inadequate, so we are also interested in questions that may not have sufficientevidence in the text. Our question sourcing stage was designed to solicit questions of this nature, anddeliberately separated from the answer sourcing stage for the same reason.3Under review as a conference paper at ICLR 2017Questioners (a distinct set of crowdworkers) see only a news article’s headline and its summarypoints (also available from CNN); they do not see the full article itself. They are asked to formulatea question from this incomplete information. This encourages curiosity about the contents of thefull article and prevents questions that are simple reformulations of sentences in the text. It alsoincreases the likelihood of questions whose answers do not exist in the text. We reject questions thathave significant word overlap with the summary points to ensure that crowdworkers do not treat thesummaries as mini-articles, and further discouraged this in the instructions. During collection eachQuestioner is solicited for up to three questions about an article. They are provided with positive andnegative examples to prompt and guide them (detailed instructions are shown in Figure 3).3.3 A NSWER SOURCINGA second set of crowdworkers ( Answerers ) provide answers. Although this separation of questionand answer increases the overall cognitive load, we hypothesized that unburdening Questioners inthis way would encourage more complex questions. Answerers receive a full article along with acrowdsourced question and are tasked with determining the answer. They may also reject the questionas nonsensical, or select the nullanswer if the article contains insufficient information. Answers aresubmitted by clicking on and highlighting words in the article while instructions encourage the setof answer words to consist in a single continuous span (again, we give an example prompt in theAppendix). For each question we solicit answers from multiple crowdworkers (avg. 2.73) with theaim of achieving agreement between at least two Answerers.3.4 V ALIDATIONCrowdsourcing is a powerful tool but it is not without peril (collection glitches; uninterested ormalicious workers). To obtain a dataset of the highest possible quality we use a validation processthat mitigates some of these issues. In validation, a third set of crowdworkers sees the full article, aquestion, and the set of unique answers to that question. We task these workers with choosing thebest answer from the candidate set or rejecting all answers. Each article-question pair is validated byan average of 2.48 crowdworkers. 
Validation was used on those questions without answer-agreementafter the previous stage, amounting to 43.2% of all questions.3.5 A NSWER MARKING AND CLEANUPAfter validation, 86.0% of all questions in NewsQA have answers agreed upon by at least two separatecrowdworkers—either at the initial answer sourcing stage or in the top-answer selection. Thisimproves the dataset’s quality. We choose to include the questions without agreed answers in thecorpus also, but they are specially marked. Such questions could be treated as having the nullanswerand used to train models that are aware of poorly posed questions.As a final cleanup step we combine answer spans that are less than 3 words apart (punctuation isdiscounted). We find that 5.68% of answers consist in multiple spans, while 71.3% of multi-spans arewithin the 3-word threshold. Looking more closely at the data reveals that the multi-span answersoften represent lists. These may present an interesting challenge for comprehension models movingforward.4 D ATA ANALYSISWe provide a thorough analysis of NewsQA to demonstrate its challenge and its usefulness as amachine comprehension benchmark. The analysis focuses on the types of answers that appear in thedataset and the various forms of reasoning required to solve it.14.1 A NSWER TYPESFollowing Rajpurkar et al. (2016), we categorize answers based on their linguistic type (see Table 1).This categorization relies on Stanford CoreNLP to generate constituency parses, POS tags, and NER1Additional statistics are available at http://datasets.maluuba.com/NewsQA/stats .4Under review as a conference paper at ICLR 2017Table 1: The variety of answer types appearing in NewsQA , with proportion statistics and examples.Answer type Example Proportion (%)Date/Time March 12, 2008 2.9Numeric 24.3 million 9.8Person Ludwig van Beethoven 14.8Location Torrance, California 7.8Other Entity Pew Hispanic Center 5.8Common Noun Phrase federal prosecutors 22.2Adjective Phrase 5-hour 1.9Verb Phrase suffered minor damage 1.4Clause Phrase trampling on human rights 18.3Prepositional Phrase in the attack 3.8Other nearly half 11.2tags for answer spans (see Rajpurkar et al. (2016) for more details). From the table we see that themajority of answers (22.2%) are common noun phrases. Thereafter, answers are fairly evenly spreadamong the clause phrase (18.3%), person (14.8%), numeric (9.8%), and other (11.2%) types. Clearly,answers in NewsQA are linguistically diverse.The proportions in Table 1 only account for cases when an answer span exists. The complement ofthis set comprises questions with an agreed nullanswer (9.5% of the full corpus) and answers withoutagreement after validation (4.5% of the full corpus).4.2 R EASONING TYPESThe forms of reasoning required to solve NewsQA directly influence the abilities that models willlearn from the dataset. We stratified reasoning types using a variation on the taxonomy presentedby Chen et al. (2016) in their analysis of the CNN/Daily Mail dataset. Types are as follows, inascending order of difficulty:1.Word Matching: Important words in the question exactly match words in the immediatecontext of an answer span such that a keyword search algorithm could perform well on thissubset.2.Paraphrasing: A single sentence in the article entails or paraphrases the question. Para-phrase recognition may require synonymy and word knowledge.3.Inference: The answer must be inferred from incomplete information in the article or byrecognizing conceptual overlap. 
This typically draws on world knowledge.4.Synthesis: The answer can only be inferred by synthesizing information distributed acrossmultiple sentences.5.Ambiguous/Insufficient: The question has no answer or no unique answer in the article.For both NewsQA andSQuAD , we manually labelled 1,000 examples (drawn randomly from therespective development sets) according to these types and compiled the results in Table 2. Someexamples fall into more than one category, in which case we defaulted to the more challengingtype. We can see from the table that word matching, the easiest type, makes up the largest subsetin both datasets (32.7% for NewsQA and 39.8% for SQuAD ). Paraphrasing constitutes a muchlarger proportion in SQuAD than in NewsQA (34.3% vs 27.0%), possibly a result from the explicitencouragement of lexical variety in SQuAD question sourcing. However, NewsQA significantlyoutnumbers SQuAD on the distribution of the more difficult forms of reasoning: synthesis andinference make up 33.9% of the data in contrast to 20.5% in SQuAD .5 B ASELINE MODELSWe test the performance of three comprehension systems on NewsQA : human data analysts andtwo neural models. The first neural model is the match-LSTM (mLSTM) system of Wang & Jiang5Under review as a conference paper at ICLR 2017Table 2: Reasoning mechanisms needed to answer questions. For each we show an example questionwith the sentence that contains the answer span, with words relevant to the reasoning type in bold,and the corresponding proportion in the human-evaluated subset of both NewsQA andSQuAD (1,000samples each).Reasoning ExampleProportion (%)NewsQA SQuADWord Matching Q: When were thefindings published ?S: Both sets of research findings were published Thursday ...32.7 39.8Paraphrasing Q: Who is the struggle between in Rwanda?S: The struggle pits ethnic Tutsis , supported by Rwanda, against ethnic Hutu , backed by Congo.27.0 34.3Inference Q: Who drew inspiration from presidents ?S:Rudy Ruiz says the lives of US presidents can make them positive role models for students.13.2 8.6Synthesis Q: Where isBrittanee Drexel from?S: The mother of a 17-year-old Rochester ,New York high school student ... says she did not give herdaughter permission to go on the trip. Brittanee Marie Drexel ’s mom says...20.7 11.9Ambiguous/Insufficient Q: Whose mother ismoving to the White House?S: ... Barack Obama’s mother-in-law , Marian Robinson, will join the Obamas at the family’s privatequarters at 1600 Pennsylvania Avenue. [Michelle is never mentioned]6.4 5.4(2016b). The second is a model of our own design that is computationally cheaper. We describe thesemodels below but omit the personal details of our analysts. Implementation details of the models aredescribed in Appendix A.5.1 M ATCH -LSTMThere are three stages involved in the mLSTM model. First, LSTM networks encode the documentand question (represented by GloVe word embeddings (Pennington et al., 2014)) as sequences ofhidden states. Second, an mLSTM network (Wang & Jiang, 2016a) compares the document encodingswith the question encodings. This network processes the document sequentially and at each tokenuses an attention mechanism to obtain a weighted vector representation of the question; the weightedcombination is concatenated with the encoding of the current token and fed into a standard LSTM.Finally, a Pointer Network uses the hidden states of the mLSTM to select the boundaries of theanswer span. We refer the reader to Wang & Jiang (2016a;b) for full details. 
At the time of writing,mLSTM is state-of-the-art on SQuAD (see Table 3) so it is natural to test it further on NewsQA .5.2 T HEBILINEAR ANNOTATION RE-ENCODING BOUNDARY (BARB) M ODELThe match-LSTM is computationally intensive since it computes an attention over the entire questionat each document token in the recurrence. To facilitate faster experimentation with NewsQA wedeveloped a lighter-weight model (BARB) that achieves similar results on SQuAD2. Our modelconsists in four stages:Encoding All words in the document and question are mapped to real-valued vectors using theGloVe embedding matrix W2RjVjd. This yields d1; : : : ;dn2Rdandq1; : : : ;qm2Rd.A bidirectional GRU network (Bahdanau et al., 2015) takes in diand encodes contextual stateshi2RD1for the document. The same encoder is applied to qjto derive contextual states kj2RD1for the question.3Bilinear Annotation Next we compare the document and question encodings using a set of Cbilinear transformations,gij=hTiT[1:C]kj;Tc2RD1D1;gij2RC;which we use to produce an (nmC)-dimensional tensor of annotation scores, G= [gij]. Wetake the maximum over the question-token (second) dimension and call the columns of the resulting2With the configurations for the results reported in Section 6.2, one epoch of training on NewsQA takes about3.9k seconds for BARB and 8.1k seconds for mLSTM .3A bidirectional GRU concatenates the hidden states of two GRU networks running in opposite directions.Each of these has hidden size12D1.6Under review as a conference paper at ICLR 2017matrix gi2RC. We use this matrix as an annotation over the document word dimension. Contrastingthe multiplicative application of attention vectors, this annotation matrix is to be concatenated to theencoder RNN input in the re-encoding stage.Re-encoding For each document word, the input of the re-encoding RNN (another biGRU network)consists of three components: the document encodings hi, the annotation vectors gi, and a binaryfeature qiindicating whether the document word appears in the question. The resulting vectorsfi= [hi;gi;qi]are fed into the re-encoding RNN to produce D2-dimensional encodings eias inputin the boundary-pointing stage.Boundary pointing Finally, we search for the boundaries of the answer span using a convolutionalnetwork (in a process similar to edge detection). Encodings eiare arranged in matrix E2RD2n.Eis convolved with a bank of nffilters, F`k2RD2w, where wis the filter width, kindexes thedifferent filters, and `indexes the layer of the convolutional network. Each layer has the same numberof filters of the same dimensions. We add a bias term and apply a nonlinearity (ReLU) followingeach convolution, with the result an (nfn)-dimensional matrix B`.We use two convolutional layers in the boundary-pointing stage. Given B1andB2, the answerspan’s start- and end-location probabilities are computed using p(s)/expvTsB1+bsandp(e)/expvTeB2+be, respectively. We also concatenate p(s)to the input of the second convolutionallayer (along the nf-dimension) so as to condition the end-boundary pointing on the start-boundary.Vectors vs,ve2Rnfand scalars bs,be2Rare trainable parameters.We also provide an intermediate level of “guidance” to the annotation mechanism by first reducingthe feature dimension CinGwith mean-pooling, then maximizing the softmax probabilities in theresulting ( n-dimensional) vector corresponding to the answer word positions in each document. 
Thisauxiliary task is observed empirically to improve performance.6 E XPERIMENTS46.1 H UMAN EVALUATIONWe tested four English speakers (three native and one near-native) on a total of 1,000 questions fromtheNewsQA development set. As given in Table 3, they averaged 0.694 in F1, which likely representsa ceiling for machine performance. Our students’ exact match (EM) scores are relatively low at 0.465.This is because in many cases there are multiple ways to select semantically equivalent answers, e.g.,“1996” versus “in 1996”. We also compared human performance on the answers that had agreementwith and without validation, finding a difference of only 1.4 percentage points F1. This suggests ourvalidation stage yields good-quality answers.The original SQuAD evaluation of human performance compares separate answers given by crowd-workers; for a closer comparison with NewsQA , we replicated our human test on the same numberof validation data (1,000). We measured their answers against the second group of crowdsourcedresponses in SQuAD ’s development set, as in Rajpurkar et al. (2016). Our students scored 0.807 inF1.6.2 M ODEL PERFORMANCEPerformance of the baseline models and humans is measured by EM and F1 with the official evaluationscript from SQuAD and listed in Table 3. Unless otherwise stated, hyperparameters are determinedbyhyperopt (Appendix A). The gap between human and machine performance on NewsQA isa striking 0.198 points F1 — much larger than the gap on SQuAD (0.098) under the same humanevaluation scheme. The gaps suggest a large margin for improvement with automated methods.Figure 1 stratifies model (BARB) performance according to answer type (left) and reasoning type(right) as defined in Sections 4.1 and 4.2, respectively. The answer-type stratification suggests that4All experiments in this section use the subset of NewsQA dataset with answer agreements (92,549 samplesfor training, 5,166 for validation, and 5,126 for testing). We leave the challenge of identifying the unanswerablequestions for future work.7Under review as a conference paper at ICLR 2017Table 3: Performance of several methods and humans on the SQuAD andNewsQA datasets. Su-perscript 1 indicates the results are taken from Rajpurkar et al. (2016), and 2 from Wang & Jiang(2016b).SQuAD Exact Match F1Model Dev Test Dev TestRandom10.11 0.13 0.41 0.43mLSTM20.591 0.595 0.700 0.703BARB 0.591 - 0.709 -Human10.803 0.770 0.905 0.868Human (ours) 0.650 - 0.807 -NewsQA Exact Match F1Model Dev Test Dev TestRandom 0.00 0.00 0.30 0.30mLSTM 0.344 0.349 0.496 0.500BARB 0.361 0.341 0.496 0.482Human 0.465 - 0.694 -Date/timeNumericPersonAdjective PhraseLocationPrepositional PhraseCommon Noun PhraseOtherOther entityClause PhraseVerb Phrase00.20.40.60.8F1EMWord MatchingParaphrasingInferenceSynthesisAmbiguous/ Insufficient0.0000.1500.3000.4500.6000.7500.900NewsQASQuADFigure 1: Left: BARB performance (F1 and EM) stratified by answer type on the full developmentset of NewsQA .Right : BARB performance (F1) stratified by reasoning type on the human-assessedsubset on both NewsQA andSQuAD . Error bars indicate performance differences between BARBand human annotators.the model is better at pointing to named entities compared to other types of answers. The reasoning-type stratification, on the other hand, shows that questions requiring inference andsynthesis are,not surprisingly, more difficult for the model. Consistent with observations in Table 3, stratifiedperformance on NewsQA is significantly lower than on SQuAD . 
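(The F1 scores reported here are span-level token-overlap scores computed with the official SQuAD evaluation script; a simplified sketch of that measure, using only whitespace tokenization and lowercasing rather than the script's full normalization of punctuation and articles, is given below.)

```python
# Simplified sketch of span-level F1 between a predicted and a reference answer span.
from collections import Counter

def span_f1(prediction, ground_truth):
    pred_tokens = prediction.lower().split()
    gold_tokens = ground_truth.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)   # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

Exact match (EM) is the stricter variant that requires the normalized prediction and reference strings to be identical.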
The difference is smallest on wordmatching and largest on synthesis. We postulate that the longer stories in NewsQA make synthesizinginformation from separate sentences more difficult, since the relevant sentences may be farther apart.This requires the model to track longer-term dependencies.6.3 S ENTENCE -LEVEL SCORINGWe propose a simple sentence-level subtask as an additional quantitative demonstration of the relativedifficulty of NewsQA . Given a document and a question, the goal is to find the sentence containingthe answer span. We hypothesize that simple techniques like word-matching are inadequate to thistask owing to the more involved reasoning required by NewsQA .We employ a technique that resembles inverse document frequency ( idf), which we call inversesentence frequency ( isf). Given a sentence Sifrom an article and its corresponding question Q, theisfscore is given by the sum of the idfscores of the words common to SiandQ(each sentence istreated as a document for the idfcomputation). The sentence with the highest isfis taken as theanswer sentenceS, that is,S= arg maxiXw2Si\Qisf(w):Theisfmethod achieves an impressive 79.4% sentence-level accuracy on SQuAD ’s development setbut only 35.4% accuracy on NewsQA ’s development set, highlighting the comparative difficulty of thelatter. To eliminate the difference in article length as a possible cause of the performance difference,we also artificially increased the article lengths in SQuAD by concatenating adjacent SQuAD articlesfrom the same Wikipedia document. Accuracy decreases as expected with the increased SQuADarticle length, yet remains significantly higher than that on NewsQA with comparable or even largerarticle length (Table 4).8Under review as a conference paper at ICLR 2017Table 4: Sentence-level accuracy on artificially-lengthened SQuAD documents.SQuAD NewsQA# documents 1 3 5 7 9 1Avg # sentences 4.9 14.3 23.2 31.8 40.3 30.7isf 79.6 74.9 73.0 72.3 71.0 35.47 C ONCLUSIONWe have introduced a challenging new comprehension dataset: NewsQA . We collected the 100,000+examples of NewsQA using teams of crowdworkers, who variously read CNN articles or highlights,posed questions about them, and determined answers. Our methodology yields diverse answer typesand a significant proportion of questions that require some reasoning ability to solve. This makesthe corpus challenging, as confirmed by the large performance gap between humans and deep neuralmodels (0.198 in F1). By its size and complexity, NewsQA makes a significant extension to theexisting body of comprehension datasets. We hope that our corpus will spur further advances inmachine comprehension and guide the development of literate artificial intelligence.ACKNOWLEDGMENTSThe authors would like to thank Ça ̆glar Gülçehre, Sandeep Subramanian and Saizheng Zhang forhelpful discussions, and Pranav Subramani for the graphs.REFERENCESDzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointlylearning to align and translate. ICLR , 2015.Ondrej Bajgar, Rudolf Kadlec, and Jan Kleindienst. Embracing data abundance: Booktest dataset forreading comprehension. arXiv preprint arXiv:1610.00956 , 2016.J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y . Bengio. Theano: a CPU and GPU math expression compiler. In In Proc. of SciPy ,2010.Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the cnn / dailymail reading comprehension task. 
In Association for Computational Linguistics (ACL) , 2016.François Chollet. keras. https://github.com/fchollet/keras , 2015.Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neuralnetworks. In Aistats , volume 9, pp. 249–256, 2010.Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, MustafaSuleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in NeuralInformation Processing Systems , pp. 1684–1692, 2015.Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Readingchildren’s books with explicit memory representations. ICLR , 2016.Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with theattention sum reader network. arXiv preprint arXiv:1603.01547 , 2016.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR , 2015.Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neuralnetworks. ICML (3) , 28:1310–1318, 2013.Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for wordrepresentation. In EMNLP , volume 14, pp. 1532–43, 2014.9Under review as a conference paper at ICLR 2017Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions formachine comprehension of text. arXiv preprint arXiv:1606.05250 , 2016.Matthew Richardson, Christopher JC Burges, and Erin Renshaw. Mctest: A challenge dataset for theopen-domain machine comprehension of text. In EMNLP , volume 1, pp. 2, 2013.Mrinmaya Sachan, Avinava Dubey, Eric P Xing, and Matthew Richardson. Learning answerentailingstructures for machine comprehension. In Proceedings of ACL , 2015.Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamicsof learning in deep linear neural networks. arXiv preprint arXiv:1312.6120 , 2013.Alessandro Sordoni, Philip Bachman, and Yoshua Bengio. Iterative alternating neural attention formachine reading. arXiv preprint arXiv:1606.02245 , 2016.Adam Trischler, Zheng Ye, Xingdi Yuan, Jing He, Philip Bachman, and Kaheer Suleman. A parallel-hierarchical model for machine comprehension on sparse data. In Proceedings of the 54th AnnualMeeting of the Association for Computational Linguistics , 2016a.Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehensionwith the epireader. In EMNLP , 2016b.Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. Machine comprehension with syntax,frames, and semantics. In Proceedings of ACL, Volume 2: Short Papers , pp. 700, 2015.Shuohang Wang and Jing Jiang. Learning natural language inference with lstm. NAACL , 2016a.Shuohang Wang and Jing Jiang. Machine comprehension using match-lstm and answer pointer. arXivpreprint arXiv:1608.07905 , 2016b.10Under review as a conference paper at ICLR 2017APPENDICESA I MPLEMENTATION DETAILSBoth mLSTM and BARB are implemented with the Keras framework (Chollet, 2015) using theTheano (Bergstra et al., 2010) backend. Word embeddings are initialized using GloVe vectors(Pennington et al., 2014) pre-trained on the 840-billion Common Crawl corpus. The word embeddingsare not updated during training. Embeddings for out-of-vocabulary words are initialized with zero.For both models, the training objective is to maximize the log likelihood of the boundary pointers.Optimization is performed using stochastic gradient descent (with a batch-size of 32) with the ADAMoptimizer (Kingma & Ba, 2015). 
The initial learning rate is 0.003 for mLSTM and 0.0005 for BARB.The learning rate is decayed by a factor of 0.7 if validation loss does not decrease at the end of eachepoch. Gradient clipping (Pascanu et al., 2013) is applied with a threshold of 5.Parameter tuning is performed on both models using hyperopt5. For each model, configurationsfor the best observed performance are as follows:mLSTMBoth the pre-processing layer and the answer-pointing layer use bi-directional RNN with a hiddensize of 192. These settings are consistent with those used by Wang & Jiang (2016b).Model parameters are initialized with either the normal distribution ( N(0;0:05)) or the orthogonalinitialization (O, Saxe et al. 2013) in Keras. All weight matrices in the LSTMs are initialized with O.In the Match-LSTM layer, Wq,Wp, andWrare initialized with O,bpandware initialized with N,andbis initialized as 1.In the answer-pointing layer, VandWaare initialized with O,baandvare initialized with N, andcis initialized as 1.BARBFor BARB, the following hyperparameters are used on both SQuAD andNewsQA :d= 300 ,D1=128,C= 64 ,D2= 256 ,w= 3, andnf= 128 . Weight matrices in the GRU, the bilinear models, aswell as the boundary decoder ( vsandve) are initialized with O. The filter weights in the boundarydecoder are initialized with glorot_uniform (Glorot & Bengio 2010, default in Keras). The bilinearbiases are initialized with N, and the boundary decoder biases are initialized with 0.B D ATA COLLECTION USER INTERFACEHere we present the user interfaces used in question sourcing, answer sourcing, and question/answervalidation.5https://github.com/hyperopt/hyperopt11Under review as a conference paper at ICLR 2017Figure 2: Examples of user interfaces for question sourcing, answer sourcing, and validation.12Under review as a conference paper at ICLR 2017Figure 3: Question sourcing instructions for the crowdworkers.13
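As a concrete illustration of the sentence-level isf baseline described in Section 6.3 above, the following minimal sketch scores each sentence by the summed idf of the words it shares with the question and returns the highest-scoring sentence index. Whitespace tokenization, lowercasing, and the particular idf variant (log(N/df), with each sentence treated as a document) are assumptions; the paper does not specify these details.

```python
# Sketch of the inverse-sentence-frequency (isf) baseline: S* = argmax_i sum_{w in S_i ∩ Q} isf(w).
import math

def isf_answer_sentence(sentences, question):
    """sentences: list of article sentences (str); question: str. Returns argmax index."""
    sent_tokens = [set(s.lower().split()) for s in sentences]
    q_tokens = set(question.lower().split())
    n = len(sentences)
    df = {}
    for toks in sent_tokens:
        for w in toks:
            df[w] = df.get(w, 0) + 1
    idf = {w: math.log(n / c) for w, c in df.items()}       # sentence-level idf
    scores = [sum(idf[w] for w in toks & q_tokens) for toks in sent_tokens]
    return scores.index(max(scores))
```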
B11QX774e
ry3iBFqgl
ICLR.cc/2017/conference/-/paper489/official/review
{"title": "potentially great dataset with some flaws", "rating": "6: Marginally above acceptance threshold", "review": "It would seem that the shelf life of a dataset has decreased rapidly in recent literature. SQuAD dataset has been heavily pursued as soon as it hit online couple months ago, the best performance on their leaderboard now reaching to 82%. This is rather surprising when taking into account the fact that the formal conference presentation of the dataset took place only a month ago at EMNLP\u201916, and that the reported machine performance (at the time of paper submission) was only at 51%. One reasonable speculation is that the dataset may have not been hard enough.\n\nNewsQA, the paper in submission, aims to address this concern by presenting a dataset of a comparable scale created through different QA collection strategies. Most notably, the authors solicit questions without requiring answers from the same turkers, in order to promote more diverse and hard-to-answer questions. Another notable difference is that the questions are gathered without showing the content of the news articles, and the dataset makes use of a bigger subset of CNN/Daily corpus (12K / 90K), as opposed to a much smaller subset (500 / 90K) used by SQuAD.\n\nIn sum, I think NewsQA dataset presents an effort to construct a harder, large-scale reading comprehension challenge, a recently hot research topic for which we don\u2019t yet have satisfying datasets. While not without its own weaknesses, I think this dataset presents potential values compared to what are available out there today.\n\nThat said, the paper does read like it was prepared in a hurry, as there are numerous small things that the authors could have done better. As a result, I do wonder about the quality of the dataset. For one, human performance of SQuAD measured by the authors (70.5 - 82%) is lower than that reported by SQuAD (80.3 - 90.5%). I think this sort of difference can easily happen depending on the level of carefulness the annotators can maintain. After all, not all humans have the same level of carefulness or even the same level of reading comprehension. I think it\u2019d be the best if the authors can try to explain the reason behind these differences, and if possible, perform a more careful measurement of human performance. If anything, I don\u2019t think it looks favorable for NewsQA if the human performance is only at the level of 74.9%, as it looks as if the difficulty of the dataset comes mainly from the potential noise from the QA collection process, which implies that the low model performance could result from not necessarily because of the difficulty of the comprehension and reasoning, but because of incorrect answers given by human annotators.\n\nI\u2019m also not sure whether the design choice of not presenting the news article when soliciting the questions was a good one. I can imagine that people might end up asking similar generic questions when not enough context has been presented. 
Perhaps taking a hybrid, what I would like to suggest is to present news articles where some sentences or phrases are randomly redacted, so that the question generators can have a bit more context while not having the full material in front of them.\n\nYet another way of encouraging the turkers from asking too trivial questions is to engage an automatic QA system on the fly \u2014 turkers must construct a QA pair for which an existing state-of-the-art system cannot answer correctly.\n\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
NEWSQA: A MACHINE COMPREHENSION DATASET
["Adam Trischler", "Tong Wang", "Xingdi Yuan", "Justin Harris", "Alessandro Sordoni", "Philip Bachman", "Kaheer Suleman"]
We present NewsQA, a challenging machine comprehension dataset of over 100,000 question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting in spans of text from the corresponding articles. We collect this dataset through a four- stage process designed to solicit exploratory questions that require reasoning. A thorough analysis confirms that NewsQA demands abilities beyond simple word matching and recognizing entailment. We measure human performance on the dataset and compare it to several strong neural models. The performance gap between humans and machines (25.3% F1) indicates that significant progress can be made on NewsQA through future research. The dataset is freely available at datasets.maluuba.com/NewsQA.
["Natural language processing", "Deep learning"]
https://openreview.net/forum?id=ry3iBFqgl
https://openreview.net/pdf?id=ry3iBFqgl
https://openreview.net/forum?id=ry3iBFqgl&noteId=B11QX774e
Under review as a conference paper at ICLR 2017NEWSQA: A M ACHINE COMPREHENSION DATASETAdam TrischlerTong WangXingdi YuanJustin HarrisAlessandro Sordoni Philip Bachman Kaheer Suleman{adam.trischler, tong.wang, eric.yuan, justin.harris,alessandro.sordoni, phil.bachman, k.suleman}@maluuba.comMaluuba ResearchMontréal, Québec, CanadaABSTRACTWe present NewsQA , a challenging machine comprehension dataset of over 100,000question-answer pairs. Crowdworkers supply questions and answers based on aset of over 10,000 news articles from CNN, with answers consisting in spansof text from the corresponding articles. We collect this dataset through a four-stage process designed to solicit exploratory questions that require reasoning. Athorough analysis confirms that NewsQA demands abilities beyond simple wordmatching and recognizing entailment. We measure human performance on thedataset and compare it to several strong neural models. The performance gapbetween humans and machines (0.198 in F1) indicates that significant progress canbe made on NewsQA through future research. The dataset is freely available atdatasets.maluuba.com/NewsQA .1 I NTRODUCTIONAlmost all human knowledge is recorded in the language of text. As such, comprehension of writtenlanguage by machines, at a near-human level, would enable a broad class of artificial intelligenceapplications. In human students we evaluate reading comprehension by posing questions basedon a text passage and then assessing a student’s answers. Such comprehension tests are appealingbecause they are objectively gradable and may measure a range of important abilities, from basicunderstanding to causal reasoning to inference (Richardson et al., 2013). To teach literacy to machines,the research community has taken a similar approach with machine comprehension (MC).Recent years have seen the release of a host of MC datasets. Generally, these consist of (document,question, answer) triples to be used in a supervised learning framework. Existing datasets vary in size,difficulty, and collection methodology; however, as pointed out by Rajpurkar et al. (2016), most sufferfrom one of two shortcomings: those that are designed explicitly to test comprehension (Richardsonet al., 2013) are too small for training data-intensive deep learning models, while those that aresufficiently large for deep learning (Hermann et al., 2015; Hill et al., 2016; Bajgar et al., 2016) aregenerated synthetically, yielding questions that are not posed in natural language and that may nottest comprehension directly (Chen et al., 2016). More recently, Rajpurkar et al. (2016) sought toovercome these deficiencies with their crowdsourced dataset, SQuAD .Here we present a challenging new largescale dataset for machine comprehension: NewsQA .NewsQAcontains 119,633 natural language questions posed by crowdworkers on 12,744 news articles fromCNN. Answers to these questions consist in spans of text within the corresponding article highlightedby a distinct set of crowdworkers. To build NewsQA we utilized a four-stage collection processdesigned to encourage exploratory, curiosity-based questions that reflect human information seeking.CNN articles were chosen as the source material because they have been used in the past (Hermannet al., 2015) and, in our view, machine comprehension systems are particularly suited to high-volume,rapidly changing information sources like news.These three authors contributed equally.1Under review as a conference paper at ICLR 2017As Trischler et al. (2016a), Chen et al. 
(2016), and others have argued, it is important for datasetsto be sufficiently challenging to teach models the abilities we wish them to learn. Thus, in linewith Richardson et al. (2013), our goal with NewsQA was to construct a corpus of questions thatnecessitates reasoning mechanisms, such as synthesis of information across different parts of anarticle. We designed our collection methodology explicitly to capture such questions.The challenging characteristics of NewsQA that distinguish it from most previous comprehensiontasks are as follows:1. Answers are spans of arbitrary length within an article, rather than single words or entities.2. Some questions have no answer in the corresponding article (the nullspan).3. There are no candidate answers from which to choose.4.Our collection process encourages lexical and syntactic divergence between questions andanswers.5.A significant proportion of questions requires reasoning beyond simple word- and context-matching (as shown in our analysis).In this paper we describe the collection methodology for NewsQA , provide a variety of statistics tocharacterize it and contrast it with previous datasets, and assess its difficulty. In particular, we measurehuman performance and compare it to that of two strong neural-network baselines. Unsurprisingly,humans significantly outperform the models we designed and assessed, achieving an F1 score of0.694 versus 0.496 for the best-performing machine. We hope that this corpus will spur furtheradvances on the challenging task of machine comprehension.2 R ELATED DATASETSNewsQA follows in the tradition of several recent comprehension datasets. These vary in size,difficulty, and collection methodology, and each has its own distinguishing characteristics. We agreewith Bajgar et al. (2016) who have said “models could certainly benefit from as diverse a collectionof datasets as possible.” We discuss this collection below.2.1 MCT ESTMCTest (Richardson et al., 2013) is a crowdsourced collection of 660 elementary-level children’sstories with associated questions and answers. The stories are fictional, to ensure that the answer mustbe found in the text itself, and carefully limited to what a young child can understand. Each questioncomes with a set of 4 candidate answers that range from single words to full explanatory sentences.The questions are designed to require rudimentary reasoning and synthesis of information acrosssentences, making the dataset quite challenging. This is compounded by the dataset’s size, whichlimits the training of expressive statistical models. Nevertheless, recent comprehension models haveperformed well on MCTest (Sachan et al., 2015; Wang et al., 2015), including a highly structuredneural model (Trischler et al., 2016a). These models all rely on access to the small set of candidateanswers, a crutch that NewsQA does not provide.2.2 CNN/D AILY MAILTheCNN/Daily Mail corpus (Hermann et al., 2015) consists of news articles scraped from thoseoutlets with corresponding cloze-style questions. Cloze questions are constructed syntheticallyby deleting a single entity from abstractive summary points that accompany each article (writtenpresumably by human authors). As such, determining the correct answer relies mostly on recognizingtextual entailment between the article and the question. 
The named entities within an article areidentified and anonymized in a preprocessing step and constitute the set of candidate answers; contrastthis with NewsQA in which answers often include longer phrases and no candidates are given.Because the cloze process is automatic, it is straightforward to collect a significant amount of datato support deep-learning approaches: CNN/Daily Mail contains about 1.4 million question-answerpairs. However, Chen et al. (2016) demonstrated that the task requires only limited reasoning and, in2Under review as a conference paper at ICLR 2017fact, performance of the strongest models (Kadlec et al., 2016; Trischler et al., 2016b; Sordoni et al.,2016) nearly matches that of humans.2.3 C HILDREN ’SBOOK TESTTheChildren’s Book Test (CBT ) (Hill et al., 2016) was collected using a process similar to that ofCNN/Daily Mail . Text passages are 20-sentence excerpts from children’s books available throughProject Gutenberg; questions are generated by deleting a single word in the next ( i.e., 21st) sentence.Consequently, CBT evaluates word prediction based on context. It is a comprehension task insofar ascomprehension is likely necessary for this prediction, but comprehension may be insufficient andother mechanisms may be more important.2.4 B OOK TESTBajgar et al. (2016) convincingly argue that, because existing datasets are not large enough, we haveyet to reach the full capacity of existing comprehension models. As a remedy they present BookTest .This is an extension to the named-entity and common-noun strata of CBT that increases their sizeby over 60 times. Bajgar et al. (2016) demonstrate that training on the augmented dataset yields amodel (Kadlec et al., 2016) that matches human performance on CBT. This is impressive and suggeststhat much is to be gained from more data, but we repeat our concerns about the relevance of storyprediction as a comprehension task. We also wish to encourage more efficient learning from less data.2.5 SQ UADThe comprehension dataset most closely related to NewsQA isSQuAD (Rajpurkar et al., 2016). Itconsists of natural language questions posed by crowdworkers on paragraphs from high-PageRankWikipedia articles. As in NewsQA , each answer consists of a span of text from the related paragraphand no candidates are provided. Despite the effort of manual labelling, SQuAD ’s size is significantand amenable to deep learning approaches: 107,785 question-answer pairs based on 536 articles.SQuAD is a challenging comprehension task in which humans far outperform machines. Theauthors measured human accuracy at 0.905 in F1 (we measured human F1 at 0.807 using a differentmethodology), whereas at the time of the writing, the strongest published model to date achieves only0.700 in F1 (Wang & Jiang, 2016b).3 C OLLECTION METHODOLOGYWe collected NewsQA through a four-stage process: article curation, question sourcing, answersourcing, and validation. We also applied a post-processing step with answer agreement consolidationand span merging to enhance the usability of the dataset.3.1 A RTICLE CURATIONWe retrieve articles from CNN using the script created by Hermann et al. (2015) for CNN/DailyMail . From the returned set of 90,266 articles, we select 12,744 uniformly at random. These cover awide range of topics that includes politics, economics, and current events. 
Articles are partitioned atrandom into a training set (90%), a development set (5%), and a test set (5%).3.2 Q UESTION SOURCINGIt was important to us to collect challenging questions that could not be answered using straightforwardword- or context-matching. Like Richardson et al. (2013) we want to encourage reasoning incomprehension models. We are also interested in questions that, in some sense, model humancuriosity and reflect actual human use-cases of information seeking. Along a similar line, we considerit an important (though as yet overlooked) capacity of a comprehension model to recognize whengiven information is inadequate, so we are also interested in questions that may not have sufficientevidence in the text. Our question sourcing stage was designed to solicit questions of this nature, anddeliberately separated from the answer sourcing stage for the same reason.3Under review as a conference paper at ICLR 2017Questioners (a distinct set of crowdworkers) see only a news article’s headline and its summarypoints (also available from CNN); they do not see the full article itself. They are asked to formulatea question from this incomplete information. This encourages curiosity about the contents of thefull article and prevents questions that are simple reformulations of sentences in the text. It alsoincreases the likelihood of questions whose answers do not exist in the text. We reject questions thathave significant word overlap with the summary points to ensure that crowdworkers do not treat thesummaries as mini-articles, and further discouraged this in the instructions. During collection eachQuestioner is solicited for up to three questions about an article. They are provided with positive andnegative examples to prompt and guide them (detailed instructions are shown in Figure 3).3.3 A NSWER SOURCINGA second set of crowdworkers ( Answerers ) provide answers. Although this separation of questionand answer increases the overall cognitive load, we hypothesized that unburdening Questioners inthis way would encourage more complex questions. Answerers receive a full article along with acrowdsourced question and are tasked with determining the answer. They may also reject the questionas nonsensical, or select the nullanswer if the article contains insufficient information. Answers aresubmitted by clicking on and highlighting words in the article while instructions encourage the setof answer words to consist in a single continuous span (again, we give an example prompt in theAppendix). For each question we solicit answers from multiple crowdworkers (avg. 2.73) with theaim of achieving agreement between at least two Answerers.3.4 V ALIDATIONCrowdsourcing is a powerful tool but it is not without peril (collection glitches; uninterested ormalicious workers). To obtain a dataset of the highest possible quality we use a validation processthat mitigates some of these issues. In validation, a third set of crowdworkers sees the full article, aquestion, and the set of unique answers to that question. We task these workers with choosing thebest answer from the candidate set or rejecting all answers. Each article-question pair is validated byan average of 2.48 crowdworkers. 
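As a concrete illustration of the question filter mentioned above ("We reject questions that have significant word overlap with the summary points"), here is a minimal sketch in Python. The paper does not specify how overlap is measured, so the word tokenization and the 0.5 threshold below are assumptions.

# Hypothetical overlap filter; threshold and tokenization are assumptions.
import re

def has_significant_overlap(question, summary_points, threshold=0.5):
    q_tokens = set(re.findall(r"\w+", question.lower()))
    if not q_tokens:
        return False
    for point in summary_points:
        p_tokens = set(re.findall(r"\w+", point.lower()))
        if len(q_tokens & p_tokens) / len(q_tokens) >= threshold:
            return True  # reject: the question largely copies a summary point
    return False

A filter of this kind could run at submission time, prompting the Questioner to rephrase rather than silently discarding the question.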
Validation was used on those questions without answer-agreementafter the previous stage, amounting to 43.2% of all questions.3.5 A NSWER MARKING AND CLEANUPAfter validation, 86.0% of all questions in NewsQA have answers agreed upon by at least two separatecrowdworkers—either at the initial answer sourcing stage or in the top-answer selection. Thisimproves the dataset’s quality. We choose to include the questions without agreed answers in thecorpus also, but they are specially marked. Such questions could be treated as having the nullanswerand used to train models that are aware of poorly posed questions.As a final cleanup step we combine answer spans that are less than 3 words apart (punctuation isdiscounted). We find that 5.68% of answers consist in multiple spans, while 71.3% of multi-spans arewithin the 3-word threshold. Looking more closely at the data reveals that the multi-span answersoften represent lists. These may present an interesting challenge for comprehension models movingforward.4 D ATA ANALYSISWe provide a thorough analysis of NewsQA to demonstrate its challenge and its usefulness as amachine comprehension benchmark. The analysis focuses on the types of answers that appear in thedataset and the various forms of reasoning required to solve it.14.1 A NSWER TYPESFollowing Rajpurkar et al. (2016), we categorize answers based on their linguistic type (see Table 1).This categorization relies on Stanford CoreNLP to generate constituency parses, POS tags, and NER1Additional statistics are available at http://datasets.maluuba.com/NewsQA/stats .4Under review as a conference paper at ICLR 2017Table 1: The variety of answer types appearing in NewsQA , with proportion statistics and examples.Answer type Example Proportion (%)Date/Time March 12, 2008 2.9Numeric 24.3 million 9.8Person Ludwig van Beethoven 14.8Location Torrance, California 7.8Other Entity Pew Hispanic Center 5.8Common Noun Phrase federal prosecutors 22.2Adjective Phrase 5-hour 1.9Verb Phrase suffered minor damage 1.4Clause Phrase trampling on human rights 18.3Prepositional Phrase in the attack 3.8Other nearly half 11.2tags for answer spans (see Rajpurkar et al. (2016) for more details). From the table we see that themajority of answers (22.2%) are common noun phrases. Thereafter, answers are fairly evenly spreadamong the clause phrase (18.3%), person (14.8%), numeric (9.8%), and other (11.2%) types. Clearly,answers in NewsQA are linguistically diverse.The proportions in Table 1 only account for cases when an answer span exists. The complement ofthis set comprises questions with an agreed nullanswer (9.5% of the full corpus) and answers withoutagreement after validation (4.5% of the full corpus).4.2 R EASONING TYPESThe forms of reasoning required to solve NewsQA directly influence the abilities that models willlearn from the dataset. We stratified reasoning types using a variation on the taxonomy presentedby Chen et al. (2016) in their analysis of the CNN/Daily Mail dataset. Types are as follows, inascending order of difficulty:1.Word Matching: Important words in the question exactly match words in the immediatecontext of an answer span such that a keyword search algorithm could perform well on thissubset.2.Paraphrasing: A single sentence in the article entails or paraphrases the question. Para-phrase recognition may require synonymy and word knowledge.3.Inference: The answer must be inferred from incomplete information in the article or byrecognizing conceptual overlap. 
This typically draws on world knowledge.4.Synthesis: The answer can only be inferred by synthesizing information distributed acrossmultiple sentences.5.Ambiguous/Insufficient: The question has no answer or no unique answer in the article.For both NewsQA andSQuAD , we manually labelled 1,000 examples (drawn randomly from therespective development sets) according to these types and compiled the results in Table 2. Someexamples fall into more than one category, in which case we defaulted to the more challengingtype. We can see from the table that word matching, the easiest type, makes up the largest subsetin both datasets (32.7% for NewsQA and 39.8% for SQuAD ). Paraphrasing constitutes a muchlarger proportion in SQuAD than in NewsQA (34.3% vs 27.0%), possibly a result from the explicitencouragement of lexical variety in SQuAD question sourcing. However, NewsQA significantlyoutnumbers SQuAD on the distribution of the more difficult forms of reasoning: synthesis andinference make up 33.9% of the data in contrast to 20.5% in SQuAD .5 B ASELINE MODELSWe test the performance of three comprehension systems on NewsQA : human data analysts andtwo neural models. The first neural model is the match-LSTM (mLSTM) system of Wang & Jiang5Under review as a conference paper at ICLR 2017Table 2: Reasoning mechanisms needed to answer questions. For each we show an example questionwith the sentence that contains the answer span, with words relevant to the reasoning type in bold,and the corresponding proportion in the human-evaluated subset of both NewsQA andSQuAD (1,000samples each).Reasoning ExampleProportion (%)NewsQA SQuADWord Matching Q: When were thefindings published ?S: Both sets of research findings were published Thursday ...32.7 39.8Paraphrasing Q: Who is the struggle between in Rwanda?S: The struggle pits ethnic Tutsis , supported by Rwanda, against ethnic Hutu , backed by Congo.27.0 34.3Inference Q: Who drew inspiration from presidents ?S:Rudy Ruiz says the lives of US presidents can make them positive role models for students.13.2 8.6Synthesis Q: Where isBrittanee Drexel from?S: The mother of a 17-year-old Rochester ,New York high school student ... says she did not give herdaughter permission to go on the trip. Brittanee Marie Drexel ’s mom says...20.7 11.9Ambiguous/Insufficient Q: Whose mother ismoving to the White House?S: ... Barack Obama’s mother-in-law , Marian Robinson, will join the Obamas at the family’s privatequarters at 1600 Pennsylvania Avenue. [Michelle is never mentioned]6.4 5.4(2016b). The second is a model of our own design that is computationally cheaper. We describe thesemodels below but omit the personal details of our analysts. Implementation details of the models aredescribed in Appendix A.5.1 M ATCH -LSTMThere are three stages involved in the mLSTM model. First, LSTM networks encode the documentand question (represented by GloVe word embeddings (Pennington et al., 2014)) as sequences ofhidden states. Second, an mLSTM network (Wang & Jiang, 2016a) compares the document encodingswith the question encodings. This network processes the document sequentially and at each tokenuses an attention mechanism to obtain a weighted vector representation of the question; the weightedcombination is concatenated with the encoding of the current token and fed into a standard LSTM.Finally, a Pointer Network uses the hidden states of the mLSTM to select the boundaries of theanswer span. We refer the reader to Wang & Jiang (2016a;b) for full details. 
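The per-token attention step at the heart of the mLSTM just described can be sketched schematically as follows. This is a deliberate simplification: the actual match-LSTM scores with learned projections rather than raw dot products and feeds the concatenated vector into an LSTM cell, so treat this purely as an illustration of the data flow.

# Schematic of attention over question encodings at one document token.
# Dot-product scoring is an assumption; the real model uses learned weights.
import numpy as np

def attended_input(h_doc_t, H_question):
    # h_doc_t: (d,) encoding of the current document token
    # H_question: (m, d) encodings of the m question tokens
    scores = H_question @ h_doc_t                 # (m,) similarity scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over question tokens
    q_summary = weights @ H_question              # (d,) weighted question vector
    return np.concatenate([h_doc_t, q_summary])   # fed to the match-LSTM cell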
At the time of writing,mLSTM is state-of-the-art on SQuAD (see Table 3) so it is natural to test it further on NewsQA .5.2 T HEBILINEAR ANNOTATION RE-ENCODING BOUNDARY (BARB) M ODELThe match-LSTM is computationally intensive since it computes an attention over the entire questionat each document token in the recurrence. To facilitate faster experimentation with NewsQA wedeveloped a lighter-weight model (BARB) that achieves similar results on SQuAD2. Our modelconsists in four stages:Encoding All words in the document and question are mapped to real-valued vectors using theGloVe embedding matrix W2RjVjd. This yields d1; : : : ;dn2Rdandq1; : : : ;qm2Rd.A bidirectional GRU network (Bahdanau et al., 2015) takes in diand encodes contextual stateshi2RD1for the document. The same encoder is applied to qjto derive contextual states kj2RD1for the question.3Bilinear Annotation Next we compare the document and question encodings using a set of Cbilinear transformations,gij=hTiT[1:C]kj;Tc2RD1D1;gij2RC;which we use to produce an (nmC)-dimensional tensor of annotation scores, G= [gij]. Wetake the maximum over the question-token (second) dimension and call the columns of the resulting2With the configurations for the results reported in Section 6.2, one epoch of training on NewsQA takes about3.9k seconds for BARB and 8.1k seconds for mLSTM .3A bidirectional GRU concatenates the hidden states of two GRU networks running in opposite directions.Each of these has hidden size12D1.6Under review as a conference paper at ICLR 2017matrix gi2RC. We use this matrix as an annotation over the document word dimension. Contrastingthe multiplicative application of attention vectors, this annotation matrix is to be concatenated to theencoder RNN input in the re-encoding stage.Re-encoding For each document word, the input of the re-encoding RNN (another biGRU network)consists of three components: the document encodings hi, the annotation vectors gi, and a binaryfeature qiindicating whether the document word appears in the question. The resulting vectorsfi= [hi;gi;qi]are fed into the re-encoding RNN to produce D2-dimensional encodings eias inputin the boundary-pointing stage.Boundary pointing Finally, we search for the boundaries of the answer span using a convolutionalnetwork (in a process similar to edge detection). Encodings eiare arranged in matrix E2RD2n.Eis convolved with a bank of nffilters, F`k2RD2w, where wis the filter width, kindexes thedifferent filters, and `indexes the layer of the convolutional network. Each layer has the same numberof filters of the same dimensions. We add a bias term and apply a nonlinearity (ReLU) followingeach convolution, with the result an (nfn)-dimensional matrix B`.We use two convolutional layers in the boundary-pointing stage. Given B1andB2, the answerspan’s start- and end-location probabilities are computed using p(s)/expvTsB1+bsandp(e)/expvTeB2+be, respectively. We also concatenate p(s)to the input of the second convolutionallayer (along the nf-dimension) so as to condition the end-boundary pointing on the start-boundary.Vectors vs,ve2Rnfand scalars bs,be2Rare trainable parameters.We also provide an intermediate level of “guidance” to the annotation mechanism by first reducingthe feature dimension CinGwith mean-pooling, then maximizing the softmax probabilities in theresulting ( n-dimensional) vector corresponding to the answer word positions in each document. 
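A minimal sketch of the bilinear annotation stage described above, with the auxiliary mean-pooling guidance left out. The einsum spelling and the toy shapes are assumptions for illustration, not the authors' implementation.

# Bilinear annotation: g_ij = h_i^T T_c k_j, then max over question tokens.
import numpy as np

def bilinear_annotation(H, K, T):
    # H: (n, D1) document states, K: (m, D1) question states, T: (C, D1, D1)
    G = np.einsum('nd,cde,me->nmc', H, T, K)   # (n, m, C) annotation scores
    return G.max(axis=1)                       # (n, C) annotation per document token

rng = np.random.default_rng(0)
H = rng.normal(size=(50, 128))
K = rng.normal(size=(12, 128))
T = rng.normal(size=(64, 128, 128))
g = bilinear_annotation(H, K, T)               # (50, 64), concatenated to the re-encoder input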
Thisauxiliary task is observed empirically to improve performance.6 E XPERIMENTS46.1 H UMAN EVALUATIONWe tested four English speakers (three native and one near-native) on a total of 1,000 questions fromtheNewsQA development set. As given in Table 3, they averaged 0.694 in F1, which likely representsa ceiling for machine performance. Our students’ exact match (EM) scores are relatively low at 0.465.This is because in many cases there are multiple ways to select semantically equivalent answers, e.g.,“1996” versus “in 1996”. We also compared human performance on the answers that had agreementwith and without validation, finding a difference of only 1.4 percentage points F1. This suggests ourvalidation stage yields good-quality answers.The original SQuAD evaluation of human performance compares separate answers given by crowd-workers; for a closer comparison with NewsQA , we replicated our human test on the same numberof validation data (1,000). We measured their answers against the second group of crowdsourcedresponses in SQuAD ’s development set, as in Rajpurkar et al. (2016). Our students scored 0.807 inF1.6.2 M ODEL PERFORMANCEPerformance of the baseline models and humans is measured by EM and F1 with the official evaluationscript from SQuAD and listed in Table 3. Unless otherwise stated, hyperparameters are determinedbyhyperopt (Appendix A). The gap between human and machine performance on NewsQA isa striking 0.198 points F1 — much larger than the gap on SQuAD (0.098) under the same humanevaluation scheme. The gaps suggest a large margin for improvement with automated methods.Figure 1 stratifies model (BARB) performance according to answer type (left) and reasoning type(right) as defined in Sections 4.1 and 4.2, respectively. The answer-type stratification suggests that4All experiments in this section use the subset of NewsQA dataset with answer agreements (92,549 samplesfor training, 5,166 for validation, and 5,126 for testing). We leave the challenge of identifying the unanswerablequestions for future work.7Under review as a conference paper at ICLR 2017Table 3: Performance of several methods and humans on the SQuAD andNewsQA datasets. Su-perscript 1 indicates the results are taken from Rajpurkar et al. (2016), and 2 from Wang & Jiang(2016b).SQuAD Exact Match F1Model Dev Test Dev TestRandom10.11 0.13 0.41 0.43mLSTM20.591 0.595 0.700 0.703BARB 0.591 - 0.709 -Human10.803 0.770 0.905 0.868Human (ours) 0.650 - 0.807 -NewsQA Exact Match F1Model Dev Test Dev TestRandom 0.00 0.00 0.30 0.30mLSTM 0.344 0.349 0.496 0.500BARB 0.361 0.341 0.496 0.482Human 0.465 - 0.694 -Date/timeNumericPersonAdjective PhraseLocationPrepositional PhraseCommon Noun PhraseOtherOther entityClause PhraseVerb Phrase00.20.40.60.8F1EMWord MatchingParaphrasingInferenceSynthesisAmbiguous/ Insufficient0.0000.1500.3000.4500.6000.7500.900NewsQASQuADFigure 1: Left: BARB performance (F1 and EM) stratified by answer type on the full developmentset of NewsQA .Right : BARB performance (F1) stratified by reasoning type on the human-assessedsubset on both NewsQA andSQuAD . Error bars indicate performance differences between BARBand human annotators.the model is better at pointing to named entities compared to other types of answers. The reasoning-type stratification, on the other hand, shows that questions requiring inference andsynthesis are,not surprisingly, more difficult for the model. Consistent with observations in Table 3, stratifiedperformance on NewsQA is significantly lower than on SQuAD . 
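For reference, EM and F1 here are computed with the official SQuAD evaluation script. The rough token-level sketch below omits the script's answer normalization (lowercasing, stripping articles and punctuation), so it is an approximation rather than the exact metric.

# Approximate span-level EM and F1; normalization steps are omitted.
from collections import Counter

def exact_match(prediction, truth):
    return prediction.strip() == truth.strip()

def f1_score(prediction, truth):
    pred_tokens, true_tokens = prediction.split(), truth.split()
    common = Counter(pred_tokens) & Counter(true_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(true_tokens)
    return 2 * precision * recall / (precision + recall)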
The difference is smallest on wordmatching and largest on synthesis. We postulate that the longer stories in NewsQA make synthesizinginformation from separate sentences more difficult, since the relevant sentences may be farther apart.This requires the model to track longer-term dependencies.6.3 S ENTENCE -LEVEL SCORINGWe propose a simple sentence-level subtask as an additional quantitative demonstration of the relativedifficulty of NewsQA . Given a document and a question, the goal is to find the sentence containingthe answer span. We hypothesize that simple techniques like word-matching are inadequate to thistask owing to the more involved reasoning required by NewsQA .We employ a technique that resembles inverse document frequency ( idf), which we call inversesentence frequency ( isf). Given a sentence Sifrom an article and its corresponding question Q, theisfscore is given by the sum of the idfscores of the words common to SiandQ(each sentence istreated as a document for the idfcomputation). The sentence with the highest isfis taken as theanswer sentenceS, that is,S= arg maxiXw2Si\Qisf(w):Theisfmethod achieves an impressive 79.4% sentence-level accuracy on SQuAD ’s development setbut only 35.4% accuracy on NewsQA ’s development set, highlighting the comparative difficulty of thelatter. To eliminate the difference in article length as a possible cause of the performance difference,we also artificially increased the article lengths in SQuAD by concatenating adjacent SQuAD articlesfrom the same Wikipedia document. Accuracy decreases as expected with the increased SQuADarticle length, yet remains significantly higher than that on NewsQA with comparable or even largerarticle length (Table 4).8Under review as a conference paper at ICLR 2017Table 4: Sentence-level accuracy on artificially-lengthened SQuAD documents.SQuAD NewsQA# documents 1 3 5 7 9 1Avg # sentences 4.9 14.3 23.2 31.8 40.3 30.7isf 79.6 74.9 73.0 72.3 71.0 35.47 C ONCLUSIONWe have introduced a challenging new comprehension dataset: NewsQA . We collected the 100,000+examples of NewsQA using teams of crowdworkers, who variously read CNN articles or highlights,posed questions about them, and determined answers. Our methodology yields diverse answer typesand a significant proportion of questions that require some reasoning ability to solve. This makesthe corpus challenging, as confirmed by the large performance gap between humans and deep neuralmodels (0.198 in F1). By its size and complexity, NewsQA makes a significant extension to theexisting body of comprehension datasets. We hope that our corpus will spur further advances inmachine comprehension and guide the development of literate artificial intelligence.ACKNOWLEDGMENTSThe authors would like to thank Ça ̆glar Gülçehre, Sandeep Subramanian and Saizheng Zhang forhelpful discussions, and Pranav Subramani for the graphs.REFERENCESDzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointlylearning to align and translate. ICLR , 2015.Ondrej Bajgar, Rudolf Kadlec, and Jan Kleindienst. Embracing data abundance: Booktest dataset forreading comprehension. arXiv preprint arXiv:1610.00956 , 2016.J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y . Bengio. Theano: a CPU and GPU math expression compiler. In In Proc. of SciPy ,2010.Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the cnn / dailymail reading comprehension task. 
In Association for Computational Linguistics (ACL) , 2016.François Chollet. keras. https://github.com/fchollet/keras , 2015.Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neuralnetworks. In Aistats , volume 9, pp. 249–256, 2010.Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, MustafaSuleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in NeuralInformation Processing Systems , pp. 1684–1692, 2015.Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Readingchildren’s books with explicit memory representations. ICLR , 2016.Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with theattention sum reader network. arXiv preprint arXiv:1603.01547 , 2016.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR , 2015.Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neuralnetworks. ICML (3) , 28:1310–1318, 2013.Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for wordrepresentation. In EMNLP , volume 14, pp. 1532–43, 2014.9Under review as a conference paper at ICLR 2017Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions formachine comprehension of text. arXiv preprint arXiv:1606.05250 , 2016.Matthew Richardson, Christopher JC Burges, and Erin Renshaw. Mctest: A challenge dataset for theopen-domain machine comprehension of text. In EMNLP , volume 1, pp. 2, 2013.Mrinmaya Sachan, Avinava Dubey, Eric P Xing, and Matthew Richardson. Learning answerentailingstructures for machine comprehension. In Proceedings of ACL , 2015.Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamicsof learning in deep linear neural networks. arXiv preprint arXiv:1312.6120 , 2013.Alessandro Sordoni, Philip Bachman, and Yoshua Bengio. Iterative alternating neural attention formachine reading. arXiv preprint arXiv:1606.02245 , 2016.Adam Trischler, Zheng Ye, Xingdi Yuan, Jing He, Philip Bachman, and Kaheer Suleman. A parallel-hierarchical model for machine comprehension on sparse data. In Proceedings of the 54th AnnualMeeting of the Association for Computational Linguistics , 2016a.Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehensionwith the epireader. In EMNLP , 2016b.Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. Machine comprehension with syntax,frames, and semantics. In Proceedings of ACL, Volume 2: Short Papers , pp. 700, 2015.Shuohang Wang and Jing Jiang. Learning natural language inference with lstm. NAACL , 2016a.Shuohang Wang and Jing Jiang. Machine comprehension using match-lstm and answer pointer. arXivpreprint arXiv:1608.07905 , 2016b.10Under review as a conference paper at ICLR 2017APPENDICESA I MPLEMENTATION DETAILSBoth mLSTM and BARB are implemented with the Keras framework (Chollet, 2015) using theTheano (Bergstra et al., 2010) backend. Word embeddings are initialized using GloVe vectors(Pennington et al., 2014) pre-trained on the 840-billion Common Crawl corpus. The word embeddingsare not updated during training. Embeddings for out-of-vocabulary words are initialized with zero.For both models, the training objective is to maximize the log likelihood of the boundary pointers.Optimization is performed using stochastic gradient descent (with a batch-size of 32) with the ADAMoptimizer (Kingma & Ba, 2015). 
The initial learning rate is 0.003 for mLSTM and 0.0005 for BARB.The learning rate is decayed by a factor of 0.7 if validation loss does not decrease at the end of eachepoch. Gradient clipping (Pascanu et al., 2013) is applied with a threshold of 5.Parameter tuning is performed on both models using hyperopt5. For each model, configurationsfor the best observed performance are as follows:mLSTMBoth the pre-processing layer and the answer-pointing layer use bi-directional RNN with a hiddensize of 192. These settings are consistent with those used by Wang & Jiang (2016b).Model parameters are initialized with either the normal distribution ( N(0;0:05)) or the orthogonalinitialization (O, Saxe et al. 2013) in Keras. All weight matrices in the LSTMs are initialized with O.In the Match-LSTM layer, Wq,Wp, andWrare initialized with O,bpandware initialized with N,andbis initialized as 1.In the answer-pointing layer, VandWaare initialized with O,baandvare initialized with N, andcis initialized as 1.BARBFor BARB, the following hyperparameters are used on both SQuAD andNewsQA :d= 300 ,D1=128,C= 64 ,D2= 256 ,w= 3, andnf= 128 . Weight matrices in the GRU, the bilinear models, aswell as the boundary decoder ( vsandve) are initialized with O. The filter weights in the boundarydecoder are initialized with glorot_uniform (Glorot & Bengio 2010, default in Keras). The bilinearbiases are initialized with N, and the boundary decoder biases are initialized with 0.B D ATA COLLECTION USER INTERFACEHere we present the user interfaces used in question sourcing, answer sourcing, and question/answervalidation.5https://github.com/hyperopt/hyperopt11Under review as a conference paper at ICLR 2017Figure 2: Examples of user interfaces for question sourcing, answer sourcing, and validation.12Under review as a conference paper at ICLR 2017Figure 3: Question sourcing instructions for the crowdworkers.13
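The inverse sentence frequency (isf) baseline from Section 6.3 is simple enough to sketch end to end. Each sentence is treated as a document for the idf computation, as described in the text; the whitespace tokenization and the particular idf smoothing below are assumptions not specified by the authors.

# isf sentence scoring: pick the sentence sharing the most informative words
# with the question. Tokenization and smoothing are assumptions.
import math

def isf_best_sentence(sentences, question):
    sent_tokens = [set(s.lower().split()) for s in sentences]
    q_tokens = set(question.lower().split())
    n = len(sentences)
    def isf(w):
        df = sum(1 for toks in sent_tokens if w in toks)
        return math.log(n / (1 + df))
    scores = [sum(isf(w) for w in toks & q_tokens) for toks in sent_tokens]
    return max(range(n), key=scores.__getitem__)   # index of the predicted answer sentence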
H1h6iHzEx
ry3iBFqgl
ICLR.cc/2017/conference/-/paper489/official/review
{"title": "Review", "rating": "6: Marginally above acceptance threshold", "review": "Paper Summary: \nThis paper presents a new comprehension dataset called NewsQA dataset, containing 100,000 question-answer pairs from over 10,000 news articles from CNN. The dataset is collected through a four-stage process -- article filtering, question collection, answer collection and answer validation. Examples from the dataset are divided into different types based on answer types and reasoning required to answer questions. Human and machine performances on NewsQA are reported and compared with SQuAD.\n\nPaper Strengths: \n-- I agree that models can benefit from diverse set of datasets. This dataset is collected from news articles, hence might pose different sets of problems from current popular datasets such as SQuAD.\n-- The proposed dataset is sufficiently large for data hungry deep learning models to train. \n-- The inclusion of questions with null answers is a nice property to have.\n-- A good amount of thought has gone into formulating the four-stage data collection process.\n-- The proposed BARB model is performing as good as a published state-of-the-art model, while being much faster. \n\nPaper Weaknesses: \n-- Human evaluation is weak. Two near-native English speakers' performance on 100 examples each can hardly be a representative of the complete dataset. Also, what is the model performance on these 200 examples?\n-- Not that it is necessary for this paper, but to clearly demonstrate that this dataset is harder than SQuAD, the authors should either calculate the human performance the same way as SQuAD or calculate human performances on both NewsQA and SQuAD in some other consistent manner on large enough subsets which are good representatives of the complete datasets. Dataset from other communities such as VQA dataset (Antol et al., ICCV 2015) also use the same method as SQuAD to compute human performance. \n-- Section 3.5 says that 86% of questions have answers agreed upon by atleast 2 workers. Why is this number inconsistent with the 4.5% of questions which have answers without agreement after validation (last line in Section 4.1)?\n-- Is the same article shown to multiple Questioners? If yes, is it ensured that the Questioners asking questions about the same article are not asking the same/similar questions?\n-- Authors mention that they keep the same hyperparameters as SQuAD. What are the accuracies if the hyperparameters are tuned using a validation set from NewsQA?\n-- 500 examples which are labeled for reasoning types do not seem enough to represent the complete dataset. Also, what is the model performance on these 500 examples?\n-- Which model's performance has been shown in Figure 1?\n-- Are the two \"students\" graduate/undergraduate students or researchers?\n-- Test set seems to be very small.\n-- Suggestion: Answer validation step is nice, but maybe the dataset can be released in 2 versions -- one with all the answers collected in 3rd stage (without the validation step), and one in the current format with the validation step. \n\nPreliminary Evaluation: \nThe proposed dataset is a large scale machine comprehension dataset collected from news articles, which in my suggestion, is diverse enough from existing datasets that state-of-the-art models can definitely benefit from it. With a better human evaluation, I think this paper will make a good poster. ", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
NEWSQA: A MACHINE COMPREHENSION DATASET
["Adam Trischler", "Tong Wang", "Xingdi Yuan", "Justin Harris", "Alessandro Sordoni", "Philip Bachman", "Kaheer Suleman"]
We present NewsQA, a challenging machine comprehension dataset of over 100,000 question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting in spans of text from the corresponding articles. We collect this dataset through a four- stage process designed to solicit exploratory questions that require reasoning. A thorough analysis confirms that NewsQA demands abilities beyond simple word matching and recognizing entailment. We measure human performance on the dataset and compare it to several strong neural models. The performance gap between humans and machines (25.3% F1) indicates that significant progress can be made on NewsQA through future research. The dataset is freely available at datasets.maluuba.com/NewsQA.
["Natural language processing", "Deep learning"]
https://openreview.net/forum?id=ry3iBFqgl
https://openreview.net/pdf?id=ry3iBFqgl
https://openreview.net/forum?id=ry3iBFqgl&noteId=H1h6iHzEx
Under review as a conference paper at ICLR 2017NEWSQA: A M ACHINE COMPREHENSION DATASETAdam TrischlerTong WangXingdi YuanJustin HarrisAlessandro Sordoni Philip Bachman Kaheer Suleman{adam.trischler, tong.wang, eric.yuan, justin.harris,alessandro.sordoni, phil.bachman, k.suleman}@maluuba.comMaluuba ResearchMontréal, Québec, CanadaABSTRACTWe present NewsQA , a challenging machine comprehension dataset of over 100,000question-answer pairs. Crowdworkers supply questions and answers based on aset of over 10,000 news articles from CNN, with answers consisting in spansof text from the corresponding articles. We collect this dataset through a four-stage process designed to solicit exploratory questions that require reasoning. Athorough analysis confirms that NewsQA demands abilities beyond simple wordmatching and recognizing entailment. We measure human performance on thedataset and compare it to several strong neural models. The performance gapbetween humans and machines (0.198 in F1) indicates that significant progress canbe made on NewsQA through future research. The dataset is freely available atdatasets.maluuba.com/NewsQA .1 I NTRODUCTIONAlmost all human knowledge is recorded in the language of text. As such, comprehension of writtenlanguage by machines, at a near-human level, would enable a broad class of artificial intelligenceapplications. In human students we evaluate reading comprehension by posing questions basedon a text passage and then assessing a student’s answers. Such comprehension tests are appealingbecause they are objectively gradable and may measure a range of important abilities, from basicunderstanding to causal reasoning to inference (Richardson et al., 2013). To teach literacy to machines,the research community has taken a similar approach with machine comprehension (MC).Recent years have seen the release of a host of MC datasets. Generally, these consist of (document,question, answer) triples to be used in a supervised learning framework. Existing datasets vary in size,difficulty, and collection methodology; however, as pointed out by Rajpurkar et al. (2016), most sufferfrom one of two shortcomings: those that are designed explicitly to test comprehension (Richardsonet al., 2013) are too small for training data-intensive deep learning models, while those that aresufficiently large for deep learning (Hermann et al., 2015; Hill et al., 2016; Bajgar et al., 2016) aregenerated synthetically, yielding questions that are not posed in natural language and that may nottest comprehension directly (Chen et al., 2016). More recently, Rajpurkar et al. (2016) sought toovercome these deficiencies with their crowdsourced dataset, SQuAD .Here we present a challenging new largescale dataset for machine comprehension: NewsQA .NewsQAcontains 119,633 natural language questions posed by crowdworkers on 12,744 news articles fromCNN. Answers to these questions consist in spans of text within the corresponding article highlightedby a distinct set of crowdworkers. To build NewsQA we utilized a four-stage collection processdesigned to encourage exploratory, curiosity-based questions that reflect human information seeking.CNN articles were chosen as the source material because they have been used in the past (Hermannet al., 2015) and, in our view, machine comprehension systems are particularly suited to high-volume,rapidly changing information sources like news.These three authors contributed equally.1Under review as a conference paper at ICLR 2017As Trischler et al. (2016a), Chen et al. 
(2016), and others have argued, it is important for datasetsto be sufficiently challenging to teach models the abilities we wish them to learn. Thus, in linewith Richardson et al. (2013), our goal with NewsQA was to construct a corpus of questions thatnecessitates reasoning mechanisms, such as synthesis of information across different parts of anarticle. We designed our collection methodology explicitly to capture such questions.The challenging characteristics of NewsQA that distinguish it from most previous comprehensiontasks are as follows:1. Answers are spans of arbitrary length within an article, rather than single words or entities.2. Some questions have no answer in the corresponding article (the nullspan).3. There are no candidate answers from which to choose.4.Our collection process encourages lexical and syntactic divergence between questions andanswers.5.A significant proportion of questions requires reasoning beyond simple word- and context-matching (as shown in our analysis).In this paper we describe the collection methodology for NewsQA , provide a variety of statistics tocharacterize it and contrast it with previous datasets, and assess its difficulty. In particular, we measurehuman performance and compare it to that of two strong neural-network baselines. Unsurprisingly,humans significantly outperform the models we designed and assessed, achieving an F1 score of0.694 versus 0.496 for the best-performing machine. We hope that this corpus will spur furtheradvances on the challenging task of machine comprehension.2 R ELATED DATASETSNewsQA follows in the tradition of several recent comprehension datasets. These vary in size,difficulty, and collection methodology, and each has its own distinguishing characteristics. We agreewith Bajgar et al. (2016) who have said “models could certainly benefit from as diverse a collectionof datasets as possible.” We discuss this collection below.2.1 MCT ESTMCTest (Richardson et al., 2013) is a crowdsourced collection of 660 elementary-level children’sstories with associated questions and answers. The stories are fictional, to ensure that the answer mustbe found in the text itself, and carefully limited to what a young child can understand. Each questioncomes with a set of 4 candidate answers that range from single words to full explanatory sentences.The questions are designed to require rudimentary reasoning and synthesis of information acrosssentences, making the dataset quite challenging. This is compounded by the dataset’s size, whichlimits the training of expressive statistical models. Nevertheless, recent comprehension models haveperformed well on MCTest (Sachan et al., 2015; Wang et al., 2015), including a highly structuredneural model (Trischler et al., 2016a). These models all rely on access to the small set of candidateanswers, a crutch that NewsQA does not provide.2.2 CNN/D AILY MAILTheCNN/Daily Mail corpus (Hermann et al., 2015) consists of news articles scraped from thoseoutlets with corresponding cloze-style questions. Cloze questions are constructed syntheticallyby deleting a single entity from abstractive summary points that accompany each article (writtenpresumably by human authors). As such, determining the correct answer relies mostly on recognizingtextual entailment between the article and the question. 
The named entities within an article areidentified and anonymized in a preprocessing step and constitute the set of candidate answers; contrastthis with NewsQA in which answers often include longer phrases and no candidates are given.Because the cloze process is automatic, it is straightforward to collect a significant amount of datato support deep-learning approaches: CNN/Daily Mail contains about 1.4 million question-answerpairs. However, Chen et al. (2016) demonstrated that the task requires only limited reasoning and, in2Under review as a conference paper at ICLR 2017fact, performance of the strongest models (Kadlec et al., 2016; Trischler et al., 2016b; Sordoni et al.,2016) nearly matches that of humans.2.3 C HILDREN ’SBOOK TESTTheChildren’s Book Test (CBT ) (Hill et al., 2016) was collected using a process similar to that ofCNN/Daily Mail . Text passages are 20-sentence excerpts from children’s books available throughProject Gutenberg; questions are generated by deleting a single word in the next ( i.e., 21st) sentence.Consequently, CBT evaluates word prediction based on context. It is a comprehension task insofar ascomprehension is likely necessary for this prediction, but comprehension may be insufficient andother mechanisms may be more important.2.4 B OOK TESTBajgar et al. (2016) convincingly argue that, because existing datasets are not large enough, we haveyet to reach the full capacity of existing comprehension models. As a remedy they present BookTest .This is an extension to the named-entity and common-noun strata of CBT that increases their sizeby over 60 times. Bajgar et al. (2016) demonstrate that training on the augmented dataset yields amodel (Kadlec et al., 2016) that matches human performance on CBT. This is impressive and suggeststhat much is to be gained from more data, but we repeat our concerns about the relevance of storyprediction as a comprehension task. We also wish to encourage more efficient learning from less data.2.5 SQ UADThe comprehension dataset most closely related to NewsQA isSQuAD (Rajpurkar et al., 2016). Itconsists of natural language questions posed by crowdworkers on paragraphs from high-PageRankWikipedia articles. As in NewsQA , each answer consists of a span of text from the related paragraphand no candidates are provided. Despite the effort of manual labelling, SQuAD ’s size is significantand amenable to deep learning approaches: 107,785 question-answer pairs based on 536 articles.SQuAD is a challenging comprehension task in which humans far outperform machines. Theauthors measured human accuracy at 0.905 in F1 (we measured human F1 at 0.807 using a differentmethodology), whereas at the time of the writing, the strongest published model to date achieves only0.700 in F1 (Wang & Jiang, 2016b).3 C OLLECTION METHODOLOGYWe collected NewsQA through a four-stage process: article curation, question sourcing, answersourcing, and validation. We also applied a post-processing step with answer agreement consolidationand span merging to enhance the usability of the dataset.3.1 A RTICLE CURATIONWe retrieve articles from CNN using the script created by Hermann et al. (2015) for CNN/DailyMail . From the returned set of 90,266 articles, we select 12,744 uniformly at random. These cover awide range of topics that includes politics, economics, and current events. 
Articles are partitioned atrandom into a training set (90%), a development set (5%), and a test set (5%).3.2 Q UESTION SOURCINGIt was important to us to collect challenging questions that could not be answered using straightforwardword- or context-matching. Like Richardson et al. (2013) we want to encourage reasoning incomprehension models. We are also interested in questions that, in some sense, model humancuriosity and reflect actual human use-cases of information seeking. Along a similar line, we considerit an important (though as yet overlooked) capacity of a comprehension model to recognize whengiven information is inadequate, so we are also interested in questions that may not have sufficientevidence in the text. Our question sourcing stage was designed to solicit questions of this nature, anddeliberately separated from the answer sourcing stage for the same reason.3Under review as a conference paper at ICLR 2017Questioners (a distinct set of crowdworkers) see only a news article’s headline and its summarypoints (also available from CNN); they do not see the full article itself. They are asked to formulatea question from this incomplete information. This encourages curiosity about the contents of thefull article and prevents questions that are simple reformulations of sentences in the text. It alsoincreases the likelihood of questions whose answers do not exist in the text. We reject questions thathave significant word overlap with the summary points to ensure that crowdworkers do not treat thesummaries as mini-articles, and further discouraged this in the instructions. During collection eachQuestioner is solicited for up to three questions about an article. They are provided with positive andnegative examples to prompt and guide them (detailed instructions are shown in Figure 3).3.3 A NSWER SOURCINGA second set of crowdworkers ( Answerers ) provide answers. Although this separation of questionand answer increases the overall cognitive load, we hypothesized that unburdening Questioners inthis way would encourage more complex questions. Answerers receive a full article along with acrowdsourced question and are tasked with determining the answer. They may also reject the questionas nonsensical, or select the nullanswer if the article contains insufficient information. Answers aresubmitted by clicking on and highlighting words in the article while instructions encourage the setof answer words to consist in a single continuous span (again, we give an example prompt in theAppendix). For each question we solicit answers from multiple crowdworkers (avg. 2.73) with theaim of achieving agreement between at least two Answerers.3.4 V ALIDATIONCrowdsourcing is a powerful tool but it is not without peril (collection glitches; uninterested ormalicious workers). To obtain a dataset of the highest possible quality we use a validation processthat mitigates some of these issues. In validation, a third set of crowdworkers sees the full article, aquestion, and the set of unique answers to that question. We task these workers with choosing thebest answer from the candidate set or rejecting all answers. Each article-question pair is validated byan average of 2.48 crowdworkers. 
Validation was used on those questions without answer-agreementafter the previous stage, amounting to 43.2% of all questions.3.5 A NSWER MARKING AND CLEANUPAfter validation, 86.0% of all questions in NewsQA have answers agreed upon by at least two separatecrowdworkers—either at the initial answer sourcing stage or in the top-answer selection. Thisimproves the dataset’s quality. We choose to include the questions without agreed answers in thecorpus also, but they are specially marked. Such questions could be treated as having the nullanswerand used to train models that are aware of poorly posed questions.As a final cleanup step we combine answer spans that are less than 3 words apart (punctuation isdiscounted). We find that 5.68% of answers consist in multiple spans, while 71.3% of multi-spans arewithin the 3-word threshold. Looking more closely at the data reveals that the multi-span answersoften represent lists. These may present an interesting challenge for comprehension models movingforward.4 D ATA ANALYSISWe provide a thorough analysis of NewsQA to demonstrate its challenge and its usefulness as amachine comprehension benchmark. The analysis focuses on the types of answers that appear in thedataset and the various forms of reasoning required to solve it.14.1 A NSWER TYPESFollowing Rajpurkar et al. (2016), we categorize answers based on their linguistic type (see Table 1).This categorization relies on Stanford CoreNLP to generate constituency parses, POS tags, and NER1Additional statistics are available at http://datasets.maluuba.com/NewsQA/stats .4Under review as a conference paper at ICLR 2017Table 1: The variety of answer types appearing in NewsQA , with proportion statistics and examples.Answer type Example Proportion (%)Date/Time March 12, 2008 2.9Numeric 24.3 million 9.8Person Ludwig van Beethoven 14.8Location Torrance, California 7.8Other Entity Pew Hispanic Center 5.8Common Noun Phrase federal prosecutors 22.2Adjective Phrase 5-hour 1.9Verb Phrase suffered minor damage 1.4Clause Phrase trampling on human rights 18.3Prepositional Phrase in the attack 3.8Other nearly half 11.2tags for answer spans (see Rajpurkar et al. (2016) for more details). From the table we see that themajority of answers (22.2%) are common noun phrases. Thereafter, answers are fairly evenly spreadamong the clause phrase (18.3%), person (14.8%), numeric (9.8%), and other (11.2%) types. Clearly,answers in NewsQA are linguistically diverse.The proportions in Table 1 only account for cases when an answer span exists. The complement ofthis set comprises questions with an agreed nullanswer (9.5% of the full corpus) and answers withoutagreement after validation (4.5% of the full corpus).4.2 R EASONING TYPESThe forms of reasoning required to solve NewsQA directly influence the abilities that models willlearn from the dataset. We stratified reasoning types using a variation on the taxonomy presentedby Chen et al. (2016) in their analysis of the CNN/Daily Mail dataset. Types are as follows, inascending order of difficulty:1.Word Matching: Important words in the question exactly match words in the immediatecontext of an answer span such that a keyword search algorithm could perform well on thissubset.2.Paraphrasing: A single sentence in the article entails or paraphrases the question. Para-phrase recognition may require synonymy and word knowledge.3.Inference: The answer must be inferred from incomplete information in the article or byrecognizing conceptual overlap. 
This typically draws on world knowledge.4.Synthesis: The answer can only be inferred by synthesizing information distributed acrossmultiple sentences.5.Ambiguous/Insufficient: The question has no answer or no unique answer in the article.For both NewsQA andSQuAD , we manually labelled 1,000 examples (drawn randomly from therespective development sets) according to these types and compiled the results in Table 2. Someexamples fall into more than one category, in which case we defaulted to the more challengingtype. We can see from the table that word matching, the easiest type, makes up the largest subsetin both datasets (32.7% for NewsQA and 39.8% for SQuAD ). Paraphrasing constitutes a muchlarger proportion in SQuAD than in NewsQA (34.3% vs 27.0%), possibly a result from the explicitencouragement of lexical variety in SQuAD question sourcing. However, NewsQA significantlyoutnumbers SQuAD on the distribution of the more difficult forms of reasoning: synthesis andinference make up 33.9% of the data in contrast to 20.5% in SQuAD .5 B ASELINE MODELSWe test the performance of three comprehension systems on NewsQA : human data analysts andtwo neural models. The first neural model is the match-LSTM (mLSTM) system of Wang & Jiang5Under review as a conference paper at ICLR 2017Table 2: Reasoning mechanisms needed to answer questions. For each we show an example questionwith the sentence that contains the answer span, with words relevant to the reasoning type in bold,and the corresponding proportion in the human-evaluated subset of both NewsQA andSQuAD (1,000samples each).Reasoning ExampleProportion (%)NewsQA SQuADWord Matching Q: When were thefindings published ?S: Both sets of research findings were published Thursday ...32.7 39.8Paraphrasing Q: Who is the struggle between in Rwanda?S: The struggle pits ethnic Tutsis , supported by Rwanda, against ethnic Hutu , backed by Congo.27.0 34.3Inference Q: Who drew inspiration from presidents ?S:Rudy Ruiz says the lives of US presidents can make them positive role models for students.13.2 8.6Synthesis Q: Where isBrittanee Drexel from?S: The mother of a 17-year-old Rochester ,New York high school student ... says she did not give herdaughter permission to go on the trip. Brittanee Marie Drexel ’s mom says...20.7 11.9Ambiguous/Insufficient Q: Whose mother ismoving to the White House?S: ... Barack Obama’s mother-in-law , Marian Robinson, will join the Obamas at the family’s privatequarters at 1600 Pennsylvania Avenue. [Michelle is never mentioned]6.4 5.4(2016b). The second is a model of our own design that is computationally cheaper. We describe thesemodels below but omit the personal details of our analysts. Implementation details of the models aredescribed in Appendix A.5.1 M ATCH -LSTMThere are three stages involved in the mLSTM model. First, LSTM networks encode the documentand question (represented by GloVe word embeddings (Pennington et al., 2014)) as sequences ofhidden states. Second, an mLSTM network (Wang & Jiang, 2016a) compares the document encodingswith the question encodings. This network processes the document sequentially and at each tokenuses an attention mechanism to obtain a weighted vector representation of the question; the weightedcombination is concatenated with the encoding of the current token and fed into a standard LSTM.Finally, a Pointer Network uses the hidden states of the mLSTM to select the boundaries of theanswer span. We refer the reader to Wang & Jiang (2016a;b) for full details. 
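Before moving on, a compact sketch of the per-token attention step at the heart of the mLSTM may help. This is a simplified NumPy illustration, not the model itself: the actual mLSTM uses a learned, state-dependent scoring function and a Pointer Network decoder (Wang & Jiang, 2016a;b), whereas here a plain dot-product score and a generic LSTM-cell callable stand in for those pieces.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def match_step(h_doc_i, H_q, h_prev, c_prev, lstm_cell):
    """One step of a (simplified) match-LSTM-style recurrence.

    h_doc_i : (d,)   encoding of the i-th document token
    H_q     : (m, d) encodings of the m question tokens
    h_prev, c_prev : previous LSTM hidden / cell state
    lstm_cell : callable (x, h, c) -> (h, c)
    """
    # Attention over question tokens (dot-product scoring for simplicity;
    # the original model conditions the scores on the previous match-LSTM state).
    scores = H_q @ h_doc_i                    # (m,)
    alpha = softmax(scores)                   # attention weights over the question
    q_summary = alpha @ H_q                   # (d,) weighted question representation

    # Concatenate with the current token encoding and feed into a standard LSTM.
    x = np.concatenate([h_doc_i, q_summary])  # (2d,)
    return lstm_cell(x, h_prev, c_prev)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, m = 8, 5
    H_q = rng.normal(size=(m, d))
    h_doc_i = rng.normal(size=d)
    # Dummy stand-in for an LSTM cell, just to exercise the interface.
    dummy_cell = lambda x, h, c: (np.tanh(x[:d]), c)
    h, c = match_step(h_doc_i, H_q, np.zeros(d), np.zeros(d), dummy_cell)
    print(h.shape)  # (8,)
```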
At the time of writing, mLSTM is state-of-the-art on SQuAD (see Table 3), so it is natural to test it further on NewsQA.

5.2 THE BILINEAR ANNOTATION RE-ENCODING BOUNDARY (BARB) MODEL

The match-LSTM is computationally intensive since it computes an attention over the entire question at each document token in the recurrence. To facilitate faster experimentation with NewsQA we developed a lighter-weight model (BARB) that achieves similar results on SQuAD [2]. Our model consists in four stages:

Encoding: All words in the document and question are mapped to real-valued vectors using the GloVe embedding matrix $W \in \mathbb{R}^{|V| \times d}$. This yields $d_1, \ldots, d_n \in \mathbb{R}^d$ and $q_1, \ldots, q_m \in \mathbb{R}^d$. A bidirectional GRU network (Bahdanau et al., 2015) takes in $d_i$ and encodes contextual states $h_i \in \mathbb{R}^{D_1}$ for the document. The same encoder is applied to $q_j$ to derive contextual states $k_j \in \mathbb{R}^{D_1}$ for the question [3].

Bilinear Annotation: Next we compare the document and question encodings using a set of $C$ bilinear transformations,
$$g_{ij} = h_i^\top T^{[1:C]} k_j, \qquad T_c \in \mathbb{R}^{D_1 \times D_1}, \; g_{ij} \in \mathbb{R}^C,$$
which we use to produce an $(n \times m \times C)$-dimensional tensor of annotation scores, $G = [g_{ij}]$. We take the maximum over the question-token (second) dimension and call the columns of the resulting matrix $g_i \in \mathbb{R}^C$. We use this matrix as an annotation over the document word dimension. Contrasting the multiplicative application of attention vectors, this annotation matrix is to be concatenated to the encoder RNN input in the re-encoding stage.

[Footnote 2: With the configurations for the results reported in Section 6.2, one epoch of training on NewsQA takes about 3.9k seconds for BARB and 8.1k seconds for mLSTM.]
[Footnote 3: A bidirectional GRU concatenates the hidden states of two GRU networks running in opposite directions. Each of these has hidden size $\frac{1}{2} D_1$.]

Re-encoding: For each document word, the input of the re-encoding RNN (another biGRU network) consists of three components: the document encodings $h_i$, the annotation vectors $g_i$, and a binary feature $q_i$ indicating whether the document word appears in the question. The resulting vectors $f_i = [h_i; g_i; q_i]$ are fed into the re-encoding RNN to produce $D_2$-dimensional encodings $e_i$ as input in the boundary-pointing stage.

Boundary pointing: Finally, we search for the boundaries of the answer span using a convolutional network (in a process similar to edge detection). Encodings $e_i$ are arranged in matrix $E \in \mathbb{R}^{D_2 \times n}$. $E$ is convolved with a bank of $n_f$ filters, $F_k^\ell \in \mathbb{R}^{D_2 \times w}$, where $w$ is the filter width, $k$ indexes the different filters, and $\ell$ indexes the layer of the convolutional network. Each layer has the same number of filters of the same dimensions. We add a bias term and apply a nonlinearity (ReLU) following each convolution, with the result an $(n_f \times n)$-dimensional matrix $B_\ell$.

We use two convolutional layers in the boundary-pointing stage. Given $B_1$ and $B_2$, the answer span's start- and end-location probabilities are computed using $p(s) \propto \exp(v_s^\top B_1 + b_s)$ and $p(e) \propto \exp(v_e^\top B_2 + b_e)$, respectively. We also concatenate $p(s)$ to the input of the second convolutional layer (along the $n_f$-dimension) so as to condition the end-boundary pointing on the start-boundary. Vectors $v_s, v_e \in \mathbb{R}^{n_f}$ and scalars $b_s, b_e \in \mathbb{R}$ are trainable parameters.

We also provide an intermediate level of "guidance" to the annotation mechanism by first reducing the feature dimension $C$ in $G$ with mean-pooling, then maximizing the softmax probabilities in the resulting ($n$-dimensional) vector at the positions corresponding to the answer words in each document.
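To make the bilinear annotation stage and the mean-pooled guidance above concrete, here is a small NumPy sketch. It is not the authors' implementation (which, per Appendix A, is in Keras/Theano): the tensor contraction, the pooling, and especially the exact form of the guidance loss are written out under our own assumptions, since the text only states that the softmax probabilities at the answer-word positions are maximized.

```python
import numpy as np

def bilinear_annotation(H, K, T):
    """Bilinear annotation scores for BARB.

    H : (n, D1)      document encodings h_i
    K : (m, D1)      question encodings k_j
    T : (C, D1, D1)  bank of C bilinear maps T_c
    Returns the (n, C) annotation matrix obtained by max-pooling the
    (n, m, C) score tensor G over the question-token dimension.
    """
    # g_{ijc} = h_i^T T_c k_j, computed for all i, j, c at once.
    G = np.einsum('nd,cde,me->nmc', H, T, K)   # (n, m, C)
    return G.max(axis=1)                        # (n, C), one g_i per document word

def auxiliary_guidance_loss(G_annot, answer_positions):
    """A guess at the 'guidance' objective: mean-pool the C annotation
    features, log-softmax over document positions, and maximize the mass
    placed on the answer words (here as a negative log-likelihood).
    The paper does not spell out the exact loss form."""
    scores = G_annot.mean(axis=1)                       # (n,)
    log_probs = scores - np.logaddexp.reduce(scores)    # log-softmax over positions
    return -log_probs[answer_positions].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, D1, C = 30, 6, 8, 4
    H, K = rng.normal(size=(n, D1)), rng.normal(size=(m, D1))
    T = rng.normal(size=(C, D1, D1))
    annot = bilinear_annotation(H, K, T)
    print(annot.shape)                                  # (30, 4)
    print(auxiliary_guidance_loss(annot, [10, 11]))
```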
Thisauxiliary task is observed empirically to improve performance.6 E XPERIMENTS46.1 H UMAN EVALUATIONWe tested four English speakers (three native and one near-native) on a total of 1,000 questions fromtheNewsQA development set. As given in Table 3, they averaged 0.694 in F1, which likely representsa ceiling for machine performance. Our students’ exact match (EM) scores are relatively low at 0.465.This is because in many cases there are multiple ways to select semantically equivalent answers, e.g.,“1996” versus “in 1996”. We also compared human performance on the answers that had agreementwith and without validation, finding a difference of only 1.4 percentage points F1. This suggests ourvalidation stage yields good-quality answers.The original SQuAD evaluation of human performance compares separate answers given by crowd-workers; for a closer comparison with NewsQA , we replicated our human test on the same numberof validation data (1,000). We measured their answers against the second group of crowdsourcedresponses in SQuAD ’s development set, as in Rajpurkar et al. (2016). Our students scored 0.807 inF1.6.2 M ODEL PERFORMANCEPerformance of the baseline models and humans is measured by EM and F1 with the official evaluationscript from SQuAD and listed in Table 3. Unless otherwise stated, hyperparameters are determinedbyhyperopt (Appendix A). The gap between human and machine performance on NewsQA isa striking 0.198 points F1 — much larger than the gap on SQuAD (0.098) under the same humanevaluation scheme. The gaps suggest a large margin for improvement with automated methods.Figure 1 stratifies model (BARB) performance according to answer type (left) and reasoning type(right) as defined in Sections 4.1 and 4.2, respectively. The answer-type stratification suggests that4All experiments in this section use the subset of NewsQA dataset with answer agreements (92,549 samplesfor training, 5,166 for validation, and 5,126 for testing). We leave the challenge of identifying the unanswerablequestions for future work.7Under review as a conference paper at ICLR 2017Table 3: Performance of several methods and humans on the SQuAD andNewsQA datasets. Su-perscript 1 indicates the results are taken from Rajpurkar et al. (2016), and 2 from Wang & Jiang(2016b).SQuAD Exact Match F1Model Dev Test Dev TestRandom10.11 0.13 0.41 0.43mLSTM20.591 0.595 0.700 0.703BARB 0.591 - 0.709 -Human10.803 0.770 0.905 0.868Human (ours) 0.650 - 0.807 -NewsQA Exact Match F1Model Dev Test Dev TestRandom 0.00 0.00 0.30 0.30mLSTM 0.344 0.349 0.496 0.500BARB 0.361 0.341 0.496 0.482Human 0.465 - 0.694 -Date/timeNumericPersonAdjective PhraseLocationPrepositional PhraseCommon Noun PhraseOtherOther entityClause PhraseVerb Phrase00.20.40.60.8F1EMWord MatchingParaphrasingInferenceSynthesisAmbiguous/ Insufficient0.0000.1500.3000.4500.6000.7500.900NewsQASQuADFigure 1: Left: BARB performance (F1 and EM) stratified by answer type on the full developmentset of NewsQA .Right : BARB performance (F1) stratified by reasoning type on the human-assessedsubset on both NewsQA andSQuAD . Error bars indicate performance differences between BARBand human annotators.the model is better at pointing to named entities compared to other types of answers. The reasoning-type stratification, on the other hand, shows that questions requiring inference andsynthesis are,not surprisingly, more difficult for the model. Consistent with observations in Table 3, stratifiedperformance on NewsQA is significantly lower than on SQuAD . 
The difference is smallest on word matching and largest on synthesis. We postulate that the longer stories in NewsQA make synthesizing information from separate sentences more difficult, since the relevant sentences may be farther apart. This requires the model to track longer-term dependencies.

6.3 SENTENCE-LEVEL SCORING

We propose a simple sentence-level subtask as an additional quantitative demonstration of the relative difficulty of NewsQA. Given a document and a question, the goal is to find the sentence containing the answer span. We hypothesize that simple techniques like word-matching are inadequate to this task owing to the more involved reasoning required by NewsQA.

We employ a technique that resembles inverse document frequency (idf), which we call inverse sentence frequency (isf). Given a sentence $S_i$ from an article and its corresponding question $Q$, the isf score is given by the sum of the idf scores of the words common to $S_i$ and $Q$ (each sentence is treated as a document for the idf computation). The sentence with the highest isf is taken as the answer sentence $S^{*}$, that is,
$$S^{*} = \arg\max_i \sum_{w \in S_i \cap Q} \mathrm{isf}(w).$$

The isf method achieves an impressive 79.4% sentence-level accuracy on SQuAD's development set but only 35.4% accuracy on NewsQA's development set, highlighting the comparative difficulty of the latter. To eliminate the difference in article length as a possible cause of the performance difference, we also artificially increased the article lengths in SQuAD by concatenating adjacent SQuAD articles from the same Wikipedia document. Accuracy decreases as expected with the increased SQuAD article length, yet remains significantly higher than that on NewsQA with comparable or even larger article length (Table 4).

Table 4: Sentence-level accuracy on artificially-lengthened SQuAD documents.

                    SQuAD                              NewsQA
# documents         1      3      5      7      9      1
Avg # sentences     4.9    14.3   23.2   31.8   40.3   30.7
isf                 79.6   74.9   73.0   72.3   71.0   35.4

7 CONCLUSION

We have introduced a challenging new comprehension dataset: NewsQA. We collected the 100,000+ examples of NewsQA using teams of crowdworkers, who variously read CNN articles or highlights, posed questions about them, and determined answers. Our methodology yields diverse answer types and a significant proportion of questions that require some reasoning ability to solve. This makes the corpus challenging, as confirmed by the large performance gap between humans and deep neural models (0.198 in F1). By its size and complexity, NewsQA makes a significant extension to the existing body of comprehension datasets. We hope that our corpus will spur further advances in machine comprehension and guide the development of literate artificial intelligence.

ACKNOWLEDGMENTS

The authors would like to thank Çağlar Gülçehre, Sandeep Subramanian and Saizheng Zhang for helpful discussions, and Pranav Subramani for the graphs.

REFERENCES

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015.

Ondrej Bajgar, Rudolf Kadlec, and Jan Kleindienst. Embracing data abundance: Booktest dataset for reading comprehension. arXiv preprint arXiv:1610.00956, 2016.

J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio. Theano: a CPU and GPU math expression compiler. In Proc. of SciPy, 2010.

Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the cnn / daily mail reading comprehension task.
In Association for Computational Linguistics (ACL) , 2016.François Chollet. keras. https://github.com/fchollet/keras , 2015.Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neuralnetworks. In Aistats , volume 9, pp. 249–256, 2010.Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, MustafaSuleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in NeuralInformation Processing Systems , pp. 1684–1692, 2015.Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Readingchildren’s books with explicit memory representations. ICLR , 2016.Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with theattention sum reader network. arXiv preprint arXiv:1603.01547 , 2016.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR , 2015.Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neuralnetworks. ICML (3) , 28:1310–1318, 2013.Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for wordrepresentation. In EMNLP , volume 14, pp. 1532–43, 2014.9Under review as a conference paper at ICLR 2017Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions formachine comprehension of text. arXiv preprint arXiv:1606.05250 , 2016.Matthew Richardson, Christopher JC Burges, and Erin Renshaw. Mctest: A challenge dataset for theopen-domain machine comprehension of text. In EMNLP , volume 1, pp. 2, 2013.Mrinmaya Sachan, Avinava Dubey, Eric P Xing, and Matthew Richardson. Learning answerentailingstructures for machine comprehension. In Proceedings of ACL , 2015.Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamicsof learning in deep linear neural networks. arXiv preprint arXiv:1312.6120 , 2013.Alessandro Sordoni, Philip Bachman, and Yoshua Bengio. Iterative alternating neural attention formachine reading. arXiv preprint arXiv:1606.02245 , 2016.Adam Trischler, Zheng Ye, Xingdi Yuan, Jing He, Philip Bachman, and Kaheer Suleman. A parallel-hierarchical model for machine comprehension on sparse data. In Proceedings of the 54th AnnualMeeting of the Association for Computational Linguistics , 2016a.Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehensionwith the epireader. In EMNLP , 2016b.Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. Machine comprehension with syntax,frames, and semantics. In Proceedings of ACL, Volume 2: Short Papers , pp. 700, 2015.Shuohang Wang and Jing Jiang. Learning natural language inference with lstm. NAACL , 2016a.Shuohang Wang and Jing Jiang. Machine comprehension using match-lstm and answer pointer. arXivpreprint arXiv:1608.07905 , 2016b.10Under review as a conference paper at ICLR 2017APPENDICESA I MPLEMENTATION DETAILSBoth mLSTM and BARB are implemented with the Keras framework (Chollet, 2015) using theTheano (Bergstra et al., 2010) backend. Word embeddings are initialized using GloVe vectors(Pennington et al., 2014) pre-trained on the 840-billion Common Crawl corpus. The word embeddingsare not updated during training. Embeddings for out-of-vocabulary words are initialized with zero.For both models, the training objective is to maximize the log likelihood of the boundary pointers.Optimization is performed using stochastic gradient descent (with a batch-size of 32) with the ADAMoptimizer (Kingma & Ba, 2015). 
The initial learning rate is 0.003 for mLSTM and 0.0005 for BARB.The learning rate is decayed by a factor of 0.7 if validation loss does not decrease at the end of eachepoch. Gradient clipping (Pascanu et al., 2013) is applied with a threshold of 5.Parameter tuning is performed on both models using hyperopt5. For each model, configurationsfor the best observed performance are as follows:mLSTMBoth the pre-processing layer and the answer-pointing layer use bi-directional RNN with a hiddensize of 192. These settings are consistent with those used by Wang & Jiang (2016b).Model parameters are initialized with either the normal distribution ( N(0;0:05)) or the orthogonalinitialization (O, Saxe et al. 2013) in Keras. All weight matrices in the LSTMs are initialized with O.In the Match-LSTM layer, Wq,Wp, andWrare initialized with O,bpandware initialized with N,andbis initialized as 1.In the answer-pointing layer, VandWaare initialized with O,baandvare initialized with N, andcis initialized as 1.BARBFor BARB, the following hyperparameters are used on both SQuAD andNewsQA :d= 300 ,D1=128,C= 64 ,D2= 256 ,w= 3, andnf= 128 . Weight matrices in the GRU, the bilinear models, aswell as the boundary decoder ( vsandve) are initialized with O. The filter weights in the boundarydecoder are initialized with glorot_uniform (Glorot & Bengio 2010, default in Keras). The bilinearbiases are initialized with N, and the boundary decoder biases are initialized with 0.B D ATA COLLECTION USER INTERFACEHere we present the user interfaces used in question sourcing, answer sourcing, and question/answervalidation.5https://github.com/hyperopt/hyperopt11Under review as a conference paper at ICLR 2017Figure 2: Examples of user interfaces for question sourcing, answer sourcing, and validation.12Under review as a conference paper at ICLR 2017Figure 3: Question sourcing instructions for the crowdworkers.13
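As a closing illustration for this paper, the inverse sentence frequency (isf) baseline of Section 6.3 fits in a few lines. The sketch below uses naive whitespace tokenization and a standard log(N/df) weighting; the paper does not specify its tokenizer or smoothing, so treat those details (and the function name) as assumptions made here for concreteness.

```python
import math
from collections import Counter

def isf_answer_sentence(sentences, question):
    """Pick the sentence with the highest inverse-sentence-frequency (isf)
    overlap with the question. Each sentence is treated as a 'document' for
    the idf computation; tokenization is naive whitespace splitting."""
    sent_tokens = [set(s.lower().split()) for s in sentences]
    q_tokens = set(question.lower().split())
    n = len(sentences)

    # isf(w) = log(N / number of sentences containing w); one common idf form,
    # chosen here since the paper does not give the exact weighting.
    df = Counter(w for toks in sent_tokens for w in toks)
    isf = {w: math.log(n / df[w]) for w in df}

    scores = [sum(isf[w] for w in toks & q_tokens) for toks in sent_tokens]
    return max(range(n), key=scores.__getitem__)

if __name__ == "__main__":
    article = [
        "The storm hit the coast on Tuesday .",
        "Officials said three people were injured .",
        "Schools in the area will remain closed .",
    ]
    print(isf_answer_sentence(article, "How many people were injured ?"))  # 1
```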
rJhGgCHVe
HJV1zP5xg
ICLR.cc/2017/conference/-/paper363/official/review
{"title": "good problem - but results are somewhat unclear", "rating": "4: Ok but not good enough - rejection", "review": "\nThe paper addresses an important problem - namely on how to improve diversity in responses. It is applaudable that the authors show results on several tasks showing the applicability across different problems. \n\nIn my view there are two weaknesses at this point\n\n1) the improvements (for essentially all tasks) seem rather minor and do not really fit the overall claim of the paper\n\n2) the approach seems quite ad hoc and it unclear to me if this is something that will and should be widely adopted. Having said this the gist of the proposed solution seems interesting but somewhat premature. ", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models
["Ashwin K Vijayakumar", "Michael Cogswell", "Ramprasaath R. Selvaraju", "Qing Sun", "Stefan Lee", "David Crandall", "Dhruv Batra"]
Neural sequence models are widely used to model time-series data. Equally ubiquitous is the usage of beam search (BS) as an approximate inference algorithm to decode output sequences from these models. BS explores the search space in a greedy left-to-right fashion, retaining only the top B candidates. This tends to result in sequences that differ only slightly from each other. Producing lists of nearly identical sequences is not only computationally wasteful but also typically fails to capture the inherent ambiguity of complex AI tasks. To overcome this problem, we propose Diverse Beam Search (DBS), an alternative to BS that decodes a list of diverse outputs by optimizing a diversity-augmented objective. We observe that our method not only improves diversity but also finds better top-1 solutions by controlling the exploration and exploitation of the search space. Moreover, these gains are achieved with minimal computational or memory overhead compared to beam search. To demonstrate the broad applicability of our method, we present results on image captioning, machine translation, conversation and visual question generation using both standard quantitative metrics and qualitative human studies. We find that our method consistently outperforms BS and previously proposed techniques for diverse decoding from neural sequence models.
["Deep learning", "Computer vision", "Natural language processing"]
https://openreview.net/forum?id=HJV1zP5xg
https://openreview.net/pdf?id=HJV1zP5xg
https://openreview.net/forum?id=HJV1zP5xg&noteId=rJhGgCHVe
Under review as a conference paper at ICLR 2017DIVERSE BEAM SEARCH :DECODING DIVERSE SOLUTIONS FROMNEURAL SEQUENCE MODELSAshwin K Vijayakumar1, Michael Cogswell1, Ramprasaath R. Selvaraju1, Qing Sun1Stefan Lee1, David Crandall2& Dhruv Batra1{ashwinkv,cogswell,ram21,sunqing,steflee}@vt.edudjcran@indiana.edu ,dbatra@vt.edu1Department of Electrical and Computer Engineering,Virginia Tech, Blacksburg, V A, USA2School of Informatics and ComputingIndiana University, Bloomington, IN, USAABSTRACTNeural sequence models are widely used to model time-series data. Equally ubiq-uitous is the usage of beam search (BS) as an approximate inference algorithm todecode output sequences from these models. BS explores the search space in agreedy left-right fashion retaining only the top Bcandidates. This tends to resultin sequences that differ only slightly from each other. Producing lists of nearlyidentical sequences is not only computationally wasteful but also typically failsto capture the inherent ambiguity of complex AI tasks. To overcome this prob-lem, we propose Diverse Beam Search (DBS), an alternative to BS that decodes alist of diverse outputs by optimizing a diversity-augmented objective. We observethat our method not only improved diversity but also finds better top 1 solutionsby controlling for the exploration and exploitation of the search space. Moreover,these gains are achieved with minimal computational or memory overhead com-pared to beam search. To demonstrate the broad applicability of our method, wepresent results on image captioning, machine translation, conversation and visualquestion generation using both standard quantitative metrics and qualitative hu-man studies. We find that our method consistently outperforms BS and previouslyproposed techniques for diverse decoding from neural sequence models.1 I NTRODUCTIONIn the last few years, Recurrent Neural Networks (RNNs), Long Short-Term Memory networks(LSTMs) or more generally, neural sequence models have become the standard choice for modelingtime-series data for a wide range of applications including speech recognition (Graves et al., 2013),machine translation (Bahdanau et al., 2014), conversation modeling (Vinyals & Le, 2015), imageand video captioning (Vinyals et al., 2015; Venugopalan et al., 2015), and visual question answering(Antol et al., 2015). RNN based sequence generation architectures model the conditional probability,Pr(yjx)of an output sequence y= (y1;:::;yT)given an input x(possibly also a sequence); wherethe output tokens ytare from a finite vocabulary, V.Inference in RNNs. Maximum a Posteriori (MAP) inference for RNNs is the task of finding themost likely output sequence given the input. Since the number of possible sequences grows asjVjT, exact inference is NP-hard – so, approximate inference algorithms like beam search (BS) arecommonly employed. BS is a heuristic graph-search algorithm that maintains the Btop-scoringpartial sequences expanded in a greedy left-to-right fashion. Fig. 1 shows a sample BS search tree.Lack of Diversity in BS. 
Despite the widespread usage of BS, it has long been understood that solutions decoded by BS are generic and lacking in diversity (Finkel et al., 2006; Gimpel et al., 2013; Li et al., 2015; Li & Jurafsky, 2016). Comparing the human (bottom) and BS (top) generated captions shown in Fig. 1 demonstrates this deficiency.

[Figure 1 (panels: Beam Search, Diverse Beam Search, Ground Truth Captions): Comparing image captioning outputs decoded by BS (top) and our method, Diverse Beam Search (middle) – we notice that BS captions are near-duplicates with similar shared paths in the search tree and minor variations in the end. In contrast, DBS captions are significantly diverse and similar to the variability in human-generated ground truth captions (bottom).]

While this behavior of BS is disadvantageous for many reasons, we highlight the three most crucial ones here:

i) The production of near-identical beams makes BS a computationally wasteful algorithm, with essentially the same computation being repeated for no significant gain in performance.

ii) Due to loss-evaluation mismatch (i.e. improvements in posterior-probabilities not necessarily corresponding to improvements in task-specific metrics), it is common practice to deliberately throttle BS to become a poorer optimization algorithm by using reduced beam widths (Vinyals et al., 2015; Karpathy & Fei-Fei, 2015; Ferraro et al., 2016). This treatment of an optimization algorithm as a hyperparameter is not only intellectually dissatisfying but also has a significant practical side-effect – it leads to the decoding of largely bland, generic, and "safe" outputs, e.g. always saying "I don't know" in conversation models (Kannan et al., 2016).

iii) Most importantly, lack of diversity in the decoded solutions is fundamentally crippling in AI problems with significant ambiguity – e.g. there are multiple ways of describing an image or responding in a conversation that are "correct", and it is important to capture this ambiguity by finding several diverse plausible hypotheses.

Overview and Contributions. To address these shortcomings, we propose Diverse Beam Search (DBS) – a general framework to decode a set of diverse sequences that can be used as an alternative to BS.
At a high level, DBS decodes diverse lists by dividing the given beam budget into groups andenforcing diversity between groups of beams. Drawing from recent work in the probabilistic graph-ical models literature on Diverse M-Best (DivMBest) MAP inference (Batra et al., 2012; Prasadet al., 2014; Kirillov et al., 2015), we optimize an objective that consists of two terms – the sequencelikelihood under the model and a dissimilarity term that encourages beams across groups to differ.This diversity-augmented model score is optimized in a doubly greedy manner – greedily optimizingalong both time (like BS) and groups (like DivMBest).Our primary technical contribution is Diverse Beam Search, a doubly greedy approximate infer-ence algorithm to decode diverse sequences from neural sequence models. We report results onimage captioning, machine translation, conversations and visual question generation to demonstratethe broad applicability of DBS. Results show that DBS produces consistent improvements on bothtask-specific oracle and other diversity-related metrics while maintaining run-time and memory re-quirements similar to BS. We also evaluate human preferences between image captions generated byBS or DBS. Further experiments show that DBS is robust over a wide range of its parameter valuesand is capable of encoding various notions of diversity through different forms of the diversty term.Overall, our algorithm is simple to implement and consistently outperforms BS in a wide rangeof domains without sacrificing efficiency. Our implementation is publicly available at https://github.com/ashwinkalyan/dbs . Additionally, we provide an interactive demonstrationof DBS for image captioning at http://dbs.cloudcv.org .2Under review as a conference paper at ICLR 20172 P RELIMINARIES : DECODING RNN S WITH BEAM SEARCHWe begin with a refresher on BS, before describing our generalization, Diverse Beam Search.For notational convenience, let [n]denote the set of natural numbers from 1tonand let v[n]=[v1;:::;vn]|index the first nelements of a vector v2Rm.The Decoding Problem. RNNs are trained to estimate the likelihood of sequences of tokens from afinite dictionaryVgiven an input x. The RNN updates its internal state and estimates the conditionalprobability distribution over the next output given the input and all previous output tokens. Wedenote the logarithm of this conditional probability distribution over all tokens at time tas(yt) =log Pr(ytjyt1;:::;y 1;x). To avoid notational clutter, we index ()with a single variable yt, butit should be clear that it depends on all previous outputs, y[t1]. We write the logprobabilityof a partial solution ( i.e. the sum of logprobabilities of all tokens decoded so far) as (y[t]) =P2[t](y). The decoding problem is then the task of finding a sequence ythat maximizes (y).As each output is conditioned on all the previous outputs, decoding the optimal length- Tsequence inthis setting can be viewed as MAP inference on a T-order Markov chain with nodes correspondingto output tokens at each time step. Not only does the size of the largest factor in such a graph growasjVjT, but computing these factors also requires repetitively evaluating the sequence model. Thus,approximate algorithms are employed and the most prevalent method is beam search (BS).Beam search is a heuristic search algorithm which stores the top Bhighest scoring partial candidatesat each time step; where Bis known as the beam width . 
Let us denote the set of Bsolutions heldby BS at the start of time tasY[t1]=fy1;[t1];:::;yB;[t1]g. At each time step, BS considers allpossible single token extensions of these beams given by the set Yt=Y[t1]V and retains the Bhighest scoring extensions. More formally, at each step the beams are updated asY[t]= argmaxy1;[t];:::;yB;[t]2YtXb2[B](yb;[t])s:t:yi;[t]6=yj;[t]8i6=j: (1)The above objective can be trivially maximized by sorting all BjVj members ofYtby their logprobabilities and selecting the top B. This process is repeated until time Tand the most likelysequence is selected by ranking the Bcomplete beams according to their logprobabilities.While this method allows for multiple sequences to be explored in parallel, most completions tend tostem from a single highly valued beam – resulting in outputs that are often only minor perturbationsof a single sequence (and typically only towards the end of the sequences).3 D IVERSE BEAM SEARCH : FORMULATION AND ALGORITHMTo overcome this, we augment the objective in Eq. 1 with a dissimilarity term (Y[t])that measuresthe diversity between candidate sequences, assigning a penalty (Y[t])[c]to each possible sequencecompletionc2V. Jointly optimizing this augmented objective for all Bcandidates at each time stepis intractable as the number of possible solutions grows with jVjB(easily 1060for typical languagemodeling settings). To avoid this, we opt for a greedy procedure that divides the beam budget BintoGgroups and promotes diversity between these groups. The approximation is doubly greedy– across both time and groups – so (Y[t])is constant with respect to other groups and we cansequentially optimize each group using regular BS. We now explain the specifics of our approach.Diverse Beam Search. As joint optimization is intractable, we form Gsmaller groups of beamsand optimize them sequentially. Consider a partition of the set of beams Y[t]intoGsmaller setsYg[t];g2[G]ofB0=B=G beams each (we pick Gto divideB). In the example shown in Fig. 2,B= 6beams are divided into G= 3differently colored groups containing B0= 2beams each.Considering diversity only between groups, reduces the search space at each time step; however,inference remains intractable. To enforce diversity efficiently, we consider a greedy strategy thatsteps each group forward in time sequentially while considering the others fixed. Each group canthen evaluate the diversity term with respect to the fixed extensions of previous groups, returning thesearch space to B0jVj . In the snapshot shown in Fig. 2, the third group is being stepped forwardat time stept= 4and the previous groups have already been completed. With this staggered beam-front, the diversity term of the third group can be computed using these completions. Here we use3Under review as a conference paper at ICLR 2017Group 1 Group 2 Group 3a flock of birds flying overa flock of birds flying inbirds flying over the waterbirds flying over an oceanseveral birds areseveral birds flyModify scores to include diversity:(`the0) +(`birds0;`the0;`an0)[`the0]...(`over0) +(`birds0;`the0;`an0)[`over0]??a flock of birds flying over the oceana flock of birds flying over a beachbirds flying over the water in the sunbirds flying the water near a mountainseveral birds are flying over a body of waterseveral birds flying over a body of watertimetFigure 2: Diverse beam search operates left-to-right through time and top to bottom through groups. 
Diversitybetween groups is combined with joint logprobabilities, allowing continuations to be found efficiently. Theresulting outputs are more diverse than for standard approaches.hamming diversity, which adds diversity penalty -1 for each appearance of a possible extension wordat the same time step in a previous group – ‘birds’, ‘the’, and ‘an’ in the example – and 0 to all otherpossible completions. We discuss other forms for the diversity function in Section 5.1.As we optimize each group with the previous groups fixed, extending group gat timetamounts toa standard BS using dissimilarity augmented logprobabilities and can be written as:Yg[t]= argmaxyg1;[t];:::;ygB0;[t]2YgtXb2[B0]ygb;[t]+ g1[h=1Yh[t]![ygb;t]; (2)s:t:0;ygi;[t]6=ygj;[t]8i6=jwhereis scalar controlling the strength of the diversity term. The full procedure to obtain diversesequences using our method, Diverse Beam Search (DBS), is presented in Algorithm 1. It consistsof two main steps for each group at each time step –1) augmenting the logprobabilities of each possible extension with the diversity term computedfrom previously advanced groups (Algorithm 1, Line 5) and,2) running one step of a smaller BS with B0beams using the augmented logprobabilities to extendthe current group (Algorithm 1, Line 6).Note that the first group ( g= 1) is not ‘conditioned’ on other groups during optimization, so ourmethod is guaranteed to perform at least as well as a beam search of size B0.Algorithm 1: Diverse Beam Search1Perform a diverse beam search with Ggroups using a beam width of B2fort= 1; ::: T do// perform one step of beam search for first group without diversity3Y1[t] argmax(y11;[t];:::;y1B0;[t])Pb2[B0](y1b;[t])4 forg= 2; ::: G do// augment logprobabilities with diversity penalty5 (ygb;[t]) (ygb;[t]) +(Sg1h=1Yh[t])[ygb;t]b2[B0];ygb;[t]2Ygtand>0// perform one step of beam search for the group6Yg[t] argmaxyg1;[t];:::;ygB0;[t]Pb2[B0](ygb;[t]) s.t.yi;[t]6=yj;[t]8i6=j7Return set of B solutions, Y[T]=SGg=1Yg[T]4 R ELATED WORKDiverse M-Best Lists. The task of generating diverse structured outputs from probabilistic modelshas been studied extensively (Park & Ramanan, 2011; Batra et al., 2012; Kirillov et al., 2015; Prasadet al., 2014). Batra et al. (2012) formalized this task for Markov Random Fields as the DivMBestproblem and presented a greedy approach which solves for outputs iteratively, conditioning on pre-vious solutions to induce diversity. Kirillov et al. (2015) show how these solutions can be found4Under review as a conference paper at ICLR 2017jointly (non-greedily) for certain kinds of energy functions. The techniques developed by Kirillovare not directly applicable to decoding from RNNs, which do not satisfy the assumptions made.Most related to our proposed approach is the work of Gimpel et al. (2013), who applied DivMBestto machine translation using beam search as a black-box inference algorithm. Specifically, in thisapproach, DivMBest knows nothing about the inner-workings of BS and simply makes Bsequentialcalls to BS to generate Bdiverse solutions. This approach is extremely wasteful because BS iscalledBtimes, run from scratch every time, and even though each call to BS produces Bsolutions,only one solution is kept by DivMBest. In contrast, DBS avoids these shortcomings by integratingdiversity within BS such that no beams are discarded . By running multiple beam searches in paralleland at staggered time offsets, we obtain large time savings making our method comparable to asingle run of classical BS. 
One potential disadvantage of our method w.r.t. Gimpel et al. (2013) isthat sentence-level diversity metrics cannot be incorporated in DBS since no group is complete whendiversity is encouraged. However, as observed empirically by us and Li et al. (2015), initial wordstend to disproportionally impact the diversity of the resultant sequences – suggesting that later wordsmay not be important for diverse inference.Diverse Decoding for RNNs. Efforts have been made by Li et al. (2015) and Li & Jurafsky (2016)to produce diverse decodings from recurrent models for conversation modeling and machine trans-lation. Both of these works propose new heuristics for creating diverse M-Best lists and employmutual information to re-rank lists of sequences. The latter achieves a goal separate from ours,which is simply to re-rank diverse lists.Li & Jurafsky (2016) proposes a BS diversification heuristic that discourages beams from sharingcommon roots, implicitly resulting in diverse lists. Introducing diversity through a modified objec-tive (as in DBS) rather than via a procedural heuristic provides easier generalization to incorporatedifferent notions of diversity and control the exploration-exploitation trade-off as detailed in Section5.1. Furthermore, we find that DBS outperforms the method of Li & Jurafsky (2016).Li et al. (2015) introduced a novel decoding objective that maximizes mutual information betweeninputs and predicted outputs to penalize generic sequences. This operates on a principle orthogo-nal and complementary to DBS and Li & Jurafsky (2016). It works by penalizing utterances thatare generally more frequent (diversity independent of input) rather than penalizing utterances thatare similar to other utterances produced for the same input (diversity conditioned on input). Fur-thermore, the input-independent approach requires training a new language model for the targetlanguage while DBS just requires a diversity function . Combination of these complementarytechniques is left as interesting future work.In other recent work, Wu et al. (2016) modify the beam search objective by introducing length-normalization to favor longer sequences and a coverage penalty that favors sequences that accountfor the complete input sequence. While the coverage term does not generalize to all neural sequencemodels, the length-normalization term can be implemented by modifying the joint- log-probabilityof each sequence. Although the goal of this method is not to produce diverse lists and hence notdirectly comparable, it is a complementary technique that can be used in conjunction with our diversedecoding method.5 E XPERIMENTSIn this section, we evaluate our approach on image captioning, machine translation, conversation andvisual question generation tasks to demonstrate both its effectiveness against baselines and its gen-eral applicability to any inference currently supported by beam search. We also analyze the effectsof DBS parameters, explore human preferences for diversity, and discuss diversity’s importance inexplaining complex images. We first explain the baselines and evaluations used in this paper.Baselines & Metrics. Apart from classical beam search, we compare DBS with the diverse decodingmethod proposed in Li & Jurafsky (2016). We also compare against two other complementarydecoding techniques proposed in Li et al. (2015) and Wu et al. (2016). Note that these two techniquesare not directly comparable with DBS since the goal is not to produce diverse lists. 
We now providea brief description of the comparisons mentioned:- Li & Jurafsky (2016): modify BS by introducing an intra-sibling rank. For each partial solution,the set ofjVjbeam extensions are sorted and assigned intra-sibling ranks k2[jVj]in order5Under review as a conference paper at ICLR 2017of decreasing log probabilities, t(yt). The log probability of an extension is then reduced inproportion to its rank, and continuations are re-sorted under these modified log probabilities toselect the top B‘diverse’ beam extensions.- Li et al. (2015): train an additional unconditioned target sequence model U(y)and perform BSdecoding on an augmented objective P(yjx)U(y), penalizing input-independent decodings.- Wu et al. (2016) modify the beam-search objective by introducing length-normalization that fa-vors longer sequences. The joint log-probability of completed sequences is divided by a factor,(5 +jyj)=(5 + 1), where2[0;1].We compare to our own implementations of these methods as none are publicly available. Both Li& Jurafsky (2016) and Li et al. (2015) develop and use re-rankers to pick a single solution fromthe generated lists. Since we are interested in evaluating the quality of the generated lists and inisolating the gains due to diverse decoding, we do not implement any re-rankers, simply sorting bylog-probability.We evaluate the performance of the generated lists using the following two metrics:-Oracle Accuracy : Oracle or top kaccuracy w.r.t. some task-specific metric, such as BLEU (Pap-ineni et al., 2002) or SPICE (Anderson et al., 2016), is the maximum value of the metric achievedover a list of kpotential solutions. Oracle accuracy is an upper bound on the performance of anyre-ranking strategy and thus measures the maximum potential of a set of outputs.-Diversity Statistics : We count the number of distinct n-grams present in the list of generatedoutputs. Similar to Li et al. (2015), we divide these counts by the total number of words generatedto bias against long sentences.Simultaneous improvements in both metrics indicate that output sequences have increased diversitywithout sacrificing fluency and correctness with respect to target tasks.5.1 S ENSITIVITY ANALYSIS AND EFFECT OF DIVERSITY FUNCTIONSHere we discuss the impact of the number of groups, strength of diversity , and various forms ofdiversity for language models. Note that the parameters of DBS (and other baselines) were tunedon a held-out validation set for each experiment. The supplement provides further discussion andexperimental details.Number of Groups ( G).SettingG=Ballows for the maximum exploration of the search space,while setting G=1reduces DBS to BS, resulting in increased exploitation of the search-space aroundthe 1-best decoding. Empirically, we find that maximum exploration correlates with improved oracleaccuracy and hence use G=Bto report results unless mentioned otherwise. See the supplement fora comparison and more details.Diversity Strength ( ).The diversity strength specifies the trade-off between the model score anddiversity terms. As expected, we find that a higher value of produces a more diverse list; however,very large values of can overpower model score and result in grammatically incorrect outputs. Wesetvia grid search over a range of values to maximize oracle accuracies achieved on the validationset. 
We find a wide range of values (0.2 to 0.8) work well for most tasks and datasets.Choice of Diversity Function ( ).In Section 3, we defined ()as a function over a set of partialsolutions that outputs a vector of dissimilarity scores for all possible beam completions. Assumingthat each of the previous groups influences the completion of the current group independently, wecan simplify (Sg1h=1Yh[t])as the sum of each group’s contributions asPg1h=1(Yh[t]). In Section3, we illustrated a simple hamming diversity of this form that penalizes selection of tokens propor-tionally to the number of time it was used in previous groups. However, this factorized diversityterm can take various forms in our model – with hamming diversity being the simplest. For lan-guage models, we study the effect of using cumulative (i.e. considering all past time steps), n-gramand neural embedding based diversity functions. Each of these forms encode differing notions ofdiversity and result in DBS outperforming BS. We find simple hamming distance to be effective andreport results based on this diversity measure unless otherwise specified. More details about theseforms of the diversity term are provided in the supplementary.6Under review as a conference paper at ICLR 20175.2 I MAGE CAPTIONINGDataset and Models. We evaluate on two datasets – COCO (Lin et al., 2014) and PASCAL-50S(Vedantam et al., 2015). We use the public splits as in Karpathy & Fei-Fei (2015) for COCO.PASCAL-50S is used only for testing (with 200 held out images used to tune hyperparameters). Wetrain a captioning model (Vinyals et al., 2015) using the neuraltalk21code repository.Results. Table 1 shows Oracle (top k) SPICE for different values of k. DBS consistently outper-forms BS and Li & Jurafsky (2016) on both datasets. We observe that gains on PASCAL-50S aremore pronounced (7.14% and 4.65% SPICE@20 improvements over BS and Li & Jurafsky (2016))than COCO. This suggests diverse predictions are especially advantageous when there is a mismatchbetween training and testing sets, implying DBS may be better suited for real-world applications.Table 1 also shows the number of distinct n-grams produced by different techniques. Our methodproduces significantly more distinct n-grams (almost 300% increase in the number of 4-grams pro-duced) as compared to BS. We also note that our method tends to produce slightly longer captionscompared on average. Moreover, on the PASCAL-50S test split we observe that DBS finds morelikely top-1 solutions on average – DBS obtains an average maximum logprobability of -6.53 op-posed to -6.91 found by BS of the same beam width. This empirical evidence suggests that usingDBS as a replacement to BS may lead to lower inference approximation error.Table 1: Oracle accuracy and distinct n-grams on COCO and PASCAL-50S datasets for image captioning atB= 20 . While we report SPICE, we observe similar trends in other metrics (reported in supplement).Dataset Method Oracle Accuracy (SPICE) Diversity Statistics@1 @5 @10 @20 distinct-1 distinct-2 distinct-3 distinct-4Beam Search 4.933 7.046 7.949 8.747 0.12 0.57 1.35 2.50Li & Jurafsky (2016) 5.083 7.248 8.096 8.917 0.15 0.97 2.43 5.31PASCAL-50S DBS 5.357 7.357 8.269 9.293 0.18 1.26 3.67 7.33Wu et al. (2016) 5.301 7.322 8.236 8.832 0.16 1.10 3.16 6.45Li et al. (2015) 5.129 7.175 8.168 8.560 0.13 1.15 3.58 8.42Beam Search 16.278 22.962 25.145 27.343 0.40 1.51 3.25 5.67Li & Jurafsky (2016) 16.351 22.715 25.234 27.591 0.54 2.40 5.69 8.94COCO DBS 16.783 23.081 26.088 28.096 0.56 2.96 7.38 13.44Wu et al. 
(2016) 16.642 22.643 25.437 27.783 0.54 2.42 6.01 7.08Li et al. (2015) 16.749 23.271 26.104 27.946 0.42 1.37 3.46 6.10Human Studies. To evaluate human preference between captions generated by DBS and BS, weperform a human study via Amazon Mechanical Turk using all 1000 images of PASCAL-50S. Foreach image, both DBS and standard BS captions are shown to 5 different users. They are then asked–“Which of the two robots understands the image better?” In this forced-choice test, DBS captionswere preferred over BS 60% of the time by human annotators.Is diversity always needed? While these results show that diverse outputs are important for systemsthat interact with users, is diversity always beneficial? While images with many objects ( e.g., a parkor a living room) can be described in multiple ways, the same is not true when there are few objects(e.g., a close up of a cat or a selfie). This notion is studied by Ionescu et al. (2016), which definesa “difficulty score”: the human response time for solving a visual search task. On the PASCAL-50S dataset, we observe a positive correlation ( = 0:73) between difficulty scores and humanspreferring DBS to BS. Moreover, while DBS is generally preferred by humans for ‘difficult’ images,both are about equally preferred on ‘easier’ images. Details are provided in the supplement.5.3 M ACHINE TRANSLATIONWe use the WMT’14 dataset containing 4.5M sentences to train our machine translation models.We train stacking LSTM models as detailed in Luong et al. (2015), consisting of 4 layers and 1024-dimensional hidden states. While decoding sentences, we employ the same strategy to replace UNKtokens. We train our models using the publicly available seq2seq-attn2code repository. We re-port results on news-test-2013 andnews-test-2014 and use the news-test-2012 to tune the parametersof DBS. We use sentence level BLEU scores to compute oracle metrics and report distinct n-grams1https://github.com/karpathy/neuraltalk22https://github.com/harvardnlp/seq2seq-attn7Under review as a conference paper at ICLR 2017similar to image captioning. Results are shown in Table 2 and we again find that DBS consistentlyoutperforms all baselines.Table 2: Quantitative results on English-German translation on the newstest-2013 and newstest-2014 datasetscombined (at B= 20 ).Method Oracle Accuracy (BLEU-4) Diversity Statistics@1 @5 @10 @20 distinct-1 distinct-2 distinct-3 distinct-4Beam Search 20.5 22.4 23.8 24.2 0.04 0.75 2.10 3.23Li & Jurafsky (2016) 20.7 22.6 24.0 24.3 0.04 0.81 2.92 4.61DBS 20.8 22.9 24.4 24.8 0.06 0.95 3.67 5.54Wu et al. (2016) 20.6 22.6 24.3 24.6 0.05 0.88 2.83 4.50Li et al. (2015) 20.7 23.1 24.4 24.6 0.04 0.86 2.76 4.315.4 D IALOG GENERATIONDialog generation is a task that is inherently diverse as there are multiple valid ways to respond to astatement. We train a seq2seq model consisting of LSTMs as in Vinyals & Le (2015) on the CornellMovie Dialogs Corpus (Danescu-Niculescu-Mizil & Lee, 2011) using the neuralconvo3reposi-tory. The training dataset consists of 222,182 conversational exchanges between movie characters.Since automatic evaluation of dialog generation responses is an open research problem with exist-ing metrics being poorly correlated with human judgement (Liu et al., 2016), we show qualitativeresults to demonstrate the effectiveness of DBS. Table 3 compares BS and DBS at B=3.Table 3: Qualitative examples of dialog generation comparing best-first search, BS and DBSPrompt Beam Search Diverse Beam SearchWhy do you want to be the president?I don’t know. 
I don’t know.I don’t know... Because you know!I don’t know, but I am the best Because, I have another life.It’s such a beautiful day!I know! I know.It’s true. Well...Yeah. That’s not true, honey.What’s your name?Jacob. Jacob.Berger, my man. Berger, darlingBerger, Thomas. My mother used to hum that to me.5.5 V ISUAL QUESTION GENERATIONWe also report results on Visual Question Generation (VQG) (Mostafazadeh et al., 2016), wherea model is trained to produce questions about an image . Generating visually focused questions isinteresting because it requires reasoning about multiple problems that are central to vision – e.g.,object attributes, relationships between objects, and natural language. Furthermore, many questionscould make sense for one image, so it is important that lists of generated questions be diverse.We use the VQA dataset (Antol et al., 2015) to train a model similar to image captioning architec-tures. Instead of captions, the training set now consists of 3 questions per image. Similar to previousresults, using beam search to sample outputs results in similarly worded questions (see Fig. 3) andDBS brings out new details captured by the model. Counting the number of types of questions gen-erated (as defined by Antol et al. (2015)) allows us to measure this diversity. We observe that thenumber of question types generated per image increases from 2:3for BS to 3:7for DBS (atB= 6).6 C ONCLUSIONBeam search is widely a used approximate inference algorithm for decoding sequences from neuralsequence models; however, it suffers from a lack of diversity. Producing multiple highly similarand generic outputs is not only wasteful in terms of computation but also detrimental for tasks with3https://github.com/macournoyer/neuralconvo8Under review as a conference paper at ICLR 2017Input Image Beam Search Diverse Beam SearchWhat sport is this? What color is the man’s shirt?What sport is being played? What is the man holding?What color is the man’s shirt? What is the man wearing on his head?What color is the ball? Is the man wearing a helmetWhat is the man wearing? What is the man in the white shirt doing?What color is the man’s shorts? Is the man in the background wearing a helmet?How many zebras are there? How many zebras are there?How many zebras are in the photo? How many zebras are in the photo?How many zebras are in the picture? What is the zebra doing?How many animals are there? What color is the grass?How many zebras are shown? Is the zebra eating?What is the zebra doing? Is the zebra in the wild?Figure 3: Qualitative results on Visual Question Generation. DBS generates questions that are non-generic andbelong to different question types.inherent ambiguity like many involving language. In this work, we modify Beam Search with adiversity-augmented sequence decoding objective to produce Diverse Beam Search . We develop a‘doubly greedy’ approximate algorithm to minimize this objective and produce diverse sequencedecodings. Our method consistently outperforms beam search and other baselines across all ourexperiments without extra computation ortask-specific overhead . DBS is task-agnostic and can beapplied to any case where BS is used, which we demonstrate in multiple domains. Our implementa-tion available at https://github.com/ashwinkalyan/dbs .REFERENCESPeter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. Spice: Semantic proposi-tional image caption evaluation. In Proceedings of European Conference on Computer Vision(ECCV) , 2016. 
6Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zit-nick, and Devi Parikh. VQA: Visual question answering. In Proceedings of IEEE Conference onComputer Vision and Pattern Recognition (CVPR) , pp. 2425–2433, 2015. 1, 8Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointlylearning to align and translate. Proceedings of the International Conference on Learning Repre-sentations (ICLR) , 2014. 1Dhruv Batra, Payman Yadollahpour, Abner Guzman-Rivera, and Gregory Shakhnarovich. DiverseM-Best Solutions in Markov Random Fields. In Proceedings of European Conference on Com-puter Vision (ECCV) , 2012. 2, 4Cristian Danescu-Niculescu-Mizil and Lillian Lee. Chameleons in imagined conversations: A newapproach to understanding coordination of linguistic style in dialogs. In Proceedings of the Work-shop on Cognitive Modeling and Computational Linguistics, ACL 2011 , 2011. 8Francis Ferraro, Ishan Mostafazadeh, Nasrinand Misra, Aishwarya Agrawal, Jacob Devlin, RossGirshick, Xiadong He, Pushmeet Kohli, Dhruv Batra, and C Lawrence Zitnick. Visual story-telling. Proceedings of the Conference of the North American Chapter of the Association forComputational Linguistics – Human Language Technologies (NAACL HLT) , 2016. 2Jenny Rose Finkel, Christopher D Manning, and Andrew Y Ng. Solving the problem of cascadingerrors: Approximate bayesian inference for linguistic annotation pipelines. In Proceedings ofthe Conference on Empirical Methods in Natural Language Processing (EMNLP) , pp. 618–626,2006. 1K. Gimpel, D. Batra, C. Dyer, and G. Shakhnarovich. A systematic exploration of diversity in ma-chine translation. In Proceedings of the Conference on Empirical Methods in Natural LanguageProcessing (EMNLP) , 2013. 1, 5, 12Alex Graves, Abdel-rahman Mohamed, and Geoffrey E. Hinton. Speech recognition with deeprecurrent neural networks. abs/1303.5778, 2013. 19Under review as a conference paper at ICLR 2017Radu Tudor Ionescu, Bogdan Alexe, Marius Leordeanu, Marius Popescu, Dim Papadopoulos, andVittorio Ferrari. How hard can it be? Estimating the difficulty of visual search in an image. InProceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) , 2016. 7Anjuli Kannan, Karol Kurach, Sujith Ravi, Tobias Kaufmann, Andrew Tomkins, Balint Miklos,Greg Corrado, László Lukács, Marina Ganea, Peter Young, et al. Smart reply: Automated reep-onse suggestion for email. In Proceedings of the ACM SIGKDD Conference on Knowledge Dis-covery and Data Mining (KDD) , 2016. 2Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descrip-tions. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) ,2015. 2, 7Alexander Kirillov, Bogdan Savchynskyy, Dmitrij Schlesinger, Dmitry Vetrov, and Carsten Rother.Inferring m-best diverse labelings in a single one. In Proceedings of IEEE Conference on Com-puter Vision and Pattern Recognition (CVPR) , 2015. 2, 4Jiwei Li and Dan Jurafsky. Mutual information and diverse decoding improve neural machine trans-lation. arXiv preprint arXiv:1601.00372 , 2016. 2, 5, 6, 7, 8, 13, 14Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. A diversity-promoting objec-tive function for neural conversation models. Proceedings of the Conference of the North Amer-ican Chapter of the Association for Computational Linguistics – Human Language Technologies(NAACL HLT) , 2015. 
2, 5, 6, 7, 8, 13, 14Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, PiotrDollar, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context, 2014. 7Chia-Wei Liu, Ryan Lowe, Iulian Vlad Serban, Michael Noseworthy, Laurent Charlin, and JoellePineau. How NOT to evaluate your dialogue system: An empirical study of unsupervised evalua-tion metrics for dialogue response generation. 2016. URL http://arxiv.org/abs/1603.08023 . 8Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025 , 2015. 7Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed repre-sentations of words and phrases and their compositionality. In Advances in Neural InformationProcessing Systems (NIPS) , 2013. 12Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Margaret Mitchell, Xiaodong He, and Lucy Van-derwende. Generating natural questions about an image. Proceedings of the Annual Meeting onAssociation for Computational Linguistics (ACL) , 2016. 8Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automaticevaluation of machine translation. In Proceedings of the Annual Meeting on Association forComputational Linguistics (ACL) , 2002. 6Dennis Park and Deva Ramanan. N-best maximal decoders for part models. In Proceedings of IEEEInternational Conference on Computer Vision (ICCV) , 2011. 4Adarsh Prasad, Stefanie Jegelka, and Dhruv Batra. Submodular meets structured: Finding diversesubsets in exponentially-large structured item sets. In Advances in Neural Information ProcessingSystems (NIPS) , 2014. 2, 4Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based imagedescription evaluation. In Proceedings of IEEE Conference on Computer Vision and PatternRecognition (CVPR) , 2015. 7Subhashini Venugopalan, Marcus Rohrbach, Jeffrey Donahue, Raymond Mooney, Trevor Darrell,and Kate Saenko. Sequence to sequence-video to text. In Proceedings of IEEE Conference onComputer Vision and Pattern Recognition (CVPR) , pp. 4534–4542, 2015. 110Under review as a conference paper at ICLR 2017Oriol Vinyals and Quoc Le. A neural conversational model. arXiv preprint arXiv:1506.05869 , 2015.1, 8Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neuralimage caption generator. In Proceedings of IEEE Conference on Computer Vision and PatternRecognition (CVPR) , 2015. 1, 2, 7Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey,Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google’s neural machine trans-lation system: Bridging the gap between human and machine translation. arXiv preprintarXiv:1609.08144 , 2016. 5, 6, 7, 8, 13, 1411Under review as a conference paper at ICLR 2017APPENDIXSENSIVITY STUDIESNumber of Groups. Fig. 4 presents snapshots of the transition from BS to DBS at B= 6 andG=f1;3;6g. As beam width moves from 1 to G, the exploration of the method increases resultingin more diverse lists.Figure 4: Effect of increasing the number of groups G. The beams that belong to the same group are coloredsimilarly. Recall that diversity is only enforced across groups such that G= 1corresponds to classical BS.Diversity Strength. As noted in Section 5.1, our method is robust to a wide range of values of thediversity strength ( ). Fig. 5a shows a grid search of for image-captioning on the PASCAL-50Sdataset.Choice of Diversity Function. 
The diversity function can take various forms ranging from sim-ple hamming diversity to neural embedding based diversity. We discuss some forms for languagemodelling below:-Hamming Diversity. This form penalizes the selection of tokens used in previous groupsproportional to the number of times it was selected before.-Cumulative Diversity. Once two sequences have diverged sufficiently, it seems unnecessary andperhaps harmful to restrict that they cannot use the same words at the same time. To encodethis ‘backing-off’ of the diversity penalty we introduce cumulative diversity which keeps acount of identical words used at every time step, indicative of overall dissimilarity. Specifically,(Yh[t])[yg[t]] = expf(P2tPb2B0I[yhb;6=ygb;])=gwhere is a temperature parameter control-ling the strength of the cumulative diversity term and I[]is the indicator function.-n-gram Diversity. The current group is penalized for producing the same n-grams as previousgroups, regardless of alignment in time – similar to Gimpel et al. (2013). This is proportional tothe number of times each n-gram in a candidate occurred in previous groups. Unlike hammingdiversity, n-grams capture higher order structures in the sequences.-Neural-embedding Diversity. While all the previous diversity functions discussed above performexact matches, neural embeddings such as word2vec (Mikolov et al., 2013) can penalize semanti-cally similar words like synonyms. This is incorporated in each of the previous diversity functionsby replacing the hamming similarity with a soft version obtained by computing the cosine simi-larity between word2vec representations. When using with n-gram diversity, the representation ofthe n-gram is obtained by summing the vectors of the constituent words.Each of these various forms encode different notions of diversity. Hamming diversity ensures dif-ferent words are used at different times, but can be circumvented by small changes in sequencealignment. While n-gram diversity captures higher order statistics, it ignores sentence alignment.Neural-embedding based encodings can be seen as a semantic blurring of either the hamming orn-gram metrics, with word2vec representation similarity propagating diversity penalties not only toexact matches but also to close synonyms. Fig. 5b shows the oracle performace of various forms ofthe diversity function described in Section 5.1. We find that using any of the above functions helpoutperform BS in the tasks we examine; hamming diversity achieves the best oracle performancedespite its simplicity.IMAGE CAPTIONING EVALUATIONWhile we report oracle SPICE values in the paper, our method consistently outperforms base-lines and classical BS on other standard metrics such as CIDEr (Table 4), METEOR (Table 5) andROUGE (Table 6). We provide these additional results in this section.12Under review as a conference paper at ICLR 2017(a) Grid search of diversity strength parameter (b) Effect of multiple forms for the diversity functionFigure 5: Fig. 5a shows the results of a grid search of the diversity strength ( ) parameter of DBS on thevalidation split of PASCAL 50S dataset. We observe that it is robust for a wide range of values. Fig. 5bcompares the performance of multiple forms for the diversity function ( ). 
While naïve diversity performs thebest, other forms are comparable while being better than BS.Table 4: CIDEr Oracle accuracy on COCO and PASCAL-50S datasets for image captioning at B= 20 .Dataset Method Oracle Accuracy (CIDEr)@1 @5 @10 @20Beam Search 53.79 83.94 96.70 107.63Li & Jurafsky (2016) 54.61 85.21 99.80 110.64PASCAL-50S DBS 57.82 89.38 103.75 113.43Wu et al. (2016) 47.77 72.12 84.64 105.66Li et al. (2015) 49.80 81.35 96.87 107.37Beam Search 87.27 121.74 133.46 140.98Li & Jurafsky (2016) 91.42 111.33 116.94 119.14COCO DBS 86.88 123.38 135.68 142.88Wu et al. (2016) 87.54 122.06 133.21 139.43Li et al. (2015) 88.18 124.20 138.65 150.06Table 5: METEOR Oracle accuracy on COCO and PASCAL-50S datasets for image captioning at B= 20 .Dataset Method Oracle Accuracy (METEOR)@1 @5 @10 @20Beam Search 12.24 16.74 19.14 21.22Li & Jurafsky (2016) 13.52 17.65 19.91 21.76PASCAL-50S DBS 13.71 18.45 20.67 22.83Wu et al. (2016) 13.34 17.20 18.98 21.13Li et al. (2015) 13.04 17.92 19.73 22.32Beam Search 24.81 28.56 30.59 31.87Li & Jurafsky (2016) 24.88 29.10 31.44 33.56COCO DBS 25.04 29.67 33.25 35.42Wu et al. (2016) 24.82 28.92 31.53 34.14Li et al. (2015) 24.93 30.11 32.34 34.88Modified SPICE evaluation. To measure both the quality and the diversity of the generated cap-tions, we compute SPICE-score by comparing the graph union of all the generated hypotheses withthe ground truth scene graph. This measure rewards all the relevant relations decoded as against ora-cle accuracy that compares to relevant relations present only in the top-scoring caption. We observethat DBS outperforms both baselines under this measure with a score of 18.345 as against a score of16.988 (beam search) and 17.452 (Li & Jurafsky, 2016).13Under review as a conference paper at ICLR 2017Table 6: ROUGE Oracle accuracy on COCO and PASCAL-50S datasets for image captioning at B= 20 .Dataset Method Oracle Accuracy (ROUGE-L)@1 @5 @10 @20Beam Search 45.23 56.12 59.61 62.04Li & Jurafsky (2016) 46.21 56.17 60.15 62.95PASCAL-50S DBS 46.24 56.90 60.35 63.02Wu et al. (2016) 43.73 52.29 56.49 61.65Li et al. (2015) 44.12 54.67 57.34 60.11Beam Search 52.46 58.43 62.56 65.14Li & Jurafsky (2016) 52.87 59.89 63.45 65.42COCO DBS 53.04 60.89 64.24 67.72Wu et al. (2016) 52.13 58.26 62.89 65.77Li et al. (2015) 53.10 59.32 63.04 66.19HUMAN STUDIESFor image-captioning, we conduct a human preference study between BS and DBS captions asexplained in Section 5. A screen shot of the interface used to collect human preferences for captionsgenerated using DBS and BS is presented in Fig. 6. The lists were shuffled to guard the task frombeing gamed by a turker.Table 7: Frequency table for image difficulty and human preference for DBS captions on PASCAL50S datasetdifficulty score # images % images DBSbin range was preffered 481 50.51%[;+] 409 69.92%+ 110 83.63%As mentioned in Section 5, we observe that difficulty score of an image and human preference forDBS captions are positively correlated. The dataset contains more images that are less difficultyand so, we analyze the correlation by dividing the data into three bins. For each bin, we report the% of images for which DBS captions were preferred after a majority vote ( i.e. at least 3/5 turkersvoted in favor of DBS) in Table 7. At low difficulty scores consisting mostly of iconic images – onemight expect that BS would be preferred more often than chance. However, mismatch between thestatistics of the training and testing data results in a better performance of DBS. Some examples forthis case are provided in Fig. 
7. More general qualitative examples are provided in Fig. 8.DISCUSSIONAre longer sentences better? Many recent works propose a scoring or a ranking objective thatdepends on the sequence length. These favor longer sequences, reasoning that they tend to havemore details and resulting in improved accuracies. We measure the correlation between length ofa sequence and its accuracy (here, SPICE) and observe insignificant correlation between SPICEand sequence length. On the PASCAL-50S dataset, we find that BS and DBS have are negativelycorrelated (=0:003and=0:015respectively), while (Li & Jurafsky, 2016) is correlatedpositively (= 0:002). Length is not correlated with performance in this case.Efficient utilization of beam budget. In this experiment, we emperically show that DBS makesefficient use of the beam budget in exploring the search space for better solutions. Fig. 9 shows thevariation of oracle SPICE (@B) with the beam size. At really high beam widths, all decoding tech-niques achieve similar oracle accuracies. However, diverse decoding techniques like DBS achievethe same oracle at much lower beam widths. Hence, DBS not only produces sequence lists that aresignificantly different but also efficiently utilizes the beam budget to decode better solutions.14Under review as a conference paper at ICLR 2017Figure 6: Screen-shot of the interface used to perform human studies15Under review as a conference paper at ICLR 2017Figure 7: For images with low difficulty score, BS captions are preferred to DBS – as show in the first figure.However, we observe that DBS captions perform better when there is a mismatch between the statistics of thetesting and training sets. Interesting captions are colored in blue for readability.16Under review as a conference paper at ICLR 2017Figure 8: For images with a high difficulty score, captions produced by DBS are preferred to BS. Interestingcaptions are colored in blue for readability.17Under review as a conference paper at ICLR 2017(a) Oracle SPICE (@B) vs B (b) Oracle METEOR (@B) vs BFigure 9: As the number of beams increases, all decoding methods tend to achieve about the same oracleaccuracy. However, diverse decoding techniques like DBS utilize the beam budget efficiently achieving higheroracle accuracies at much lower beam budgets.18
HJX-S4mVl
HJV1zP5xg
ICLR.cc/2017/conference/-/paper363/official/review
{"title": "potentially interesting idea but lacking comparisons against other classic search techniques beyond simple beam search", "rating": "6: Marginally above acceptance threshold", "review": "\n\n[ Summary ]\n\nThis paper presents a new modified beam search algorithm that promotes diverse beam candidates. It is a well known problem \u2014with both RNNs and also non-neural language models\u2014 that beam search tends to generate beam candidates that are very similar with each other, which can cause two separate but related problems: (1) search error: beam search may not be able to discover a globally optimal solution as they can easily fall out of the beam early on, (2) simple, common, non-diverse output: the resulting output text tends to be generic and common.\n\nThis paper aims to address the second problem (2) by modifying the search objective function itself so that there is a distinct term that scores diversity among the beam candidates. In other words, the goal of the presented algorithm is not to reduce the search error of the original objective function. In contrast, stack decoding and future cost estimation, common practices in phrase-based SMT, aim to address the search error problem.\n\n[ Merits ]\n\nI think the Diverse Beam Search (DBS) algorithm proposed by the authors has some merits. It may be useful when we cannot rely on traditional beam search on the original objective function either because the trained model is not strong enough, or because of the search error, or because the objective itself does not align with the goal of the application.\n\n[ Weaknesses ]\n\nIt is however not entirely clear how the proposed method compares against more traditional approaches like stack decoding and future cost estimation, on tasks like machine translation, as the authors compare their algorithm mainly against L&J\u2019s diverse LM models and simple beam search.\n\nIn fact, modification to the objective function has been applied even in the neural MT context. For example, see equation (14) in page 12 of the following paper:\n\n\"Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation\" (https://arxiv.org/pdf/1609.08144v2.pdf)\n\nwhere the attention coverage term serves a role similar to stack decoding (though unlike stack decoding, the objective term is entirely re-defined, more similarly to DBS proposed in this work), and the length penalty may have an effect that indirectly promotes more informative (thus more likely diverse) responses.\n\nComparison against these existing algorithms would make the proposed work more complete.\n\nAlso, I have a mixed feeling about computing and reporting only *oracle* BLUE, CIDEr, METEOR, etc. Especially given how these oracle scores are very close to each other, and that developing a high performing ranking has not been addressed in this work (and that doing so must be not all that trivial), I\u2019m somewhat skeptical how much of DBS results make a practical difference.\n\n\n\n\n**** [Update after the author responses] ****\n\nThe authors addressed some of my concerns by adding a new baseline comparison against Wu et al. 2016. Thus I will raise my score to 6. \n\n\n\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models
["Ashwin K Vijayakumar", "Michael Cogswell", "Ramprasaath R. Selvaraju", "Qing Sun", "Stefan Lee", "David Crandall", "Dhruv Batra"]
Neural sequence models are widely used to model time-series data. Equally ubiquitous is the usage of beam search (BS) as an approximate inference algorithm to decode output sequences from these models. BS explores the search space in a greedy left-right fashion retaining only the top B candidates. This tends to result in sequences that differ only slightly from each other. Producing lists of nearly identical sequences is not only computationally wasteful but also typically fails to capture the inherent ambiguity of complex AI tasks. To overcome this problem, we propose Diverse Beam Search (DBS), an alternative to BS that decodes a list of diverse outputs by optimizing a diversity-augmented objective. We observe that our method not only improves diversity but also finds better top-1 solutions by controlling for the exploration and exploitation of the search space. Moreover, these gains are achieved with minimal computational or memory overhead compared to beam search. To demonstrate the broad applicability of our method, we present results on image captioning, machine translation, conversation and visual question generation using both standard quantitative metrics and qualitative human studies. We find that our method consistently outperforms BS and previously proposed techniques for diverse decoding from neural sequence models.
["Deep learning", "Computer vision", "Natural language processing"]
https://openreview.net/forum?id=HJV1zP5xg
https://openreview.net/pdf?id=HJV1zP5xg
https://openreview.net/forum?id=HJV1zP5xg&noteId=HJX-S4mVl
Under review as a conference paper at ICLR 2017DIVERSE BEAM SEARCH :DECODING DIVERSE SOLUTIONS FROMNEURAL SEQUENCE MODELSAshwin K Vijayakumar1, Michael Cogswell1, Ramprasaath R. Selvaraju1, Qing Sun1Stefan Lee1, David Crandall2& Dhruv Batra1{ashwinkv,cogswell,ram21,sunqing,steflee}@vt.edudjcran@indiana.edu ,dbatra@vt.edu1Department of Electrical and Computer Engineering,Virginia Tech, Blacksburg, V A, USA2School of Informatics and ComputingIndiana University, Bloomington, IN, USAABSTRACTNeural sequence models are widely used to model time-series data. Equally ubiq-uitous is the usage of beam search (BS) as an approximate inference algorithm todecode output sequences from these models. BS explores the search space in agreedy left-right fashion retaining only the top Bcandidates. This tends to resultin sequences that differ only slightly from each other. Producing lists of nearlyidentical sequences is not only computationally wasteful but also typically failsto capture the inherent ambiguity of complex AI tasks. To overcome this prob-lem, we propose Diverse Beam Search (DBS), an alternative to BS that decodes alist of diverse outputs by optimizing a diversity-augmented objective. We observethat our method not only improved diversity but also finds better top 1 solutionsby controlling for the exploration and exploitation of the search space. Moreover,these gains are achieved with minimal computational or memory overhead com-pared to beam search. To demonstrate the broad applicability of our method, wepresent results on image captioning, machine translation, conversation and visualquestion generation using both standard quantitative metrics and qualitative hu-man studies. We find that our method consistently outperforms BS and previouslyproposed techniques for diverse decoding from neural sequence models.1 I NTRODUCTIONIn the last few years, Recurrent Neural Networks (RNNs), Long Short-Term Memory networks(LSTMs) or more generally, neural sequence models have become the standard choice for modelingtime-series data for a wide range of applications including speech recognition (Graves et al., 2013),machine translation (Bahdanau et al., 2014), conversation modeling (Vinyals & Le, 2015), imageand video captioning (Vinyals et al., 2015; Venugopalan et al., 2015), and visual question answering(Antol et al., 2015). RNN based sequence generation architectures model the conditional probability,Pr(yjx)of an output sequence y= (y1;:::;yT)given an input x(possibly also a sequence); wherethe output tokens ytare from a finite vocabulary, V.Inference in RNNs. Maximum a Posteriori (MAP) inference for RNNs is the task of finding themost likely output sequence given the input. Since the number of possible sequences grows asjVjT, exact inference is NP-hard – so, approximate inference algorithms like beam search (BS) arecommonly employed. BS is a heuristic graph-search algorithm that maintains the Btop-scoringpartial sequences expanded in a greedy left-to-right fashion. Fig. 1 shows a sample BS search tree.Lack of Diversity in BS. 
Despite the widespread usage of BS, it has long been understood thatsolutions decoded by BS are generic and lacking in diversity (Finkel et al., 2006; Gimpel et al.,1Under review as a conference paper at ICLR 2017atrainsteamblacklocomotiveistravelingonenginetraintraincomingdownathetrainenginedowntracktraintrackstravelingisthewithneartrackdownthroughtracksawithtraintracksainatracksforestlushaantrainsteamantheisengineoldtrainaancomingtraintrainsteamtrainblacktravelingisenginelocomotivetrainanddownthroughtrainisiswhitetrainaistravelingcomingontracksforestdownthroughtheaBeam SearchDiverse Beam SearchA steam engine train travelling down train tracks. A steam engine train travelling down tracks. A steam engine train travelling through a forest. A steam engine train travelling through a lush green forest. A steam engine train travelling through a lush green countrysideA train on a train track with a sky background. A steam engine travelling down train tracks.A steam engine train travelling through a forest. An old steam engine train travelling down train tracks. An old steam engine train travelling through a forest. A black train is on the tracks in a wooded area. A black train is on the tracks in a rural area. Single engine train rolling down the tracks. A steam locomotive is blowing steam.A locomotive drives along the tracks amongst trees and bushes.An old fashion train with steam coming out of its pipe. A black and red train moving down a train track.An engine is coming down the train track.Ground T ruth CaptionsFigure 1: Comparing image captioning outputs decoded by BS (top) and our method, Diverse Beam Search(middle) – we notice that BS captions are near-duplicates with similar shared paths in the search tree andminor variations in the end. In contrast, DBS captions are significantly diverse and similar to the variability inhuman-generated ground truth captions (bottom).2013; Li et al., 2015; Li & Jurafsky, 2016). Comparing the human (bottom) and BS (top) generatedcaptions shown in Fig. 1 demonstrates this deficiency. While this behavior of BS is disadvantageousfor many reasons, we highlight the three most crucial ones here:i) The production of near-identical beams make BS a computationally wasteful algorithm, withessentially the same computation being repeated for no significant gain in performance.ii) Due to loss-evaluation mismatch (i.e. improvements in posterior-probabilities not necessarilycorresponding to improvements in task-specific metrics), it is common practice to deliberatelythrottle BS to become a poorer optimization algorithm by using reduced beam widths (Vinyalset al., 2015; Karpathy & Fei-Fei, 2015; Ferraro et al., 2016). This treatment of an optimizationalgorithm as a hyperparameter is not only intellectually dissatisfying but also has a significantpractical side-effect – it leads to the decoding of largely bland, generic, and “safe” outputs, e.g.always saying “I don’t know” in conversation models (Kannan et al., 2016).iii) Most importantly, lack of diversity in the decoded solutions is fundamentally crippling in AIproblems with significant ambiguity –e.g. there are multiple ways of describing an image orresponding in a conversation that are “correct” and it is important to capture this ambiguity byfinding several diverse plausible hypotheses.Overview and Contributions. To address these shortcomings, we propose Diverse Beam Search(DBS) – a general framework to decode a set of diverse sequences that can be used as an alternativeto BS. 
At a high level, DBS decodes diverse lists by dividing the given beam budget into groups andenforcing diversity between groups of beams. Drawing from recent work in the probabilistic graph-ical models literature on Diverse M-Best (DivMBest) MAP inference (Batra et al., 2012; Prasadet al., 2014; Kirillov et al., 2015), we optimize an objective that consists of two terms – the sequencelikelihood under the model and a dissimilarity term that encourages beams across groups to differ.This diversity-augmented model score is optimized in a doubly greedy manner – greedily optimizingalong both time (like BS) and groups (like DivMBest).Our primary technical contribution is Diverse Beam Search, a doubly greedy approximate infer-ence algorithm to decode diverse sequences from neural sequence models. We report results onimage captioning, machine translation, conversations and visual question generation to demonstratethe broad applicability of DBS. Results show that DBS produces consistent improvements on bothtask-specific oracle and other diversity-related metrics while maintaining run-time and memory re-quirements similar to BS. We also evaluate human preferences between image captions generated byBS or DBS. Further experiments show that DBS is robust over a wide range of its parameter valuesand is capable of encoding various notions of diversity through different forms of the diversty term.Overall, our algorithm is simple to implement and consistently outperforms BS in a wide rangeof domains without sacrificing efficiency. Our implementation is publicly available at https://github.com/ashwinkalyan/dbs . Additionally, we provide an interactive demonstrationof DBS for image captioning at http://dbs.cloudcv.org .2Under review as a conference paper at ICLR 20172 P RELIMINARIES : DECODING RNN S WITH BEAM SEARCHWe begin with a refresher on BS, before describing our generalization, Diverse Beam Search.For notational convenience, let [n]denote the set of natural numbers from 1tonand let v[n]=[v1;:::;vn]|index the first nelements of a vector v2Rm.The Decoding Problem. RNNs are trained to estimate the likelihood of sequences of tokens from afinite dictionaryVgiven an input x. The RNN updates its internal state and estimates the conditionalprobability distribution over the next output given the input and all previous output tokens. Wedenote the logarithm of this conditional probability distribution over all tokens at time tas(yt) =log Pr(ytjyt1;:::;y 1;x). To avoid notational clutter, we index ()with a single variable yt, butit should be clear that it depends on all previous outputs, y[t1]. We write the logprobabilityof a partial solution ( i.e. the sum of logprobabilities of all tokens decoded so far) as (y[t]) =P2[t](y). The decoding problem is then the task of finding a sequence ythat maximizes (y).As each output is conditioned on all the previous outputs, decoding the optimal length- Tsequence inthis setting can be viewed as MAP inference on a T-order Markov chain with nodes correspondingto output tokens at each time step. Not only does the size of the largest factor in such a graph growasjVjT, but computing these factors also requires repetitively evaluating the sequence model. Thus,approximate algorithms are employed and the most prevalent method is beam search (BS).Beam search is a heuristic search algorithm which stores the top Bhighest scoring partial candidatesat each time step; where Bis known as the beam width . 
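As a concrete illustration of this greedy top-B expansion (formalized in Eq. 1 below), the following minimal Python sketch scores partial sequences by their summed token log-probabilities and keeps the top B extensions at each step; the log_prob callable standing in for the trained model's per-token log-probability is an assumed placeholder for this sketch, not an interface defined in the paper.

def beam_search_step(beams, vocab, log_prob, B):
    # beams: list of (sequence, score) pairs, score = sum of token log-probs so far
    # log_prob(seq, tok): assumed model interface returning log Pr(tok | seq, x)
    candidates = []
    for seq, score in beams:
        for tok in vocab:
            candidates.append((seq + [tok], score + log_prob(seq, tok)))
    # retain only the B highest-scoring partial sequences
    candidates.sort(key=lambda c: c[1], reverse=True)
    return candidates[:B]

def beam_search(log_prob, vocab, B, T, bos="<s>"):
    beams = [([bos], 0.0)]
    for _ in range(T):
        beams = beam_search_step(beams, vocab, log_prob, B)
    return beams  # B sequences of length T, ranked by joint log-probability
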
Let us denote the set of Bsolutions heldby BS at the start of time tasY[t1]=fy1;[t1];:::;yB;[t1]g. At each time step, BS considers allpossible single token extensions of these beams given by the set Yt=Y[t1]V and retains the Bhighest scoring extensions. More formally, at each step the beams are updated asY[t]= argmaxy1;[t];:::;yB;[t]2YtXb2[B](yb;[t])s:t:yi;[t]6=yj;[t]8i6=j: (1)The above objective can be trivially maximized by sorting all BjVj members ofYtby their logprobabilities and selecting the top B. This process is repeated until time Tand the most likelysequence is selected by ranking the Bcomplete beams according to their logprobabilities.While this method allows for multiple sequences to be explored in parallel, most completions tend tostem from a single highly valued beam – resulting in outputs that are often only minor perturbationsof a single sequence (and typically only towards the end of the sequences).3 D IVERSE BEAM SEARCH : FORMULATION AND ALGORITHMTo overcome this, we augment the objective in Eq. 1 with a dissimilarity term (Y[t])that measuresthe diversity between candidate sequences, assigning a penalty (Y[t])[c]to each possible sequencecompletionc2V. Jointly optimizing this augmented objective for all Bcandidates at each time stepis intractable as the number of possible solutions grows with jVjB(easily 1060for typical languagemodeling settings). To avoid this, we opt for a greedy procedure that divides the beam budget BintoGgroups and promotes diversity between these groups. The approximation is doubly greedy– across both time and groups – so (Y[t])is constant with respect to other groups and we cansequentially optimize each group using regular BS. We now explain the specifics of our approach.Diverse Beam Search. As joint optimization is intractable, we form Gsmaller groups of beamsand optimize them sequentially. Consider a partition of the set of beams Y[t]intoGsmaller setsYg[t];g2[G]ofB0=B=G beams each (we pick Gto divideB). In the example shown in Fig. 2,B= 6beams are divided into G= 3differently colored groups containing B0= 2beams each.Considering diversity only between groups, reduces the search space at each time step; however,inference remains intractable. To enforce diversity efficiently, we consider a greedy strategy thatsteps each group forward in time sequentially while considering the others fixed. Each group canthen evaluate the diversity term with respect to the fixed extensions of previous groups, returning thesearch space to B0jVj . In the snapshot shown in Fig. 2, the third group is being stepped forwardat time stept= 4and the previous groups have already been completed. With this staggered beam-front, the diversity term of the third group can be computed using these completions. Here we use3Under review as a conference paper at ICLR 2017Group 1 Group 2 Group 3a flock of birds flying overa flock of birds flying inbirds flying over the waterbirds flying over an oceanseveral birds areseveral birds flyModify scores to include diversity:(`the0) +(`birds0;`the0;`an0)[`the0]...(`over0) +(`birds0;`the0;`an0)[`over0]??a flock of birds flying over the oceana flock of birds flying over a beachbirds flying over the water in the sunbirds flying the water near a mountainseveral birds are flying over a body of waterseveral birds flying over a body of watertimetFigure 2: Diverse beam search operates left-to-right through time and top to bottom through groups. 
Diversity between groups is combined with joint log-probabilities, allowing continuations to be found efficiently. The resulting outputs are more diverse than for standard approaches.
hamming diversity, which adds a diversity penalty of -1 for each appearance of a possible extension word at the same time step in a previous group – ‘birds’, ‘the’, and ‘an’ in the example – and 0 to all other possible completions. We discuss other forms for the diversity function in Section 5.1.
As we optimize each group with the previous groups fixed, extending group g at time t amounts to a standard BS using dissimilarity-augmented log-probabilities and can be written as:

Y^g_{[t]} = \argmax_{y^g_{1,[t]}, \ldots, y^g_{B',[t]} \in \mathcal{Y}^g_t} \sum_{b \in [B']} \Theta(y^g_{b,[t]}) + \lambda \, \Delta\Big(\bigcup_{h=1}^{g-1} Y^h_{[t]}\Big)\big[y^g_{b,t}\big]    (2)
s.t. \lambda \ge 0, \quad y^g_{i,[t]} \ne y^g_{j,[t]} \;\; \forall i \ne j

where \lambda is a scalar controlling the strength of the diversity term. The full procedure to obtain diverse sequences using our method, Diverse Beam Search (DBS), is presented in Algorithm 1. It consists of two main steps for each group at each time step –
1) augmenting the log-probabilities of each possible extension with the diversity term computed from previously advanced groups (Algorithm 1, Line 5) and,
2) running one step of a smaller BS with B' beams using the augmented log-probabilities to extend the current group (Algorithm 1, Line 6).
Note that the first group (g = 1) is not ‘conditioned’ on other groups during optimization, so our method is guaranteed to perform at least as well as a beam search of size B'.

Algorithm 1: Diverse Beam Search
1  Perform a diverse beam search with G groups using a beam width of B
2  for t = 1, \ldots, T do
       // perform one step of beam search for first group without diversity
3      Y^1_{[t]} \leftarrow \argmax_{(y^1_{1,[t]}, \ldots, y^1_{B',[t]})} \sum_{b \in [B']} \Theta(y^1_{b,[t]})
4      for g = 2, \ldots, G do
           // augment log-probabilities with diversity penalty
5          \Theta(y^g_{b,[t]}) \leftarrow \Theta(y^g_{b,[t]}) + \lambda \Delta\big(\bigcup_{h=1}^{g-1} Y^h_{[t]}\big)[y^g_{b,t}], \; b \in [B'], \; y^g_{b,[t]} \in \mathcal{Y}^g_t \text{ and } \lambda > 0
           // perform one step of beam search for the group
6          Y^g_{[t]} \leftarrow \argmax_{y^g_{1,[t]}, \ldots, y^g_{B',[t]}} \sum_{b \in [B']} \Theta(y^g_{b,[t]}) \quad \text{s.t.} \; y_{i,[t]} \ne y_{j,[t]} \; \forall i \ne j
7  Return set of B solutions, Y_{[T]} = \bigcup_{g=1}^{G} Y^g_{[T]}

4 RELATED WORK
Diverse M-Best Lists. The task of generating diverse structured outputs from probabilistic models has been studied extensively (Park & Ramanan, 2011; Batra et al., 2012; Kirillov et al., 2015; Prasad et al., 2014). Batra et al. (2012) formalized this task for Markov Random Fields as the DivMBest problem and presented a greedy approach which solves for outputs iteratively, conditioning on previous solutions to induce diversity. Kirillov et al. (2015) show how these solutions can be found jointly (non-greedily) for certain kinds of energy functions. The techniques developed by Kirillov are not directly applicable to decoding from RNNs, which do not satisfy the assumptions made.
Most related to our proposed approach is the work of Gimpel et al. (2013), who applied DivMBest to machine translation using beam search as a black-box inference algorithm. Specifically, in this approach, DivMBest knows nothing about the inner workings of BS and simply makes B sequential calls to BS to generate B diverse solutions. This approach is extremely wasteful because BS is called B times, run from scratch every time, and even though each call to BS produces B solutions, only one solution is kept by DivMBest. In contrast, DBS avoids these shortcomings by integrating diversity within BS such that no beams are discarded. By running multiple beam searches in parallel and at staggered time offsets, we obtain large time savings making our method comparable to a single run of classical BS.
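To make Algorithm 1 above concrete, here is a minimal Python sketch of the doubly greedy update with hamming diversity; the log_prob model interface and the exact scheduling of groups within a time step are simplifying assumptions made for illustration, not the authors' released implementation (which is available at https://github.com/ashwinkalyan/dbs).

from collections import Counter

def dbs_group_step(group_beams, vocab, log_prob, used, lam, b_prime):
    # one BS step for one group with diversity-augmented log-probabilities:
    # Theta(y) + lambda * Delta(.)[y], with hamming diversity Delta = -1 per
    # prior use of the token at this time step by an earlier group
    candidates = []
    for seq, score in group_beams:
        for tok in vocab:
            aug = log_prob(seq, tok) - lam * used[tok]
            candidates.append((seq + [tok], score + aug))
    candidates.sort(key=lambda c: c[1], reverse=True)
    return candidates[:b_prime]

def diverse_beam_search(log_prob, vocab, B, G, T, lam=0.5, bos="<s>"):
    assert B % G == 0
    b_prime = B // G                                  # B' = B/G beams per group
    groups = [[([bos], 0.0)] for _ in range(G)]
    for _ in range(T):
        used = Counter()                              # tokens chosen by earlier groups at this step
        for g in range(G):                            # group 1 sees no penalty, as in Algorithm 1
            groups[g] = dbs_group_step(groups[g], vocab, log_prob, used, lam, b_prime)
            for seq, _ in groups[g]:
                used[seq[-1]] += 1
    return [beam for grp in groups for beam in grp]   # B solutions, the union over groups

As in the paper, setting G = 1 in this sketch recovers the classical BS update, while G = B gives the maximum-exploration setting used to report most results.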
One potential disadvantage of our method w.r.t. Gimpel et al. (2013) isthat sentence-level diversity metrics cannot be incorporated in DBS since no group is complete whendiversity is encouraged. However, as observed empirically by us and Li et al. (2015), initial wordstend to disproportionally impact the diversity of the resultant sequences – suggesting that later wordsmay not be important for diverse inference.Diverse Decoding for RNNs. Efforts have been made by Li et al. (2015) and Li & Jurafsky (2016)to produce diverse decodings from recurrent models for conversation modeling and machine trans-lation. Both of these works propose new heuristics for creating diverse M-Best lists and employmutual information to re-rank lists of sequences. The latter achieves a goal separate from ours,which is simply to re-rank diverse lists.Li & Jurafsky (2016) proposes a BS diversification heuristic that discourages beams from sharingcommon roots, implicitly resulting in diverse lists. Introducing diversity through a modified objec-tive (as in DBS) rather than via a procedural heuristic provides easier generalization to incorporatedifferent notions of diversity and control the exploration-exploitation trade-off as detailed in Section5.1. Furthermore, we find that DBS outperforms the method of Li & Jurafsky (2016).Li et al. (2015) introduced a novel decoding objective that maximizes mutual information betweeninputs and predicted outputs to penalize generic sequences. This operates on a principle orthogo-nal and complementary to DBS and Li & Jurafsky (2016). It works by penalizing utterances thatare generally more frequent (diversity independent of input) rather than penalizing utterances thatare similar to other utterances produced for the same input (diversity conditioned on input). Fur-thermore, the input-independent approach requires training a new language model for the targetlanguage while DBS just requires a diversity function . Combination of these complementarytechniques is left as interesting future work.In other recent work, Wu et al. (2016) modify the beam search objective by introducing length-normalization to favor longer sequences and a coverage penalty that favors sequences that accountfor the complete input sequence. While the coverage term does not generalize to all neural sequencemodels, the length-normalization term can be implemented by modifying the joint- log-probabilityof each sequence. Although the goal of this method is not to produce diverse lists and hence notdirectly comparable, it is a complementary technique that can be used in conjunction with our diversedecoding method.5 E XPERIMENTSIn this section, we evaluate our approach on image captioning, machine translation, conversation andvisual question generation tasks to demonstrate both its effectiveness against baselines and its gen-eral applicability to any inference currently supported by beam search. We also analyze the effectsof DBS parameters, explore human preferences for diversity, and discuss diversity’s importance inexplaining complex images. We first explain the baselines and evaluations used in this paper.Baselines & Metrics. Apart from classical beam search, we compare DBS with the diverse decodingmethod proposed in Li & Jurafsky (2016). We also compare against two other complementarydecoding techniques proposed in Li et al. (2015) and Wu et al. (2016). Note that these two techniquesare not directly comparable with DBS since the goal is not to produce diverse lists. 
We now providea brief description of the comparisons mentioned:- Li & Jurafsky (2016): modify BS by introducing an intra-sibling rank. For each partial solution,the set ofjVjbeam extensions are sorted and assigned intra-sibling ranks k2[jVj]in order5Under review as a conference paper at ICLR 2017of decreasing log probabilities, t(yt). The log probability of an extension is then reduced inproportion to its rank, and continuations are re-sorted under these modified log probabilities toselect the top B‘diverse’ beam extensions.- Li et al. (2015): train an additional unconditioned target sequence model U(y)and perform BSdecoding on an augmented objective P(yjx)U(y), penalizing input-independent decodings.- Wu et al. (2016) modify the beam-search objective by introducing length-normalization that fa-vors longer sequences. The joint log-probability of completed sequences is divided by a factor,(5 +jyj)=(5 + 1), where2[0;1].We compare to our own implementations of these methods as none are publicly available. Both Li& Jurafsky (2016) and Li et al. (2015) develop and use re-rankers to pick a single solution fromthe generated lists. Since we are interested in evaluating the quality of the generated lists and inisolating the gains due to diverse decoding, we do not implement any re-rankers, simply sorting bylog-probability.We evaluate the performance of the generated lists using the following two metrics:-Oracle Accuracy : Oracle or top kaccuracy w.r.t. some task-specific metric, such as BLEU (Pap-ineni et al., 2002) or SPICE (Anderson et al., 2016), is the maximum value of the metric achievedover a list of kpotential solutions. Oracle accuracy is an upper bound on the performance of anyre-ranking strategy and thus measures the maximum potential of a set of outputs.-Diversity Statistics : We count the number of distinct n-grams present in the list of generatedoutputs. Similar to Li et al. (2015), we divide these counts by the total number of words generatedto bias against long sentences.Simultaneous improvements in both metrics indicate that output sequences have increased diversitywithout sacrificing fluency and correctness with respect to target tasks.5.1 S ENSITIVITY ANALYSIS AND EFFECT OF DIVERSITY FUNCTIONSHere we discuss the impact of the number of groups, strength of diversity , and various forms ofdiversity for language models. Note that the parameters of DBS (and other baselines) were tunedon a held-out validation set for each experiment. The supplement provides further discussion andexperimental details.Number of Groups ( G).SettingG=Ballows for the maximum exploration of the search space,while setting G=1reduces DBS to BS, resulting in increased exploitation of the search-space aroundthe 1-best decoding. Empirically, we find that maximum exploration correlates with improved oracleaccuracy and hence use G=Bto report results unless mentioned otherwise. See the supplement fora comparison and more details.Diversity Strength ( ).The diversity strength specifies the trade-off between the model score anddiversity terms. As expected, we find that a higher value of produces a more diverse list; however,very large values of can overpower model score and result in grammatically incorrect outputs. Wesetvia grid search over a range of values to maximize oracle accuracies achieved on the validationset. 
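For concreteness, a small Python sketch of the two list-level metrics described above follows; the sentence-level metric (e.g., sentence BLEU or SPICE) is assumed to be supplied by the caller, and the helper names are illustrative rather than taken from the paper.

def oracle_accuracy(candidates, reference, metric, k):
    # oracle / top-k accuracy: best metric value over the first k candidates
    return max(metric(c, reference) for c in candidates[:k])

def distinct_ngrams(candidates, n):
    # number of distinct n-grams in the list of outputs, divided by the total
    # number of words generated (the normalization used to bias against long sentences)
    ngrams, total_words = set(), 0
    for sent in candidates:
        toks = sent.split()
        total_words += len(toks)
        ngrams.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    return len(ngrams) / max(total_words, 1)
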
We find a wide range of values (0.2 to 0.8) work well for most tasks and datasets.Choice of Diversity Function ( ).In Section 3, we defined ()as a function over a set of partialsolutions that outputs a vector of dissimilarity scores for all possible beam completions. Assumingthat each of the previous groups influences the completion of the current group independently, wecan simplify (Sg1h=1Yh[t])as the sum of each group’s contributions asPg1h=1(Yh[t]). In Section3, we illustrated a simple hamming diversity of this form that penalizes selection of tokens propor-tionally to the number of time it was used in previous groups. However, this factorized diversityterm can take various forms in our model – with hamming diversity being the simplest. For lan-guage models, we study the effect of using cumulative (i.e. considering all past time steps), n-gramand neural embedding based diversity functions. Each of these forms encode differing notions ofdiversity and result in DBS outperforming BS. We find simple hamming distance to be effective andreport results based on this diversity measure unless otherwise specified. More details about theseforms of the diversity term are provided in the supplementary.6Under review as a conference paper at ICLR 20175.2 I MAGE CAPTIONINGDataset and Models. We evaluate on two datasets – COCO (Lin et al., 2014) and PASCAL-50S(Vedantam et al., 2015). We use the public splits as in Karpathy & Fei-Fei (2015) for COCO.PASCAL-50S is used only for testing (with 200 held out images used to tune hyperparameters). Wetrain a captioning model (Vinyals et al., 2015) using the neuraltalk21code repository.Results. Table 1 shows Oracle (top k) SPICE for different values of k. DBS consistently outper-forms BS and Li & Jurafsky (2016) on both datasets. We observe that gains on PASCAL-50S aremore pronounced (7.14% and 4.65% SPICE@20 improvements over BS and Li & Jurafsky (2016))than COCO. This suggests diverse predictions are especially advantageous when there is a mismatchbetween training and testing sets, implying DBS may be better suited for real-world applications.Table 1 also shows the number of distinct n-grams produced by different techniques. Our methodproduces significantly more distinct n-grams (almost 300% increase in the number of 4-grams pro-duced) as compared to BS. We also note that our method tends to produce slightly longer captionscompared on average. Moreover, on the PASCAL-50S test split we observe that DBS finds morelikely top-1 solutions on average – DBS obtains an average maximum logprobability of -6.53 op-posed to -6.91 found by BS of the same beam width. This empirical evidence suggests that usingDBS as a replacement to BS may lead to lower inference approximation error.Table 1: Oracle accuracy and distinct n-grams on COCO and PASCAL-50S datasets for image captioning atB= 20 . While we report SPICE, we observe similar trends in other metrics (reported in supplement).Dataset Method Oracle Accuracy (SPICE) Diversity Statistics@1 @5 @10 @20 distinct-1 distinct-2 distinct-3 distinct-4Beam Search 4.933 7.046 7.949 8.747 0.12 0.57 1.35 2.50Li & Jurafsky (2016) 5.083 7.248 8.096 8.917 0.15 0.97 2.43 5.31PASCAL-50S DBS 5.357 7.357 8.269 9.293 0.18 1.26 3.67 7.33Wu et al. (2016) 5.301 7.322 8.236 8.832 0.16 1.10 3.16 6.45Li et al. (2015) 5.129 7.175 8.168 8.560 0.13 1.15 3.58 8.42Beam Search 16.278 22.962 25.145 27.343 0.40 1.51 3.25 5.67Li & Jurafsky (2016) 16.351 22.715 25.234 27.591 0.54 2.40 5.69 8.94COCO DBS 16.783 23.081 26.088 28.096 0.56 2.96 7.38 13.44Wu et al. 
(2016) 16.642 22.643 25.437 27.783 0.54 2.42 6.01 7.08Li et al. (2015) 16.749 23.271 26.104 27.946 0.42 1.37 3.46 6.10Human Studies. To evaluate human preference between captions generated by DBS and BS, weperform a human study via Amazon Mechanical Turk using all 1000 images of PASCAL-50S. Foreach image, both DBS and standard BS captions are shown to 5 different users. They are then asked–“Which of the two robots understands the image better?” In this forced-choice test, DBS captionswere preferred over BS 60% of the time by human annotators.Is diversity always needed? While these results show that diverse outputs are important for systemsthat interact with users, is diversity always beneficial? While images with many objects ( e.g., a parkor a living room) can be described in multiple ways, the same is not true when there are few objects(e.g., a close up of a cat or a selfie). This notion is studied by Ionescu et al. (2016), which definesa “difficulty score”: the human response time for solving a visual search task. On the PASCAL-50S dataset, we observe a positive correlation ( = 0:73) between difficulty scores and humanspreferring DBS to BS. Moreover, while DBS is generally preferred by humans for ‘difficult’ images,both are about equally preferred on ‘easier’ images. Details are provided in the supplement.5.3 M ACHINE TRANSLATIONWe use the WMT’14 dataset containing 4.5M sentences to train our machine translation models.We train stacking LSTM models as detailed in Luong et al. (2015), consisting of 4 layers and 1024-dimensional hidden states. While decoding sentences, we employ the same strategy to replace UNKtokens. We train our models using the publicly available seq2seq-attn2code repository. We re-port results on news-test-2013 andnews-test-2014 and use the news-test-2012 to tune the parametersof DBS. We use sentence level BLEU scores to compute oracle metrics and report distinct n-grams1https://github.com/karpathy/neuraltalk22https://github.com/harvardnlp/seq2seq-attn7Under review as a conference paper at ICLR 2017similar to image captioning. Results are shown in Table 2 and we again find that DBS consistentlyoutperforms all baselines.Table 2: Quantitative results on English-German translation on the newstest-2013 and newstest-2014 datasetscombined (at B= 20 ).Method Oracle Accuracy (BLEU-4) Diversity Statistics@1 @5 @10 @20 distinct-1 distinct-2 distinct-3 distinct-4Beam Search 20.5 22.4 23.8 24.2 0.04 0.75 2.10 3.23Li & Jurafsky (2016) 20.7 22.6 24.0 24.3 0.04 0.81 2.92 4.61DBS 20.8 22.9 24.4 24.8 0.06 0.95 3.67 5.54Wu et al. (2016) 20.6 22.6 24.3 24.6 0.05 0.88 2.83 4.50Li et al. (2015) 20.7 23.1 24.4 24.6 0.04 0.86 2.76 4.315.4 D IALOG GENERATIONDialog generation is a task that is inherently diverse as there are multiple valid ways to respond to astatement. We train a seq2seq model consisting of LSTMs as in Vinyals & Le (2015) on the CornellMovie Dialogs Corpus (Danescu-Niculescu-Mizil & Lee, 2011) using the neuralconvo3reposi-tory. The training dataset consists of 222,182 conversational exchanges between movie characters.Since automatic evaluation of dialog generation responses is an open research problem with exist-ing metrics being poorly correlated with human judgement (Liu et al., 2016), we show qualitativeresults to demonstrate the effectiveness of DBS. Table 3 compares BS and DBS at B=3.Table 3: Qualitative examples of dialog generation comparing best-first search, BS and DBSPrompt Beam Search Diverse Beam SearchWhy do you want to be the president?I don’t know. 
I don’t know.I don’t know... Because you know!I don’t know, but I am the best Because, I have another life.It’s such a beautiful day!I know! I know.It’s true. Well...Yeah. That’s not true, honey.What’s your name?Jacob. Jacob.Berger, my man. Berger, darlingBerger, Thomas. My mother used to hum that to me.5.5 V ISUAL QUESTION GENERATIONWe also report results on Visual Question Generation (VQG) (Mostafazadeh et al., 2016), wherea model is trained to produce questions about an image . Generating visually focused questions isinteresting because it requires reasoning about multiple problems that are central to vision – e.g.,object attributes, relationships between objects, and natural language. Furthermore, many questionscould make sense for one image, so it is important that lists of generated questions be diverse.We use the VQA dataset (Antol et al., 2015) to train a model similar to image captioning architec-tures. Instead of captions, the training set now consists of 3 questions per image. Similar to previousresults, using beam search to sample outputs results in similarly worded questions (see Fig. 3) andDBS brings out new details captured by the model. Counting the number of types of questions gen-erated (as defined by Antol et al. (2015)) allows us to measure this diversity. We observe that thenumber of question types generated per image increases from 2:3for BS to 3:7for DBS (atB= 6).6 C ONCLUSIONBeam search is widely a used approximate inference algorithm for decoding sequences from neuralsequence models; however, it suffers from a lack of diversity. Producing multiple highly similarand generic outputs is not only wasteful in terms of computation but also detrimental for tasks with3https://github.com/macournoyer/neuralconvo8Under review as a conference paper at ICLR 2017Input Image Beam Search Diverse Beam SearchWhat sport is this? What color is the man’s shirt?What sport is being played? What is the man holding?What color is the man’s shirt? What is the man wearing on his head?What color is the ball? Is the man wearing a helmetWhat is the man wearing? What is the man in the white shirt doing?What color is the man’s shorts? Is the man in the background wearing a helmet?How many zebras are there? How many zebras are there?How many zebras are in the photo? How many zebras are in the photo?How many zebras are in the picture? What is the zebra doing?How many animals are there? What color is the grass?How many zebras are shown? Is the zebra eating?What is the zebra doing? Is the zebra in the wild?Figure 3: Qualitative results on Visual Question Generation. DBS generates questions that are non-generic andbelong to different question types.inherent ambiguity like many involving language. In this work, we modify Beam Search with adiversity-augmented sequence decoding objective to produce Diverse Beam Search . We develop a‘doubly greedy’ approximate algorithm to minimize this objective and produce diverse sequencedecodings. Our method consistently outperforms beam search and other baselines across all ourexperiments without extra computation ortask-specific overhead . DBS is task-agnostic and can beapplied to any case where BS is used, which we demonstrate in multiple domains. Our implementa-tion available at https://github.com/ashwinkalyan/dbs .REFERENCESPeter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. Spice: Semantic proposi-tional image caption evaluation. In Proceedings of European Conference on Computer Vision(ECCV) , 2016. 
6Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zit-nick, and Devi Parikh. VQA: Visual question answering. In Proceedings of IEEE Conference onComputer Vision and Pattern Recognition (CVPR) , pp. 2425–2433, 2015. 1, 8Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointlylearning to align and translate. Proceedings of the International Conference on Learning Repre-sentations (ICLR) , 2014. 1Dhruv Batra, Payman Yadollahpour, Abner Guzman-Rivera, and Gregory Shakhnarovich. DiverseM-Best Solutions in Markov Random Fields. In Proceedings of European Conference on Com-puter Vision (ECCV) , 2012. 2, 4Cristian Danescu-Niculescu-Mizil and Lillian Lee. Chameleons in imagined conversations: A newapproach to understanding coordination of linguistic style in dialogs. In Proceedings of the Work-shop on Cognitive Modeling and Computational Linguistics, ACL 2011 , 2011. 8Francis Ferraro, Ishan Mostafazadeh, Nasrinand Misra, Aishwarya Agrawal, Jacob Devlin, RossGirshick, Xiadong He, Pushmeet Kohli, Dhruv Batra, and C Lawrence Zitnick. Visual story-telling. Proceedings of the Conference of the North American Chapter of the Association forComputational Linguistics – Human Language Technologies (NAACL HLT) , 2016. 2Jenny Rose Finkel, Christopher D Manning, and Andrew Y Ng. Solving the problem of cascadingerrors: Approximate bayesian inference for linguistic annotation pipelines. In Proceedings ofthe Conference on Empirical Methods in Natural Language Processing (EMNLP) , pp. 618–626,2006. 1K. Gimpel, D. Batra, C. Dyer, and G. Shakhnarovich. A systematic exploration of diversity in ma-chine translation. In Proceedings of the Conference on Empirical Methods in Natural LanguageProcessing (EMNLP) , 2013. 1, 5, 12Alex Graves, Abdel-rahman Mohamed, and Geoffrey E. Hinton. Speech recognition with deeprecurrent neural networks. abs/1303.5778, 2013. 19Under review as a conference paper at ICLR 2017Radu Tudor Ionescu, Bogdan Alexe, Marius Leordeanu, Marius Popescu, Dim Papadopoulos, andVittorio Ferrari. How hard can it be? Estimating the difficulty of visual search in an image. InProceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) , 2016. 7Anjuli Kannan, Karol Kurach, Sujith Ravi, Tobias Kaufmann, Andrew Tomkins, Balint Miklos,Greg Corrado, László Lukács, Marina Ganea, Peter Young, et al. Smart reply: Automated reep-onse suggestion for email. In Proceedings of the ACM SIGKDD Conference on Knowledge Dis-covery and Data Mining (KDD) , 2016. 2Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descrip-tions. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) ,2015. 2, 7Alexander Kirillov, Bogdan Savchynskyy, Dmitrij Schlesinger, Dmitry Vetrov, and Carsten Rother.Inferring m-best diverse labelings in a single one. In Proceedings of IEEE Conference on Com-puter Vision and Pattern Recognition (CVPR) , 2015. 2, 4Jiwei Li and Dan Jurafsky. Mutual information and diverse decoding improve neural machine trans-lation. arXiv preprint arXiv:1601.00372 , 2016. 2, 5, 6, 7, 8, 13, 14Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. A diversity-promoting objec-tive function for neural conversation models. Proceedings of the Conference of the North Amer-ican Chapter of the Association for Computational Linguistics – Human Language Technologies(NAACL HLT) , 2015. 
2, 5, 6, 7, 8, 13, 14Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, PiotrDollar, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context, 2014. 7Chia-Wei Liu, Ryan Lowe, Iulian Vlad Serban, Michael Noseworthy, Laurent Charlin, and JoellePineau. How NOT to evaluate your dialogue system: An empirical study of unsupervised evalua-tion metrics for dialogue response generation. 2016. URL http://arxiv.org/abs/1603.08023 . 8Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025 , 2015. 7Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed repre-sentations of words and phrases and their compositionality. In Advances in Neural InformationProcessing Systems (NIPS) , 2013. 12Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Margaret Mitchell, Xiaodong He, and Lucy Van-derwende. Generating natural questions about an image. Proceedings of the Annual Meeting onAssociation for Computational Linguistics (ACL) , 2016. 8Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automaticevaluation of machine translation. In Proceedings of the Annual Meeting on Association forComputational Linguistics (ACL) , 2002. 6Dennis Park and Deva Ramanan. N-best maximal decoders for part models. In Proceedings of IEEEInternational Conference on Computer Vision (ICCV) , 2011. 4Adarsh Prasad, Stefanie Jegelka, and Dhruv Batra. Submodular meets structured: Finding diversesubsets in exponentially-large structured item sets. In Advances in Neural Information ProcessingSystems (NIPS) , 2014. 2, 4Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based imagedescription evaluation. In Proceedings of IEEE Conference on Computer Vision and PatternRecognition (CVPR) , 2015. 7Subhashini Venugopalan, Marcus Rohrbach, Jeffrey Donahue, Raymond Mooney, Trevor Darrell,and Kate Saenko. Sequence to sequence-video to text. In Proceedings of IEEE Conference onComputer Vision and Pattern Recognition (CVPR) , pp. 4534–4542, 2015. 110Under review as a conference paper at ICLR 2017Oriol Vinyals and Quoc Le. A neural conversational model. arXiv preprint arXiv:1506.05869 , 2015.1, 8Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neuralimage caption generator. In Proceedings of IEEE Conference on Computer Vision and PatternRecognition (CVPR) , 2015. 1, 2, 7Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey,Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google’s neural machine trans-lation system: Bridging the gap between human and machine translation. arXiv preprintarXiv:1609.08144 , 2016. 5, 6, 7, 8, 13, 1411Under review as a conference paper at ICLR 2017APPENDIXSENSIVITY STUDIESNumber of Groups. Fig. 4 presents snapshots of the transition from BS to DBS at B= 6 andG=f1;3;6g. As beam width moves from 1 to G, the exploration of the method increases resultingin more diverse lists.Figure 4: Effect of increasing the number of groups G. The beams that belong to the same group are coloredsimilarly. Recall that diversity is only enforced across groups such that G= 1corresponds to classical BS.Diversity Strength. As noted in Section 5.1, our method is robust to a wide range of values of thediversity strength ( ). Fig. 5a shows a grid search of for image-captioning on the PASCAL-50Sdataset.Choice of Diversity Function. 
The diversity function can take various forms ranging from sim-ple hamming diversity to neural embedding based diversity. We discuss some forms for languagemodelling below:-Hamming Diversity. This form penalizes the selection of tokens used in previous groupsproportional to the number of times it was selected before.-Cumulative Diversity. Once two sequences have diverged sufficiently, it seems unnecessary andperhaps harmful to restrict that they cannot use the same words at the same time. To encodethis ‘backing-off’ of the diversity penalty we introduce cumulative diversity which keeps acount of identical words used at every time step, indicative of overall dissimilarity. Specifically,(Yh[t])[yg[t]] = expf(P2tPb2B0I[yhb;6=ygb;])=gwhere is a temperature parameter control-ling the strength of the cumulative diversity term and I[]is the indicator function.-n-gram Diversity. The current group is penalized for producing the same n-grams as previousgroups, regardless of alignment in time – similar to Gimpel et al. (2013). This is proportional tothe number of times each n-gram in a candidate occurred in previous groups. Unlike hammingdiversity, n-grams capture higher order structures in the sequences.-Neural-embedding Diversity. While all the previous diversity functions discussed above performexact matches, neural embeddings such as word2vec (Mikolov et al., 2013) can penalize semanti-cally similar words like synonyms. This is incorporated in each of the previous diversity functionsby replacing the hamming similarity with a soft version obtained by computing the cosine simi-larity between word2vec representations. When using with n-gram diversity, the representation ofthe n-gram is obtained by summing the vectors of the constituent words.Each of these various forms encode different notions of diversity. Hamming diversity ensures dif-ferent words are used at different times, but can be circumvented by small changes in sequencealignment. While n-gram diversity captures higher order statistics, it ignores sentence alignment.Neural-embedding based encodings can be seen as a semantic blurring of either the hamming orn-gram metrics, with word2vec representation similarity propagating diversity penalties not only toexact matches but also to close synonyms. Fig. 5b shows the oracle performace of various forms ofthe diversity function described in Section 5.1. We find that using any of the above functions helpoutperform BS in the tasks we examine; hamming diversity achieves the best oracle performancedespite its simplicity.IMAGE CAPTIONING EVALUATIONWhile we report oracle SPICE values in the paper, our method consistently outperforms base-lines and classical BS on other standard metrics such as CIDEr (Table 4), METEOR (Table 5) andROUGE (Table 6). We provide these additional results in this section.12Under review as a conference paper at ICLR 2017(a) Grid search of diversity strength parameter (b) Effect of multiple forms for the diversity functionFigure 5: Fig. 5a shows the results of a grid search of the diversity strength ( ) parameter of DBS on thevalidation split of PASCAL 50S dataset. We observe that it is robust for a wide range of values. Fig. 5bcompares the performance of multiple forms for the diversity function ( ). 
While naïve diversity performs thebest, other forms are comparable while being better than BS.Table 4: CIDEr Oracle accuracy on COCO and PASCAL-50S datasets for image captioning at B= 20 .Dataset Method Oracle Accuracy (CIDEr)@1 @5 @10 @20Beam Search 53.79 83.94 96.70 107.63Li & Jurafsky (2016) 54.61 85.21 99.80 110.64PASCAL-50S DBS 57.82 89.38 103.75 113.43Wu et al. (2016) 47.77 72.12 84.64 105.66Li et al. (2015) 49.80 81.35 96.87 107.37Beam Search 87.27 121.74 133.46 140.98Li & Jurafsky (2016) 91.42 111.33 116.94 119.14COCO DBS 86.88 123.38 135.68 142.88Wu et al. (2016) 87.54 122.06 133.21 139.43Li et al. (2015) 88.18 124.20 138.65 150.06Table 5: METEOR Oracle accuracy on COCO and PASCAL-50S datasets for image captioning at B= 20 .Dataset Method Oracle Accuracy (METEOR)@1 @5 @10 @20Beam Search 12.24 16.74 19.14 21.22Li & Jurafsky (2016) 13.52 17.65 19.91 21.76PASCAL-50S DBS 13.71 18.45 20.67 22.83Wu et al. (2016) 13.34 17.20 18.98 21.13Li et al. (2015) 13.04 17.92 19.73 22.32Beam Search 24.81 28.56 30.59 31.87Li & Jurafsky (2016) 24.88 29.10 31.44 33.56COCO DBS 25.04 29.67 33.25 35.42Wu et al. (2016) 24.82 28.92 31.53 34.14Li et al. (2015) 24.93 30.11 32.34 34.88Modified SPICE evaluation. To measure both the quality and the diversity of the generated cap-tions, we compute SPICE-score by comparing the graph union of all the generated hypotheses withthe ground truth scene graph. This measure rewards all the relevant relations decoded as against ora-cle accuracy that compares to relevant relations present only in the top-scoring caption. We observethat DBS outperforms both baselines under this measure with a score of 18.345 as against a score of16.988 (beam search) and 17.452 (Li & Jurafsky, 2016).13Under review as a conference paper at ICLR 2017Table 6: ROUGE Oracle accuracy on COCO and PASCAL-50S datasets for image captioning at B= 20 .Dataset Method Oracle Accuracy (ROUGE-L)@1 @5 @10 @20Beam Search 45.23 56.12 59.61 62.04Li & Jurafsky (2016) 46.21 56.17 60.15 62.95PASCAL-50S DBS 46.24 56.90 60.35 63.02Wu et al. (2016) 43.73 52.29 56.49 61.65Li et al. (2015) 44.12 54.67 57.34 60.11Beam Search 52.46 58.43 62.56 65.14Li & Jurafsky (2016) 52.87 59.89 63.45 65.42COCO DBS 53.04 60.89 64.24 67.72Wu et al. (2016) 52.13 58.26 62.89 65.77Li et al. (2015) 53.10 59.32 63.04 66.19HUMAN STUDIESFor image-captioning, we conduct a human preference study between BS and DBS captions asexplained in Section 5. A screen shot of the interface used to collect human preferences for captionsgenerated using DBS and BS is presented in Fig. 6. The lists were shuffled to guard the task frombeing gamed by a turker.Table 7: Frequency table for image difficulty and human preference for DBS captions on PASCAL50S datasetdifficulty score # images % images DBSbin range was preffered 481 50.51%[;+] 409 69.92%+ 110 83.63%As mentioned in Section 5, we observe that difficulty score of an image and human preference forDBS captions are positively correlated. The dataset contains more images that are less difficultyand so, we analyze the correlation by dividing the data into three bins. For each bin, we report the% of images for which DBS captions were preferred after a majority vote ( i.e. at least 3/5 turkersvoted in favor of DBS) in Table 7. At low difficulty scores consisting mostly of iconic images – onemight expect that BS would be preferred more often than chance. However, mismatch between thestatistics of the training and testing data results in a better performance of DBS. Some examples forthis case are provided in Fig. 
7. More general qualitative examples are provided in Fig. 8.DISCUSSIONAre longer sentences better? Many recent works propose a scoring or a ranking objective thatdepends on the sequence length. These favor longer sequences, reasoning that they tend to havemore details and resulting in improved accuracies. We measure the correlation between length ofa sequence and its accuracy (here, SPICE) and observe insignificant correlation between SPICEand sequence length. On the PASCAL-50S dataset, we find that BS and DBS have are negativelycorrelated (=0:003and=0:015respectively), while (Li & Jurafsky, 2016) is correlatedpositively (= 0:002). Length is not correlated with performance in this case.Efficient utilization of beam budget. In this experiment, we emperically show that DBS makesefficient use of the beam budget in exploring the search space for better solutions. Fig. 9 shows thevariation of oracle SPICE (@B) with the beam size. At really high beam widths, all decoding tech-niques achieve similar oracle accuracies. However, diverse decoding techniques like DBS achievethe same oracle at much lower beam widths. Hence, DBS not only produces sequence lists that aresignificantly different but also efficiently utilizes the beam budget to decode better solutions.14Under review as a conference paper at ICLR 2017Figure 6: Screen-shot of the interface used to perform human studies15Under review as a conference paper at ICLR 2017Figure 7: For images with low difficulty score, BS captions are preferred to DBS – as show in the first figure.However, we observe that DBS captions perform better when there is a mismatch between the statistics of thetesting and training sets. Interesting captions are colored in blue for readability.16Under review as a conference paper at ICLR 2017Figure 8: For images with a high difficulty score, captions produced by DBS are preferred to BS. Interestingcaptions are colored in blue for readability.17Under review as a conference paper at ICLR 2017(a) Oracle SPICE (@B) vs B (b) Oracle METEOR (@B) vs BFigure 9: As the number of beams increases, all decoding methods tend to achieve about the same oracleaccuracy. However, diverse decoding techniques like DBS utilize the beam budget efficiently achieving higheroracle accuracies at much lower beam budgets.18
ByiSEwxNe
HJV1zP5xg
ICLR.cc/2017/conference/-/paper363/official/review
{"title": "a relatively new problem, but proposed seems to be too simplistic", "rating": "6: Marginally above acceptance threshold", "review": "This paper considers the problem of decoding diverge solutions from neural sequence models. It basically adds an additional term to the log-likelihood of standard neural sequence models, and this additional term will encourage the solutions to be diverse. In addition to solve the inference, this paper uses a modified beam search.\n\nOn the plus side, there is not much work on producing diverse solutions in RNN/LSTM models. This paper represents one of the few works on this topic. And this paper is well-written and easy to follow.\n\nThe novel of this paper is relatively small. There has been a lot of prior work on producing diverse models in the area of probailistic graphical models. Most of them introduce an additional term in the objective function to encourage diversity. From that perspective, the solution proposed in this paper is not that different from previous work. Of course, one can argue that most previous work focues on probabilistic graphical models, while this paper focuses on RNN/LSTM models. But since RNN/LSTM can be simply interpreted as a probabilistic model, I would consider it a small novelty.\n\nThe diverse beam search seems to straightforward, i.e. it partitions the beam search space into groups, and does not consider the diversity within group (in order to reduce the search space). To me, this seems to be a simple trick. Note most previous work on diverse solutions in probabilistic graphical models usually involve developing some nontrivial algorithmic solutions, e.g. in order to achieve efficiency. In comparison, the proposed solution in this paper seems to be simplistic for a paper.\n\nThe experimental results how improvement over previous methods (Li & Jurafsky, 2015, 2016). But it is hard to say how rigorous the comparisons are, since they are based on the authors' own implementation of (Li & Jurasky, 2015, 2016).\n\n---------------\nupdate: given that the authors made the code available (I do hope the code will remain publicly available), this has alleviated some of my concerns about the rigor of the experiments. I will raise my rate to 6.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models
["Ashwin K Vijayakumar", "Michael Cogswell", "Ramprasaath R. Selvaraju", "Qing Sun", "Stefan Lee", "David Crandall", "Dhruv Batra"]
Neural sequence models are widely used to model time-series data. Equally ubiquitous is the usage of beam search (BS) as an approximate inference algorithm to decode output sequences from these models. BS explores the search space in a greedy left-right fashion retaining only the top B candidates. This tends to result in sequences that differ only slightly from each other. Producing lists of nearly identical sequences is not only computationally wasteful but also typically fails to capture the inherent ambiguity of complex AI tasks. To overcome this problem, we propose Diverse Beam Search (DBS), an alternative to BS that decodes a list of diverse outputs by optimizing a diversity-augmented objective. We observe that our method not only improved diversity but also finds better top 1 solutions by controlling for the exploration and exploitation of the search space. Moreover, these gains are achieved with minimal computational or memory overhead com- pared to beam search. To demonstrate the broad applicability of our method, we present results on image captioning, machine translation, conversation and visual question generation using both standard quantitative metrics and qualitative human studies. We find that our method consistently outperforms BS and previously proposed techniques for diverse decoding from neural sequence models.
["Deep learning", "Computer vision", "Natural language processing"]
https://openreview.net/forum?id=HJV1zP5xg
https://openreview.net/pdf?id=HJV1zP5xg
https://openreview.net/forum?id=HJV1zP5xg&noteId=ByiSEwxNe
Under review as a conference paper at ICLR 2017DIVERSE BEAM SEARCH :DECODING DIVERSE SOLUTIONS FROMNEURAL SEQUENCE MODELSAshwin K Vijayakumar1, Michael Cogswell1, Ramprasaath R. Selvaraju1, Qing Sun1Stefan Lee1, David Crandall2& Dhruv Batra1{ashwinkv,cogswell,ram21,sunqing,steflee}@vt.edudjcran@indiana.edu ,dbatra@vt.edu1Department of Electrical and Computer Engineering,Virginia Tech, Blacksburg, V A, USA2School of Informatics and ComputingIndiana University, Bloomington, IN, USAABSTRACTNeural sequence models are widely used to model time-series data. Equally ubiq-uitous is the usage of beam search (BS) as an approximate inference algorithm todecode output sequences from these models. BS explores the search space in agreedy left-right fashion retaining only the top Bcandidates. This tends to resultin sequences that differ only slightly from each other. Producing lists of nearlyidentical sequences is not only computationally wasteful but also typically failsto capture the inherent ambiguity of complex AI tasks. To overcome this prob-lem, we propose Diverse Beam Search (DBS), an alternative to BS that decodes alist of diverse outputs by optimizing a diversity-augmented objective. We observethat our method not only improved diversity but also finds better top 1 solutionsby controlling for the exploration and exploitation of the search space. Moreover,these gains are achieved with minimal computational or memory overhead com-pared to beam search. To demonstrate the broad applicability of our method, wepresent results on image captioning, machine translation, conversation and visualquestion generation using both standard quantitative metrics and qualitative hu-man studies. We find that our method consistently outperforms BS and previouslyproposed techniques for diverse decoding from neural sequence models.1 I NTRODUCTIONIn the last few years, Recurrent Neural Networks (RNNs), Long Short-Term Memory networks(LSTMs) or more generally, neural sequence models have become the standard choice for modelingtime-series data for a wide range of applications including speech recognition (Graves et al., 2013),machine translation (Bahdanau et al., 2014), conversation modeling (Vinyals & Le, 2015), imageand video captioning (Vinyals et al., 2015; Venugopalan et al., 2015), and visual question answering(Antol et al., 2015). RNN based sequence generation architectures model the conditional probability,Pr(yjx)of an output sequence y= (y1;:::;yT)given an input x(possibly also a sequence); wherethe output tokens ytare from a finite vocabulary, V.Inference in RNNs. Maximum a Posteriori (MAP) inference for RNNs is the task of finding themost likely output sequence given the input. Since the number of possible sequences grows asjVjT, exact inference is NP-hard – so, approximate inference algorithms like beam search (BS) arecommonly employed. BS is a heuristic graph-search algorithm that maintains the Btop-scoringpartial sequences expanded in a greedy left-to-right fashion. Fig. 1 shows a sample BS search tree.Lack of Diversity in BS. 
Despite the widespread usage of BS, it has long been understood thatsolutions decoded by BS are generic and lacking in diversity (Finkel et al., 2006; Gimpel et al.,1Under review as a conference paper at ICLR 2017atrainsteamblacklocomotiveistravelingonenginetraintraincomingdownathetrainenginedowntracktraintrackstravelingisthewithneartrackdownthroughtracksawithtraintracksainatracksforestlushaantrainsteamantheisengineoldtrainaancomingtraintrainsteamtrainblacktravelingisenginelocomotivetrainanddownthroughtrainisiswhitetrainaistravelingcomingontracksforestdownthroughtheaBeam SearchDiverse Beam SearchA steam engine train travelling down train tracks. A steam engine train travelling down tracks. A steam engine train travelling through a forest. A steam engine train travelling through a lush green forest. A steam engine train travelling through a lush green countrysideA train on a train track with a sky background. A steam engine travelling down train tracks.A steam engine train travelling through a forest. An old steam engine train travelling down train tracks. An old steam engine train travelling through a forest. A black train is on the tracks in a wooded area. A black train is on the tracks in a rural area. Single engine train rolling down the tracks. A steam locomotive is blowing steam.A locomotive drives along the tracks amongst trees and bushes.An old fashion train with steam coming out of its pipe. A black and red train moving down a train track.An engine is coming down the train track.Ground T ruth CaptionsFigure 1: Comparing image captioning outputs decoded by BS (top) and our method, Diverse Beam Search(middle) – we notice that BS captions are near-duplicates with similar shared paths in the search tree andminor variations in the end. In contrast, DBS captions are significantly diverse and similar to the variability inhuman-generated ground truth captions (bottom).2013; Li et al., 2015; Li & Jurafsky, 2016). Comparing the human (bottom) and BS (top) generatedcaptions shown in Fig. 1 demonstrates this deficiency. While this behavior of BS is disadvantageousfor many reasons, we highlight the three most crucial ones here:i) The production of near-identical beams make BS a computationally wasteful algorithm, withessentially the same computation being repeated for no significant gain in performance.ii) Due to loss-evaluation mismatch (i.e. improvements in posterior-probabilities not necessarilycorresponding to improvements in task-specific metrics), it is common practice to deliberatelythrottle BS to become a poorer optimization algorithm by using reduced beam widths (Vinyalset al., 2015; Karpathy & Fei-Fei, 2015; Ferraro et al., 2016). This treatment of an optimizationalgorithm as a hyperparameter is not only intellectually dissatisfying but also has a significantpractical side-effect – it leads to the decoding of largely bland, generic, and “safe” outputs, e.g.always saying “I don’t know” in conversation models (Kannan et al., 2016).iii) Most importantly, lack of diversity in the decoded solutions is fundamentally crippling in AIproblems with significant ambiguity –e.g. there are multiple ways of describing an image orresponding in a conversation that are “correct” and it is important to capture this ambiguity byfinding several diverse plausible hypotheses.Overview and Contributions. To address these shortcomings, we propose Diverse Beam Search(DBS) – a general framework to decode a set of diverse sequences that can be used as an alternativeto BS. 
At a high level, DBS decodes diverse lists by dividing the given beam budget into groups and enforcing diversity between groups of beams. Drawing from recent work in the probabilistic graphical models literature on Diverse M-Best (DivMBest) MAP inference (Batra et al., 2012; Prasad et al., 2014; Kirillov et al., 2015), we optimize an objective that consists of two terms – the sequence likelihood under the model and a dissimilarity term that encourages beams across groups to differ. This diversity-augmented model score is optimized in a doubly greedy manner – greedily optimizing along both time (like BS) and groups (like DivMBest).

Our primary technical contribution is Diverse Beam Search, a doubly greedy approximate inference algorithm to decode diverse sequences from neural sequence models. We report results on image captioning, machine translation, conversations and visual question generation to demonstrate the broad applicability of DBS. Results show that DBS produces consistent improvements on both task-specific oracle and other diversity-related metrics while maintaining run-time and memory requirements similar to BS. We also evaluate human preferences between image captions generated by BS or DBS. Further experiments show that DBS is robust over a wide range of its parameter values and is capable of encoding various notions of diversity through different forms of the diversity term. Overall, our algorithm is simple to implement and consistently outperforms BS in a wide range of domains without sacrificing efficiency. Our implementation is publicly available at https://github.com/ashwinkalyan/dbs . Additionally, we provide an interactive demonstration of DBS for image captioning at http://dbs.cloudcv.org .

2 PRELIMINARIES: DECODING RNNS WITH BEAM SEARCH

We begin with a refresher on BS, before describing our generalization, Diverse Beam Search. For notational convenience, let $[n]$ denote the set of natural numbers from $1$ to $n$ and let $v_{[n]} = [v_1, \ldots, v_n]^\top$ index the first $n$ elements of a vector $v \in \mathbb{R}^m$.

The Decoding Problem. RNNs are trained to estimate the likelihood of sequences of tokens from a finite dictionary $\mathcal{V}$ given an input $x$. The RNN updates its internal state and estimates the conditional probability distribution over the next output given the input and all previous output tokens. We denote the logarithm of this conditional probability distribution over all tokens at time $t$ as $\theta(y_t) = \log \Pr(y_t \mid y_{t-1}, \ldots, y_1, x)$. To avoid notational clutter, we index $\theta(\cdot)$ with a single variable $y_t$, but it should be clear that it depends on all previous outputs, $y_{[t-1]}$. We write the log-probability of a partial solution (i.e. the sum of log-probabilities of all tokens decoded so far) as $\Theta(y_{[t]}) = \sum_{\tau \in [t]} \theta(y_\tau)$. The decoding problem is then the task of finding a sequence $y$ that maximizes $\Theta(y)$. As each output is conditioned on all the previous outputs, decoding the optimal length-$T$ sequence in this setting can be viewed as MAP inference on a $T$-order Markov chain with nodes corresponding to output tokens at each time step. Not only does the size of the largest factor in such a graph grow as $|\mathcal{V}|^T$, but computing these factors also requires repetitively evaluating the sequence model. Thus, approximate algorithms are employed and the most prevalent method is beam search (BS). Beam search is a heuristic search algorithm which stores the top $B$ highest scoring partial candidates at each time step, where $B$ is known as the beam width.
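As a point of reference before the formal update below, here is a minimal Python sketch of one beam-search extension step. This is a sketch only, not the authors' released code; `log_prob(prefix, token)` is an assumed interface corresponding to the per-token score $\theta$ above.

```python
def beam_search_step(beams, vocab, log_prob, B):
    """One step of standard beam search.

    beams: list of (prefix_tokens, score) pairs currently held (at most B of them)
    vocab: iterable of candidate next tokens
    log_prob: assumed callable returning log Pr(token | prefix), i.e. theta
    Returns the B highest-scoring single-token extensions.
    """
    candidates = []
    for prefix, score in beams:
        for token in vocab:
            candidates.append((prefix + [token], score + log_prob(prefix, token)))
    # sort all B*|V| candidate extensions by joint log-probability and keep the top B
    candidates.sort(key=lambda c: c[1], reverse=True)
    return candidates[:B]
```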
Let us denote the set of $B$ solutions held by BS at the start of time $t$ as $Y_{[t-1]} = \{y_{1,[t-1]}, \ldots, y_{B,[t-1]}\}$. At each time step, BS considers all possible single token extensions of these beams given by the set $\mathcal{Y}_t = Y_{[t-1]} \times \mathcal{V}$ and retains the $B$ highest scoring extensions. More formally, at each step the beams are updated as

$$Y_{[t]} = \operatorname*{argmax}_{y_{1,[t]}, \ldots, y_{B,[t]} \in \mathcal{Y}_t} \; \sum_{b \in [B]} \Theta(y_{b,[t]}) \quad \text{s.t.} \quad y_{i,[t]} \neq y_{j,[t]} \;\; \forall i \neq j. \qquad (1)$$

The above objective can be trivially maximized by sorting all $B|\mathcal{V}|$ members of $\mathcal{Y}_t$ by their log-probabilities and selecting the top $B$. This process is repeated until time $T$ and the most likely sequence is selected by ranking the $B$ complete beams according to their log-probabilities. While this method allows for multiple sequences to be explored in parallel, most completions tend to stem from a single highly valued beam – resulting in outputs that are often only minor perturbations of a single sequence (and typically only towards the end of the sequences).

3 DIVERSE BEAM SEARCH: FORMULATION AND ALGORITHM

To overcome this, we augment the objective in Eq. 1 with a dissimilarity term $\Delta(Y_{[t]})$ that measures the diversity between candidate sequences, assigning a penalty $\Delta(Y_{[t]})[c]$ to each possible sequence completion $c \in \mathcal{V}$. Jointly optimizing this augmented objective for all $B$ candidates at each time step is intractable as the number of possible solutions grows with $|\mathcal{V}|^B$ (easily $10^{60}$ for typical language modeling settings). To avoid this, we opt for a greedy procedure that divides the beam budget $B$ into $G$ groups and promotes diversity between these groups. The approximation is doubly greedy – across both time and groups – so $\Delta(Y_{[t]})$ is constant with respect to other groups and we can sequentially optimize each group using regular BS. We now explain the specifics of our approach.

Diverse Beam Search. As joint optimization is intractable, we form $G$ smaller groups of beams and optimize them sequentially. Consider a partition of the set of beams $Y_{[t]}$ into $G$ smaller sets $Y^g_{[t]}, g \in [G]$ of $B' = B/G$ beams each (we pick $G$ to divide $B$). In the example shown in Fig. 2, $B = 6$ beams are divided into $G = 3$ differently colored groups containing $B' = 2$ beams each. Considering diversity only between groups reduces the search space at each time step; however, inference remains intractable. To enforce diversity efficiently, we consider a greedy strategy that steps each group forward in time sequentially while considering the others fixed. Each group can then evaluate the diversity term with respect to the fixed extensions of previous groups, returning the search space to $B'|\mathcal{V}|$. In the snapshot shown in Fig. 2, the third group is being stepped forward at time step $t = 4$ and the previous groups have already been completed. With this staggered beam-front, the diversity term of the third group can be computed using these completions. Here we use hamming diversity, which adds a diversity penalty of $-1$ for each appearance of a possible extension word at the same time step in a previous group – 'birds', 'the', and 'an' in the example – and $0$ to all other possible completions. We discuss other forms for the diversity function in Section 5.1.

[Figure 2 (word-lattice illustration omitted): Diverse beam search operates left-to-right through time and top to bottom through groups. Diversity between groups is combined with joint log-probabilities, allowing continuations to be found efficiently. The resulting outputs are more diverse than for standard approaches.]

As we optimize each group with the previous groups fixed, extending group $g$ at time $t$ amounts to a standard BS using dissimilarity-augmented log-probabilities and can be written as:

$$Y^g_{[t]} = \operatorname*{argmax}_{y^g_{1,[t]}, \ldots, y^g_{B',[t]} \in \mathcal{Y}^g_t} \; \sum_{b \in [B']} \Theta(y^g_{b,[t]}) + \lambda\, \Delta\!\Big(\bigcup_{h=1}^{g-1} Y^h_{[t]}\Big)\big[y^g_{b,t}\big], \qquad (2)$$
$$\text{s.t.} \quad \lambda \geq 0, \quad y^g_{i,[t]} \neq y^g_{j,[t]} \;\; \forall i \neq j,$$

where $\lambda$ is a scalar controlling the strength of the diversity term. The full procedure to obtain diverse sequences using our method, Diverse Beam Search (DBS), is presented in Algorithm 1. It consists of two main steps for each group at each time step –
1) augmenting the log-probabilities of each possible extension with the diversity term computed from previously advanced groups (Algorithm 1, Line 5) and,
2) running one step of a smaller BS with $B'$ beams using the augmented log-probabilities to extend the current group (Algorithm 1, Line 6).
Note that the first group ($g = 1$) is not 'conditioned' on other groups during optimization, so our method is guaranteed to perform at least as well as a beam search of size $B'$.

Algorithm 1: Diverse Beam Search
1: Perform a diverse beam search with $G$ groups using a beam width of $B$
2: for $t = 1, \ldots, T$ do
3:   // perform one step of beam search for first group without diversity
     $Y^1_{[t]} \leftarrow \operatorname{argmax}_{(y^1_{1,[t]}, \ldots, y^1_{B',[t]})} \sum_{b \in [B']} \Theta(y^1_{b,[t]})$
4:   for $g = 2, \ldots, G$ do
5:     // augment log-probabilities with diversity penalty
       $\Theta(y^g_{b,[t]}) \leftarrow \Theta(y^g_{b,[t]}) + \lambda\, \Delta(\bigcup_{h=1}^{g-1} Y^h_{[t]})[y^g_{b,t}]$ for $b \in [B']$, $y^g_{b,[t]} \in \mathcal{Y}^g_t$ and $\lambda > 0$
6:     // perform one step of beam search for the group
       $Y^g_{[t]} \leftarrow \operatorname{argmax}_{y^g_{1,[t]}, \ldots, y^g_{B',[t]}} \sum_{b \in [B']} \Theta(y^g_{b,[t]})$ s.t. $y_{i,[t]} \neq y_{j,[t]} \;\forall i \neq j$
7: Return set of $B$ solutions, $Y_{[T]} = \bigcup_{g=1}^{G} Y^g_{[T]}$

4 RELATED WORK

Diverse M-Best Lists. The task of generating diverse structured outputs from probabilistic models has been studied extensively (Park & Ramanan, 2011; Batra et al., 2012; Kirillov et al., 2015; Prasad et al., 2014). Batra et al. (2012) formalized this task for Markov Random Fields as the DivMBest problem and presented a greedy approach which solves for outputs iteratively, conditioning on previous solutions to induce diversity. Kirillov et al. (2015) show how these solutions can be found jointly (non-greedily) for certain kinds of energy functions. The techniques developed by Kirillov are not directly applicable to decoding from RNNs, which do not satisfy the assumptions made.

Most related to our proposed approach is the work of Gimpel et al. (2013), who applied DivMBest to machine translation using beam search as a black-box inference algorithm. Specifically, in this approach, DivMBest knows nothing about the inner workings of BS and simply makes $B$ sequential calls to BS to generate $B$ diverse solutions. This approach is extremely wasteful because BS is called $B$ times, run from scratch every time, and even though each call to BS produces $B$ solutions, only one solution is kept by DivMBest. In contrast, DBS avoids these shortcomings by integrating diversity within BS such that no beams are discarded. By running multiple beam searches in parallel and at staggered time offsets, we obtain large time savings making our method comparable to a single run of classical BS.
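To make the doubly greedy procedure concrete, the following is a minimal Python sketch of one time step of Algorithm 1 with hamming diversity. It is a sketch only, reusing the hypothetical `log_prob` interface from the earlier snippet; it is not the authors' released Torch implementation.

```python
from collections import Counter

def hamming_penalty(prev_group_tokens, token):
    # Delta: -1 for every time `token` was chosen at this step by an earlier group
    return -prev_group_tokens[token]

def diverse_beam_search_step(groups, vocab, log_prob, B_prime, lam):
    """One time step of DBS over G groups of B' beams each (Algorithm 1, inner loop).

    groups: list of G lists of (prefix_tokens, score) pairs
    lam:    diversity strength (lambda)
    """
    used_tokens = Counter()      # tokens selected at this step by earlier groups
    new_groups = []
    for g, beams in enumerate(groups):
        candidates = []
        for prefix, score in beams:
            for token in vocab:
                aug = log_prob(prefix, token)
                if g > 0:        # the first group runs a plain beam-search step
                    aug += lam * hamming_penalty(used_tokens, token)
                candidates.append((prefix + [token], score + aug))
        candidates.sort(key=lambda c: c[1], reverse=True)
        kept = candidates[:B_prime]
        new_groups.append(kept)
        used_tokens.update(p[-1] for p, _ in kept)   # record tokens chosen by this group
    return new_groups
```

In this simplified form, group 1 advances as ordinary beam search and each later group re-scores its candidate extensions against the tokens already chosen by earlier groups at the same time step, which is exactly the staggered conditioning that keeps the cost close to a single run of BS.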
One potential disadvantage of our method w.r.t. Gimpel et al. (2013) isthat sentence-level diversity metrics cannot be incorporated in DBS since no group is complete whendiversity is encouraged. However, as observed empirically by us and Li et al. (2015), initial wordstend to disproportionally impact the diversity of the resultant sequences – suggesting that later wordsmay not be important for diverse inference.Diverse Decoding for RNNs. Efforts have been made by Li et al. (2015) and Li & Jurafsky (2016)to produce diverse decodings from recurrent models for conversation modeling and machine trans-lation. Both of these works propose new heuristics for creating diverse M-Best lists and employmutual information to re-rank lists of sequences. The latter achieves a goal separate from ours,which is simply to re-rank diverse lists.Li & Jurafsky (2016) proposes a BS diversification heuristic that discourages beams from sharingcommon roots, implicitly resulting in diverse lists. Introducing diversity through a modified objec-tive (as in DBS) rather than via a procedural heuristic provides easier generalization to incorporatedifferent notions of diversity and control the exploration-exploitation trade-off as detailed in Section5.1. Furthermore, we find that DBS outperforms the method of Li & Jurafsky (2016).Li et al. (2015) introduced a novel decoding objective that maximizes mutual information betweeninputs and predicted outputs to penalize generic sequences. This operates on a principle orthogo-nal and complementary to DBS and Li & Jurafsky (2016). It works by penalizing utterances thatare generally more frequent (diversity independent of input) rather than penalizing utterances thatare similar to other utterances produced for the same input (diversity conditioned on input). Fur-thermore, the input-independent approach requires training a new language model for the targetlanguage while DBS just requires a diversity function . Combination of these complementarytechniques is left as interesting future work.In other recent work, Wu et al. (2016) modify the beam search objective by introducing length-normalization to favor longer sequences and a coverage penalty that favors sequences that accountfor the complete input sequence. While the coverage term does not generalize to all neural sequencemodels, the length-normalization term can be implemented by modifying the joint- log-probabilityof each sequence. Although the goal of this method is not to produce diverse lists and hence notdirectly comparable, it is a complementary technique that can be used in conjunction with our diversedecoding method.5 E XPERIMENTSIn this section, we evaluate our approach on image captioning, machine translation, conversation andvisual question generation tasks to demonstrate both its effectiveness against baselines and its gen-eral applicability to any inference currently supported by beam search. We also analyze the effectsof DBS parameters, explore human preferences for diversity, and discuss diversity’s importance inexplaining complex images. We first explain the baselines and evaluations used in this paper.Baselines & Metrics. Apart from classical beam search, we compare DBS with the diverse decodingmethod proposed in Li & Jurafsky (2016). We also compare against two other complementarydecoding techniques proposed in Li et al. (2015) and Wu et al. (2016). Note that these two techniquesare not directly comparable with DBS since the goal is not to produce diverse lists. 
We now providea brief description of the comparisons mentioned:- Li & Jurafsky (2016): modify BS by introducing an intra-sibling rank. For each partial solution,the set ofjVjbeam extensions are sorted and assigned intra-sibling ranks k2[jVj]in order5Under review as a conference paper at ICLR 2017of decreasing log probabilities, t(yt). The log probability of an extension is then reduced inproportion to its rank, and continuations are re-sorted under these modified log probabilities toselect the top B‘diverse’ beam extensions.- Li et al. (2015): train an additional unconditioned target sequence model U(y)and perform BSdecoding on an augmented objective P(yjx)U(y), penalizing input-independent decodings.- Wu et al. (2016) modify the beam-search objective by introducing length-normalization that fa-vors longer sequences. The joint log-probability of completed sequences is divided by a factor,(5 +jyj)=(5 + 1), where2[0;1].We compare to our own implementations of these methods as none are publicly available. Both Li& Jurafsky (2016) and Li et al. (2015) develop and use re-rankers to pick a single solution fromthe generated lists. Since we are interested in evaluating the quality of the generated lists and inisolating the gains due to diverse decoding, we do not implement any re-rankers, simply sorting bylog-probability.We evaluate the performance of the generated lists using the following two metrics:-Oracle Accuracy : Oracle or top kaccuracy w.r.t. some task-specific metric, such as BLEU (Pap-ineni et al., 2002) or SPICE (Anderson et al., 2016), is the maximum value of the metric achievedover a list of kpotential solutions. Oracle accuracy is an upper bound on the performance of anyre-ranking strategy and thus measures the maximum potential of a set of outputs.-Diversity Statistics : We count the number of distinct n-grams present in the list of generatedoutputs. Similar to Li et al. (2015), we divide these counts by the total number of words generatedto bias against long sentences.Simultaneous improvements in both metrics indicate that output sequences have increased diversitywithout sacrificing fluency and correctness with respect to target tasks.5.1 S ENSITIVITY ANALYSIS AND EFFECT OF DIVERSITY FUNCTIONSHere we discuss the impact of the number of groups, strength of diversity , and various forms ofdiversity for language models. Note that the parameters of DBS (and other baselines) were tunedon a held-out validation set for each experiment. The supplement provides further discussion andexperimental details.Number of Groups ( G).SettingG=Ballows for the maximum exploration of the search space,while setting G=1reduces DBS to BS, resulting in increased exploitation of the search-space aroundthe 1-best decoding. Empirically, we find that maximum exploration correlates with improved oracleaccuracy and hence use G=Bto report results unless mentioned otherwise. See the supplement fora comparison and more details.Diversity Strength ( ).The diversity strength specifies the trade-off between the model score anddiversity terms. As expected, we find that a higher value of produces a more diverse list; however,very large values of can overpower model score and result in grammatically incorrect outputs. Wesetvia grid search over a range of values to maximize oracle accuracies achieved on the validationset. 
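A grid search of this kind is simple to script. The sketch below assumes hypothetical `dbs_decode` and `oracle_metric` helpers that wrap the decoder and a task metric such as SPICE; it is illustrative only and not part of the released code.

```python
def tune_diversity_strength(val_examples, dbs_decode, oracle_metric,
                            lambdas=(0.1, 0.2, 0.4, 0.6, 0.8, 1.0)):
    """Pick the diversity strength maximizing mean oracle accuracy on a validation set.

    dbs_decode(example, lam)          -> list of candidate sequences (assumed interface)
    oracle_metric(example, candidates) -> best task-metric value over the candidates
    """
    best_lam, best_score = None, float("-inf")
    for lam in lambdas:
        score = sum(oracle_metric(ex, dbs_decode(ex, lam)) for ex in val_examples)
        score /= len(val_examples)
        if score > best_score:
            best_lam, best_score = lam, score
    return best_lam, best_score
```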
We find a wide range of values (0.2 to 0.8) work well for most tasks and datasets.Choice of Diversity Function ( ).In Section 3, we defined ()as a function over a set of partialsolutions that outputs a vector of dissimilarity scores for all possible beam completions. Assumingthat each of the previous groups influences the completion of the current group independently, wecan simplify (Sg1h=1Yh[t])as the sum of each group’s contributions asPg1h=1(Yh[t]). In Section3, we illustrated a simple hamming diversity of this form that penalizes selection of tokens propor-tionally to the number of time it was used in previous groups. However, this factorized diversityterm can take various forms in our model – with hamming diversity being the simplest. For lan-guage models, we study the effect of using cumulative (i.e. considering all past time steps), n-gramand neural embedding based diversity functions. Each of these forms encode differing notions ofdiversity and result in DBS outperforming BS. We find simple hamming distance to be effective andreport results based on this diversity measure unless otherwise specified. More details about theseforms of the diversity term are provided in the supplementary.6Under review as a conference paper at ICLR 20175.2 I MAGE CAPTIONINGDataset and Models. We evaluate on two datasets – COCO (Lin et al., 2014) and PASCAL-50S(Vedantam et al., 2015). We use the public splits as in Karpathy & Fei-Fei (2015) for COCO.PASCAL-50S is used only for testing (with 200 held out images used to tune hyperparameters). Wetrain a captioning model (Vinyals et al., 2015) using the neuraltalk21code repository.Results. Table 1 shows Oracle (top k) SPICE for different values of k. DBS consistently outper-forms BS and Li & Jurafsky (2016) on both datasets. We observe that gains on PASCAL-50S aremore pronounced (7.14% and 4.65% SPICE@20 improvements over BS and Li & Jurafsky (2016))than COCO. This suggests diverse predictions are especially advantageous when there is a mismatchbetween training and testing sets, implying DBS may be better suited for real-world applications.Table 1 also shows the number of distinct n-grams produced by different techniques. Our methodproduces significantly more distinct n-grams (almost 300% increase in the number of 4-grams pro-duced) as compared to BS. We also note that our method tends to produce slightly longer captionscompared on average. Moreover, on the PASCAL-50S test split we observe that DBS finds morelikely top-1 solutions on average – DBS obtains an average maximum logprobability of -6.53 op-posed to -6.91 found by BS of the same beam width. This empirical evidence suggests that usingDBS as a replacement to BS may lead to lower inference approximation error.Table 1: Oracle accuracy and distinct n-grams on COCO and PASCAL-50S datasets for image captioning atB= 20 . While we report SPICE, we observe similar trends in other metrics (reported in supplement).Dataset Method Oracle Accuracy (SPICE) Diversity Statistics@1 @5 @10 @20 distinct-1 distinct-2 distinct-3 distinct-4Beam Search 4.933 7.046 7.949 8.747 0.12 0.57 1.35 2.50Li & Jurafsky (2016) 5.083 7.248 8.096 8.917 0.15 0.97 2.43 5.31PASCAL-50S DBS 5.357 7.357 8.269 9.293 0.18 1.26 3.67 7.33Wu et al. (2016) 5.301 7.322 8.236 8.832 0.16 1.10 3.16 6.45Li et al. (2015) 5.129 7.175 8.168 8.560 0.13 1.15 3.58 8.42Beam Search 16.278 22.962 25.145 27.343 0.40 1.51 3.25 5.67Li & Jurafsky (2016) 16.351 22.715 25.234 27.591 0.54 2.40 5.69 8.94COCO DBS 16.783 23.081 26.088 28.096 0.56 2.96 7.38 13.44Wu et al. 
(2016) 16.642 22.643 25.437 27.783 0.54 2.42 6.01 7.08Li et al. (2015) 16.749 23.271 26.104 27.946 0.42 1.37 3.46 6.10Human Studies. To evaluate human preference between captions generated by DBS and BS, weperform a human study via Amazon Mechanical Turk using all 1000 images of PASCAL-50S. Foreach image, both DBS and standard BS captions are shown to 5 different users. They are then asked–“Which of the two robots understands the image better?” In this forced-choice test, DBS captionswere preferred over BS 60% of the time by human annotators.Is diversity always needed? While these results show that diverse outputs are important for systemsthat interact with users, is diversity always beneficial? While images with many objects ( e.g., a parkor a living room) can be described in multiple ways, the same is not true when there are few objects(e.g., a close up of a cat or a selfie). This notion is studied by Ionescu et al. (2016), which definesa “difficulty score”: the human response time for solving a visual search task. On the PASCAL-50S dataset, we observe a positive correlation ( = 0:73) between difficulty scores and humanspreferring DBS to BS. Moreover, while DBS is generally preferred by humans for ‘difficult’ images,both are about equally preferred on ‘easier’ images. Details are provided in the supplement.5.3 M ACHINE TRANSLATIONWe use the WMT’14 dataset containing 4.5M sentences to train our machine translation models.We train stacking LSTM models as detailed in Luong et al. (2015), consisting of 4 layers and 1024-dimensional hidden states. While decoding sentences, we employ the same strategy to replace UNKtokens. We train our models using the publicly available seq2seq-attn2code repository. We re-port results on news-test-2013 andnews-test-2014 and use the news-test-2012 to tune the parametersof DBS. We use sentence level BLEU scores to compute oracle metrics and report distinct n-grams1https://github.com/karpathy/neuraltalk22https://github.com/harvardnlp/seq2seq-attn7Under review as a conference paper at ICLR 2017similar to image captioning. Results are shown in Table 2 and we again find that DBS consistentlyoutperforms all baselines.Table 2: Quantitative results on English-German translation on the newstest-2013 and newstest-2014 datasetscombined (at B= 20 ).Method Oracle Accuracy (BLEU-4) Diversity Statistics@1 @5 @10 @20 distinct-1 distinct-2 distinct-3 distinct-4Beam Search 20.5 22.4 23.8 24.2 0.04 0.75 2.10 3.23Li & Jurafsky (2016) 20.7 22.6 24.0 24.3 0.04 0.81 2.92 4.61DBS 20.8 22.9 24.4 24.8 0.06 0.95 3.67 5.54Wu et al. (2016) 20.6 22.6 24.3 24.6 0.05 0.88 2.83 4.50Li et al. (2015) 20.7 23.1 24.4 24.6 0.04 0.86 2.76 4.315.4 D IALOG GENERATIONDialog generation is a task that is inherently diverse as there are multiple valid ways to respond to astatement. We train a seq2seq model consisting of LSTMs as in Vinyals & Le (2015) on the CornellMovie Dialogs Corpus (Danescu-Niculescu-Mizil & Lee, 2011) using the neuralconvo3reposi-tory. The training dataset consists of 222,182 conversational exchanges between movie characters.Since automatic evaluation of dialog generation responses is an open research problem with exist-ing metrics being poorly correlated with human judgement (Liu et al., 2016), we show qualitativeresults to demonstrate the effectiveness of DBS. Table 3 compares BS and DBS at B=3.Table 3: Qualitative examples of dialog generation comparing best-first search, BS and DBSPrompt Beam Search Diverse Beam SearchWhy do you want to be the president?I don’t know. 
I don’t know.I don’t know... Because you know!I don’t know, but I am the best Because, I have another life.It’s such a beautiful day!I know! I know.It’s true. Well...Yeah. That’s not true, honey.What’s your name?Jacob. Jacob.Berger, my man. Berger, darlingBerger, Thomas. My mother used to hum that to me.5.5 V ISUAL QUESTION GENERATIONWe also report results on Visual Question Generation (VQG) (Mostafazadeh et al., 2016), wherea model is trained to produce questions about an image . Generating visually focused questions isinteresting because it requires reasoning about multiple problems that are central to vision – e.g.,object attributes, relationships between objects, and natural language. Furthermore, many questionscould make sense for one image, so it is important that lists of generated questions be diverse.We use the VQA dataset (Antol et al., 2015) to train a model similar to image captioning architec-tures. Instead of captions, the training set now consists of 3 questions per image. Similar to previousresults, using beam search to sample outputs results in similarly worded questions (see Fig. 3) andDBS brings out new details captured by the model. Counting the number of types of questions gen-erated (as defined by Antol et al. (2015)) allows us to measure this diversity. We observe that thenumber of question types generated per image increases from 2:3for BS to 3:7for DBS (atB= 6).6 C ONCLUSIONBeam search is widely a used approximate inference algorithm for decoding sequences from neuralsequence models; however, it suffers from a lack of diversity. Producing multiple highly similarand generic outputs is not only wasteful in terms of computation but also detrimental for tasks with3https://github.com/macournoyer/neuralconvo8Under review as a conference paper at ICLR 2017Input Image Beam Search Diverse Beam SearchWhat sport is this? What color is the man’s shirt?What sport is being played? What is the man holding?What color is the man’s shirt? What is the man wearing on his head?What color is the ball? Is the man wearing a helmetWhat is the man wearing? What is the man in the white shirt doing?What color is the man’s shorts? Is the man in the background wearing a helmet?How many zebras are there? How many zebras are there?How many zebras are in the photo? How many zebras are in the photo?How many zebras are in the picture? What is the zebra doing?How many animals are there? What color is the grass?How many zebras are shown? Is the zebra eating?What is the zebra doing? Is the zebra in the wild?Figure 3: Qualitative results on Visual Question Generation. DBS generates questions that are non-generic andbelong to different question types.inherent ambiguity like many involving language. In this work, we modify Beam Search with adiversity-augmented sequence decoding objective to produce Diverse Beam Search . We develop a‘doubly greedy’ approximate algorithm to minimize this objective and produce diverse sequencedecodings. Our method consistently outperforms beam search and other baselines across all ourexperiments without extra computation ortask-specific overhead . DBS is task-agnostic and can beapplied to any case where BS is used, which we demonstrate in multiple domains. Our implementa-tion available at https://github.com/ashwinkalyan/dbs .REFERENCESPeter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. Spice: Semantic proposi-tional image caption evaluation. In Proceedings of European Conference on Computer Vision(ECCV) , 2016. 
6Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zit-nick, and Devi Parikh. VQA: Visual question answering. In Proceedings of IEEE Conference onComputer Vision and Pattern Recognition (CVPR) , pp. 2425–2433, 2015. 1, 8Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointlylearning to align and translate. Proceedings of the International Conference on Learning Repre-sentations (ICLR) , 2014. 1Dhruv Batra, Payman Yadollahpour, Abner Guzman-Rivera, and Gregory Shakhnarovich. DiverseM-Best Solutions in Markov Random Fields. In Proceedings of European Conference on Com-puter Vision (ECCV) , 2012. 2, 4Cristian Danescu-Niculescu-Mizil and Lillian Lee. Chameleons in imagined conversations: A newapproach to understanding coordination of linguistic style in dialogs. In Proceedings of the Work-shop on Cognitive Modeling and Computational Linguistics, ACL 2011 , 2011. 8Francis Ferraro, Ishan Mostafazadeh, Nasrinand Misra, Aishwarya Agrawal, Jacob Devlin, RossGirshick, Xiadong He, Pushmeet Kohli, Dhruv Batra, and C Lawrence Zitnick. Visual story-telling. Proceedings of the Conference of the North American Chapter of the Association forComputational Linguistics – Human Language Technologies (NAACL HLT) , 2016. 2Jenny Rose Finkel, Christopher D Manning, and Andrew Y Ng. Solving the problem of cascadingerrors: Approximate bayesian inference for linguistic annotation pipelines. In Proceedings ofthe Conference on Empirical Methods in Natural Language Processing (EMNLP) , pp. 618–626,2006. 1K. Gimpel, D. Batra, C. Dyer, and G. Shakhnarovich. A systematic exploration of diversity in ma-chine translation. In Proceedings of the Conference on Empirical Methods in Natural LanguageProcessing (EMNLP) , 2013. 1, 5, 12Alex Graves, Abdel-rahman Mohamed, and Geoffrey E. Hinton. Speech recognition with deeprecurrent neural networks. abs/1303.5778, 2013. 19Under review as a conference paper at ICLR 2017Radu Tudor Ionescu, Bogdan Alexe, Marius Leordeanu, Marius Popescu, Dim Papadopoulos, andVittorio Ferrari. How hard can it be? Estimating the difficulty of visual search in an image. InProceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) , 2016. 7Anjuli Kannan, Karol Kurach, Sujith Ravi, Tobias Kaufmann, Andrew Tomkins, Balint Miklos,Greg Corrado, László Lukács, Marina Ganea, Peter Young, et al. Smart reply: Automated reep-onse suggestion for email. In Proceedings of the ACM SIGKDD Conference on Knowledge Dis-covery and Data Mining (KDD) , 2016. 2Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descrip-tions. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) ,2015. 2, 7Alexander Kirillov, Bogdan Savchynskyy, Dmitrij Schlesinger, Dmitry Vetrov, and Carsten Rother.Inferring m-best diverse labelings in a single one. In Proceedings of IEEE Conference on Com-puter Vision and Pattern Recognition (CVPR) , 2015. 2, 4Jiwei Li and Dan Jurafsky. Mutual information and diverse decoding improve neural machine trans-lation. arXiv preprint arXiv:1601.00372 , 2016. 2, 5, 6, 7, 8, 13, 14Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. A diversity-promoting objec-tive function for neural conversation models. Proceedings of the Conference of the North Amer-ican Chapter of the Association for Computational Linguistics – Human Language Technologies(NAACL HLT) , 2015. 
2, 5, 6, 7, 8, 13, 14Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, PiotrDollar, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context, 2014. 7Chia-Wei Liu, Ryan Lowe, Iulian Vlad Serban, Michael Noseworthy, Laurent Charlin, and JoellePineau. How NOT to evaluate your dialogue system: An empirical study of unsupervised evalua-tion metrics for dialogue response generation. 2016. URL http://arxiv.org/abs/1603.08023 . 8Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025 , 2015. 7Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed repre-sentations of words and phrases and their compositionality. In Advances in Neural InformationProcessing Systems (NIPS) , 2013. 12Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Margaret Mitchell, Xiaodong He, and Lucy Van-derwende. Generating natural questions about an image. Proceedings of the Annual Meeting onAssociation for Computational Linguistics (ACL) , 2016. 8Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automaticevaluation of machine translation. In Proceedings of the Annual Meeting on Association forComputational Linguistics (ACL) , 2002. 6Dennis Park and Deva Ramanan. N-best maximal decoders for part models. In Proceedings of IEEEInternational Conference on Computer Vision (ICCV) , 2011. 4Adarsh Prasad, Stefanie Jegelka, and Dhruv Batra. Submodular meets structured: Finding diversesubsets in exponentially-large structured item sets. In Advances in Neural Information ProcessingSystems (NIPS) , 2014. 2, 4Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based imagedescription evaluation. In Proceedings of IEEE Conference on Computer Vision and PatternRecognition (CVPR) , 2015. 7Subhashini Venugopalan, Marcus Rohrbach, Jeffrey Donahue, Raymond Mooney, Trevor Darrell,and Kate Saenko. Sequence to sequence-video to text. In Proceedings of IEEE Conference onComputer Vision and Pattern Recognition (CVPR) , pp. 4534–4542, 2015. 110Under review as a conference paper at ICLR 2017Oriol Vinyals and Quoc Le. A neural conversational model. arXiv preprint arXiv:1506.05869 , 2015.1, 8Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neuralimage caption generator. In Proceedings of IEEE Conference on Computer Vision and PatternRecognition (CVPR) , 2015. 1, 2, 7Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey,Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google’s neural machine trans-lation system: Bridging the gap between human and machine translation. arXiv preprintarXiv:1609.08144 , 2016. 5, 6, 7, 8, 13, 1411Under review as a conference paper at ICLR 2017APPENDIXSENSIVITY STUDIESNumber of Groups. Fig. 4 presents snapshots of the transition from BS to DBS at B= 6 andG=f1;3;6g. As beam width moves from 1 to G, the exploration of the method increases resultingin more diverse lists.Figure 4: Effect of increasing the number of groups G. The beams that belong to the same group are coloredsimilarly. Recall that diversity is only enforced across groups such that G= 1corresponds to classical BS.Diversity Strength. As noted in Section 5.1, our method is robust to a wide range of values of thediversity strength ( ). Fig. 5a shows a grid search of for image-captioning on the PASCAL-50Sdataset.Choice of Diversity Function. 
The diversity function can take various forms ranging from sim-ple hamming diversity to neural embedding based diversity. We discuss some forms for languagemodelling below:-Hamming Diversity. This form penalizes the selection of tokens used in previous groupsproportional to the number of times it was selected before.-Cumulative Diversity. Once two sequences have diverged sufficiently, it seems unnecessary andperhaps harmful to restrict that they cannot use the same words at the same time. To encodethis ‘backing-off’ of the diversity penalty we introduce cumulative diversity which keeps acount of identical words used at every time step, indicative of overall dissimilarity. Specifically,(Yh[t])[yg[t]] = expf(P2tPb2B0I[yhb;6=ygb;])=gwhere is a temperature parameter control-ling the strength of the cumulative diversity term and I[]is the indicator function.-n-gram Diversity. The current group is penalized for producing the same n-grams as previousgroups, regardless of alignment in time – similar to Gimpel et al. (2013). This is proportional tothe number of times each n-gram in a candidate occurred in previous groups. Unlike hammingdiversity, n-grams capture higher order structures in the sequences.-Neural-embedding Diversity. While all the previous diversity functions discussed above performexact matches, neural embeddings such as word2vec (Mikolov et al., 2013) can penalize semanti-cally similar words like synonyms. This is incorporated in each of the previous diversity functionsby replacing the hamming similarity with a soft version obtained by computing the cosine simi-larity between word2vec representations. When using with n-gram diversity, the representation ofthe n-gram is obtained by summing the vectors of the constituent words.Each of these various forms encode different notions of diversity. Hamming diversity ensures dif-ferent words are used at different times, but can be circumvented by small changes in sequencealignment. While n-gram diversity captures higher order statistics, it ignores sentence alignment.Neural-embedding based encodings can be seen as a semantic blurring of either the hamming orn-gram metrics, with word2vec representation similarity propagating diversity penalties not only toexact matches but also to close synonyms. Fig. 5b shows the oracle performace of various forms ofthe diversity function described in Section 5.1. We find that using any of the above functions helpoutperform BS in the tasks we examine; hamming diversity achieves the best oracle performancedespite its simplicity.IMAGE CAPTIONING EVALUATIONWhile we report oracle SPICE values in the paper, our method consistently outperforms base-lines and classical BS on other standard metrics such as CIDEr (Table 4), METEOR (Table 5) andROUGE (Table 6). We provide these additional results in this section.12Under review as a conference paper at ICLR 2017(a) Grid search of diversity strength parameter (b) Effect of multiple forms for the diversity functionFigure 5: Fig. 5a shows the results of a grid search of the diversity strength ( ) parameter of DBS on thevalidation split of PASCAL 50S dataset. We observe that it is robust for a wide range of values. Fig. 5bcompares the performance of multiple forms for the diversity function ( ). 
While naïve diversity performs thebest, other forms are comparable while being better than BS.Table 4: CIDEr Oracle accuracy on COCO and PASCAL-50S datasets for image captioning at B= 20 .Dataset Method Oracle Accuracy (CIDEr)@1 @5 @10 @20Beam Search 53.79 83.94 96.70 107.63Li & Jurafsky (2016) 54.61 85.21 99.80 110.64PASCAL-50S DBS 57.82 89.38 103.75 113.43Wu et al. (2016) 47.77 72.12 84.64 105.66Li et al. (2015) 49.80 81.35 96.87 107.37Beam Search 87.27 121.74 133.46 140.98Li & Jurafsky (2016) 91.42 111.33 116.94 119.14COCO DBS 86.88 123.38 135.68 142.88Wu et al. (2016) 87.54 122.06 133.21 139.43Li et al. (2015) 88.18 124.20 138.65 150.06Table 5: METEOR Oracle accuracy on COCO and PASCAL-50S datasets for image captioning at B= 20 .Dataset Method Oracle Accuracy (METEOR)@1 @5 @10 @20Beam Search 12.24 16.74 19.14 21.22Li & Jurafsky (2016) 13.52 17.65 19.91 21.76PASCAL-50S DBS 13.71 18.45 20.67 22.83Wu et al. (2016) 13.34 17.20 18.98 21.13Li et al. (2015) 13.04 17.92 19.73 22.32Beam Search 24.81 28.56 30.59 31.87Li & Jurafsky (2016) 24.88 29.10 31.44 33.56COCO DBS 25.04 29.67 33.25 35.42Wu et al. (2016) 24.82 28.92 31.53 34.14Li et al. (2015) 24.93 30.11 32.34 34.88Modified SPICE evaluation. To measure both the quality and the diversity of the generated cap-tions, we compute SPICE-score by comparing the graph union of all the generated hypotheses withthe ground truth scene graph. This measure rewards all the relevant relations decoded as against ora-cle accuracy that compares to relevant relations present only in the top-scoring caption. We observethat DBS outperforms both baselines under this measure with a score of 18.345 as against a score of16.988 (beam search) and 17.452 (Li & Jurafsky, 2016).13Under review as a conference paper at ICLR 2017Table 6: ROUGE Oracle accuracy on COCO and PASCAL-50S datasets for image captioning at B= 20 .Dataset Method Oracle Accuracy (ROUGE-L)@1 @5 @10 @20Beam Search 45.23 56.12 59.61 62.04Li & Jurafsky (2016) 46.21 56.17 60.15 62.95PASCAL-50S DBS 46.24 56.90 60.35 63.02Wu et al. (2016) 43.73 52.29 56.49 61.65Li et al. (2015) 44.12 54.67 57.34 60.11Beam Search 52.46 58.43 62.56 65.14Li & Jurafsky (2016) 52.87 59.89 63.45 65.42COCO DBS 53.04 60.89 64.24 67.72Wu et al. (2016) 52.13 58.26 62.89 65.77Li et al. (2015) 53.10 59.32 63.04 66.19HUMAN STUDIESFor image-captioning, we conduct a human preference study between BS and DBS captions asexplained in Section 5. A screen shot of the interface used to collect human preferences for captionsgenerated using DBS and BS is presented in Fig. 6. The lists were shuffled to guard the task frombeing gamed by a turker.Table 7: Frequency table for image difficulty and human preference for DBS captions on PASCAL50S datasetdifficulty score # images % images DBSbin range was preffered 481 50.51%[;+] 409 69.92%+ 110 83.63%As mentioned in Section 5, we observe that difficulty score of an image and human preference forDBS captions are positively correlated. The dataset contains more images that are less difficultyand so, we analyze the correlation by dividing the data into three bins. For each bin, we report the% of images for which DBS captions were preferred after a majority vote ( i.e. at least 3/5 turkersvoted in favor of DBS) in Table 7. At low difficulty scores consisting mostly of iconic images – onemight expect that BS would be preferred more often than chance. However, mismatch between thestatistics of the training and testing data results in a better performance of DBS. Some examples forthis case are provided in Fig. 
7. More general qualitative examples are provided in Fig. 8.

DISCUSSION

Are longer sentences better? Many recent works propose a scoring or ranking objective that depends on the sequence length. These favor longer sequences, reasoning that they tend to contain more detail and so yield improved accuracies. We measure the correlation between the length of a sequence and its accuracy (here, SPICE) and observe an insignificant correlation between the two. On the PASCAL-50S dataset, we find that BS and DBS are negatively correlated (correlation coefficients of -0.003 and -0.015, respectively), while Li & Jurafsky (2016) is positively correlated (0.002). Length is not correlated with performance in this case.

Efficient utilization of beam budget. In this experiment, we empirically show that DBS makes efficient use of the beam budget in exploring the search space for better solutions. Fig. 9 shows the variation of oracle SPICE (@B) with the beam size. At very high beam widths, all decoding techniques achieve similar oracle accuracies. However, diverse decoding techniques like DBS achieve the same oracle at much lower beam widths. Hence, DBS not only produces sequence lists that are significantly different but also efficiently utilizes the beam budget to decode better solutions.

Figure 6: Screen-shot of the interface used to perform human studies.
Figure 7: For images with a low difficulty score, BS captions are preferred to DBS, as shown in the first figure. However, we observe that DBS captions perform better when there is a mismatch between the statistics of the testing and training sets. Interesting captions are colored in blue for readability.
Figure 8: For images with a high difficulty score, captions produced by DBS are preferred to BS. Interesting captions are colored in blue for readability.
Figure 9: (a) Oracle SPICE (@B) vs. B; (b) Oracle METEOR (@B) vs. B. As the number of beams increases, all decoding methods tend to achieve about the same oracle accuracy. However, diverse decoding techniques like DBS utilize the beam budget efficiently, achieving higher oracle accuracies at much lower beam budgets.
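The appendix above describes the diversity terms only in prose, so the following is a minimal sketch of the simplest one, hamming diversity, in Python with NumPy. The function names and the way the penalty is coupled to the log-probabilities (subtracting lam times a re-use count) are illustrative assumptions, not the authors' reference implementation.

    import numpy as np

    def hamming_penalty(prev_group_tokens, vocab_size):
        # Count how often each vocabulary token was emitted by earlier groups
        # at the current time step; re-used tokens accumulate larger counts.
        counts = np.zeros(vocab_size)
        for tok in prev_group_tokens:
            counts[tok] += 1
        return counts

    def diversity_augmented_scores(log_probs, prev_group_tokens, lam):
        # log_probs: array of shape [beam_width, vocab_size] for the current group.
        # Subtracting lam * count discourages tokens already chosen by other groups.
        penalty = hamming_penalty(prev_group_tokens, log_probs.shape[1])
        return log_probs - lam * penalty

Here lam plays the role of the diversity strength discussed above; the n-gram and neural-embedding variants would replace the exact token counts with n-gram counts or word2vec cosine similarities.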
ryVUd0MNx
HkSOlP9lg
ICLR.cc/2017/conference/-/paper353/official/review
{"title": "This paper is interesting but I remain some concerns regarding the author's response. ", "rating": "5: Marginally below acceptance threshold", "review": "This paper proposes the RIMs that unrolls variational inference procedure. \n\nThe author claims that the novelty lies in the separation of the model and inference procedure, making the MAP inference as an end-to-end approach. The effectiveness is shown in image restoration experiments.\n\nWhile unrolling the inference is not new, the author does raise an interesting perspective towards the `model-free' configuration, where model and inference are not separable and can be learnt jointly. \n\nHowever I do not quite agree the authors' argument regarding [1] and [2]. Although both [1] and [2] have pre-defined MAP inference problem. It is not necessarily that a separate step is required. In fact, both do not have either a pre-defined prior model or an explicit prior evaluation step as shown in Fig. 1(a). I believe that the implementation of both follows the same procedure as the proposed, that could be explained through Fig. 1(c). That is to say, the whole inference procedure eventually becomes a learnable neural network and the energy is implicitly defined through learning the parameters. \n\nMoreover, the RNN block architecture (GRU) and non-linearity (tanh) restrict the flexibility and implicitly form the inherent family of variational energy and inference algorithm. This is also similar with [1] and [2].\n\nBased on that fact, I have the similar feeling with R1 that the novelty is somewhat limited. Also some discussions should be added in terms of the architecture and nonlinearity that you have chosen. ", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Recurrent Inference Machines for Solving Inverse Problems
["Patrick Putzky", "Max Welling"]
Inverse problems are typically solved by first defining a model and then choosing an inference procedure. With this separation of modeling from inference, inverse problems can be framed in a modular way. For example, variational inference can be applied to a broad class of models. The modularity, however, typically goes away after model parameters have been trained under a chosen inference procedure. During training, model and inference often interact in a way that the model parameters will ultimately be adapted to the chosen inference procedure, posing the two components inseparable after training. But if model and inference become inseparable after training, why separate them in the first place? We propose a novel learning framework which abandons the dichotomy between model and inference. Instead, we introduce Recurrent Inference Machines (RIM), a class of recurrent neural networks (RNN) that directly learn to solve inverse problems. We demonstrate the effectiveness of RIMs in experiments on various image reconstruction tasks. We show empirically that RIMs exhibit the desirable convergence behavior of classical inference procedures, and that they can outperform state-of-the-art methods when trained on specialized inference tasks. Our approach bridges the gap between inverse problems and deep learning, providing a framework for fast progression in the field of inverse problems.
["Optimization", "Deep learning", "Computer vision"]
https://openreview.net/forum?id=HkSOlP9lg
https://openreview.net/pdf?id=HkSOlP9lg
https://openreview.net/forum?id=HkSOlP9lg&noteId=ryVUd0MNx
Under review as a conference paper at ICLR 2017RECURRENT INFERENCE MACHINESFOR SOLVING INVERSE PROBLEMSPatrick Putzky & Max WellingInformatics InstituteUniversity of Amsterdamfpputzky,m.welling g@uva.nlABSTRACTInverse problems are typically solved by first defining a model and then choosingan inference procedure. With this separation of modeling from inference, inverseproblems can be framed in a modular way. For example, variational inferencecan be applied to a broad class of models. The modularity, however, typicallygoes away after model parameters have been trained under a chosen inferenceprocedure. During training, model and inference often interact in a way that themodel parameters will ultimately be adapted to the chosen inference procedure,posing the two components inseparable after training. But if model and inferencebecome inseperable after training, why separate them in the first place?We propose a novel learning framework which abandons the dichotomy betweenmodel and inference. Instead, we introduce Recurrent Inference Machines (RIM) ,a class of recurrent neural networks (RNN), that directly learn to solve inverseproblems.We demonstrate the effectiveness of RIMs in experiments on various image recon-struction tasks. We show empirically that RIMs exhibit the desirable convergencebehavior of classical inference procedures, and that they can outperform state-of-the-art methods when trained on specialized inference tasks.Our approach bridges the gap between inverse problems and deep learning, pro-viding a framework for fast progression in the field of inverse problems.1 I NTRODUCTIONInverse Problems are a broad class of problems which can be encountered in all scientific disciplines,from the natural sciences to engineering. The task in inverse problems is to reconstruct a signalfrom observations that are subject to a known (or inferred) corruption process known as the forwardmodel. A typical example of an inverse problem is the linear measurement problemy=Ax+n; (1)where xis the signal of interest, Ais anmdcorruption matrix, nis an additive noise vector,andyis the actual measurement. If Ais a wide matrix such that md, this problem is typicallyill-posed. Many signal reconstruction problems can be phrased in terms of the linear measurementproblem such as image denoising, super-resolution, deconvolution and so on. The general form ofAtypically defines the problem class. If Ais an identity matrix the problem is a denoising problem,while in tomography Arepresents a Fourier transform and a consecutive sub-sampling of the Fouriercoefficients.Inverse problems are often formulated as an optimization problem of the formminxd(y;Ax) +R(x); (2)whered(y;Ax)is the data fidelity term that enforces xto satisfy the observations y, andR(x)is aregularization term which restricts the solution to comply with a predefined model over x.The difficulties that arise in this framework are two-fold: (1) it is difficult to choose R(x)such thatit is an appropriate model for complex signals such as natural images, and (2) even under a wellchosenR(x)the optimization procedure might become difficult.1Under review as a conference paper at ICLR 2017Compressed sensing approaches give up on a versatile R(x)in order to define a convex optimizationprocedure. The idea is that the signal xhas a sparse representation in some basis such that x= uand that the optimization problem can be rephrased asminud(y;Au) +kuk1; (3)wherekk1is the sparsity inducing L1-norm (Donoho, 2006a). 
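The paper does not commit to a particular solver for the sparse objective in equation (3), but for a quadratic data term a standard choice is ISTA (iterative shrinkage-thresholding). The sketch below, with the basis taken to be the identity for brevity, is only meant to make the objective concrete; it is not part of the proposed method.

    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(y, A, lam, n_steps=200):
        # Minimizes 0.5 * ||y - A u||^2 + lam * ||u||_1 (equation (3) with an identity basis).
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
        u = np.zeros(A.shape[1])
        for _ in range(n_steps):
            grad = A.T @ (A @ u - y)           # gradient of the data-fidelity term
            u = soft_threshold(u - grad / L, lam / L)
        return u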
Under certain classes of d(y;Au)such as quadratic errors the optimization problem becomes convex. Results from the compressedsensing literature offer provable bounds on the reconstruction performance for sparse signals of thisform (Cand `es et al., 2006; Donoho, 2006b). The basis can also be learned from data (Aharonet al., 2006; Elad & Aharon, 2006).Other approaches interpret equation (2) in terms of probabilities such that finding the solution is amatter of performing maximum a posteriori (MAP) estimation (Figueiredo et al., 2007). In thosecasesd(y;Au)takes the form of a log-likelihood and R(x)takes the form of a parametric log-prior logp(x)over variable xsuch that the minimization becomes:maxxlogp(yjA;x) + logp(x): (4)This allows for more expressiveness of R(x)and for the possibility of learning the prior p(x)fromdata. However, with more expressive priors optimization will become more difficult as well. In fact,only for a few trivial prior-likelihood pairs will inference remain convex. In practice one often hasto resort to approximations of the objective and to approximate double-loop algorithms in order toallow for scalable inference (Nickisch & Seeger, 2009; Zoran & Weiss, 2011).In this work we take a radically different approach to solving inverse problems. We move awayfrom the idea that it is beneficial to separate learning a prior (regularizer) from the optimization todo the reconstruction. The usual thinking is that this separation allows for greater modularity andthe possibility to interchange one of these two complementary components in order to build newalgorithms. In practice however, we observe that the optimization procedure almost always has tobe adapted to the model choice to achieve good performance (Aharon et al., 2006; Elad & Aharon,2006; Nickisch & Seeger, 2009; Zoran & Weiss, 2011). In fact, it is well known that the optimizationprocedure used for training should match the one used during testing because the model has adapteditself to perform well under that optimization procedure (Kumar et al., 2005; Wainwright, 2006).What we need is a single framework which allows us to backpropagate through the optimizationprocedure when we learn the free parameters. Hence, We propose to look at inverse problems as adirect mapping from observations to estimated signal,^x=f(A;y) (5)where ^xis an estimate of signal xfrom observations (A;y). Here we define as a set of learnableparameters which define the inference algorithm as well as constraints on x. The goal is thus todefine map whose parameters are directly optimized for solving the inverse problem itself. It has thebenefits of both having high expressive power (if the map fis complex enough) as well as beingfast at inference time.This paradigm shift allows us to learn and combine the effect of a prior, the reconstruction fidelityand an inference method without the need to explicitly define the functional form of all components.The whole procedure is simply interpreted as a single RNN. As a result, there is no need for sparsityassumptions, the introduction of model constraints to allow for convexity, or even for double-loopalgorithms (Gregor & LeCun, 2010). In fact the proposed framework allows for use of current deeplearning approaches which have high expressive power without trading off scalability. It furtherallows us to move all the manual parameter tuning - which is still common in traditional approaches(Zoran & Weiss, 2011) - away from the inference phase and into the learning phase. 
We believe thisframework can be an important asset to introduce deep learning into the domain of inverse problems.2Under review as a conference paper at ICLR 2017Figure 1: (A)Graphical illustration of the recurrent structure of MAP estimation (compare equation(6)). The three boxes represent likelihood model p(yjx)(Aomitted), prior p(x), and updatefunction , respectively. In each iteration, likelihood and prior collect the current estimate of x,to send a gradient to update function (see text). then produces a new estimate of x. Typically,priorp(x)and update function are modeled as two distinct model components. Here they areboth depicted in gray boxes because they each represent model internal information which we wishto be transferable between different observations, i.e. they are observation independent. Likelihoodtermp(yjx)is depicted in blue to emphasize it as a model extrinsic term, some aspects of thelikelihood term can change from one observation to the other (such as matrix A). The likelihoodterm is observation-dependent. (B)Model simplification. The central insight of this work is to mergepriorp(x)and update function into one model with trainable parameters . The model theniteratively produces new estimates through feedback from likelihood model p(yjx)and previousupdates. (C)A Recurrent Inference Machine unrolled in time. Here we have added an additionalstate variable which represents information that is carried over time, but is not directly subjectedto constraints through the likelihood term p(yjx). During training, estimates at each time step aresubject to an error signal from the ground truth signal x(dashed two-sided arrows) in order toperform backpropagation. The intermittent error signal will force the model to perform well as soonas possible during iterations. At test time, there is no error signal from x.2 R ECURRENT INFERENCE MACHINESThe goal of this work is to find an inverse model as described in equation (5). Often, however, itwill be intractable to find (5) directly, even with modern non-linear function approximators. Forhigh-dimensional yandx, which are typically considered in inverse problems, it will simply notbe possible to fit matrix Ainto memory explicitly, but instead matrix Awill be replaced by anoperator that acts on x. An example is the Discrete Fourier Transform (DFT). Instead of using aFourier matrix which is quadratic in the size of x, DFTs are typically performed using the FastFourier Transform (FFT) algorithm which reduces computational cost and memory consumptionsignificantly. The use of operators, however, does not allow us to feed Ainto (5) anymore, butinstead we will have to resort to an iterative approach that alternates between updates of xand3Under review as a conference paper at ICLR 2017evaluation of Ax. This is precisely what is typically done in gradient-based inference methods, andwe will motivate our framework from there.2.1 G RADIENT -BASED INFERENCERecall from equation (4) that inverse problems can be interpreted in terms of probability such thatoptimization is an iterative approach to MAP inference. In its most simple form each consecutiveestimate of xis then computed through a recursive function of the formxt+1=xt+trlogp(yjA;x) + logp(x)(xt) (6)where we make use of the fact that p(xjA;y)/p(yjA;x)p(x)andtis the step size or learningrate at iteration t. Further, Ais a (partially-)observable covariate, p(yjA;x)is the likelihood func-tion for a given inference problem, and p(x)is a prior over signal x. 
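To make the recursion in equation (6) concrete, here is a plain gradient-ascent MAP sketch in Python/NumPy for the linear measurement model with Gaussian noise. The zero-mean Gaussian prior and the fixed step size are illustrative assumptions; the explicit prior and hand-set schedule shown here are exactly what RIMs replace with a learned update.

    import numpy as np

    def map_gradient_ascent(y, A, sigma2=1.0, tau2=1.0, step=0.1, n_steps=100):
        # x_{t+1} = x_t + step * (grad log p(y|A,x) + grad log p(x)), equation (6)
        x = np.zeros(A.shape[1])
        for _ in range(n_steps):
            grad_lik = A.T @ (y - A @ x) / sigma2   # gradient of the Gaussian log-likelihood
            grad_prior = -x / tau2                  # gradient of an assumed Gaussian log-prior
            x = x + step * (grad_lik + grad_prior)
        return x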
In many cases where eitherthe likelihood term or the prior term deviate from standard models, optimization will not be convex.In constrast, the approach presented in this work is completely freed from ideas about convexity, aswill be shown in the next section.2.2 R ECURRENT FUNCTION DEFINITIONThe central insight of this work is that update equation (6) can be generalized such thatxt+1=xt+g(ryjx;xt) (7)where we denoterlogp(yjA;x)(xt)byryjxfor readability and is a set of learnable parametersthat govern the updates of x. In this representation, prior parameters and learning rate parametershave been merged into one set of trainable parameters .To recover the original update equation (6), g(ryjx;xt)is written asg(ryjx;xt) =tryjx+rx(8)where we make use of rxto denoterlogp(x)(xt). It will be useful to dissect the terms on theright-hand side of (8) to make sense of the usefulness of the modification.First notice, that in equation (6) we never explicitly evaluate the prior, but only evaluate its gradientin order to perform updates. If never used, learning a prior appears to be unnecessary, and insteadit appears more reasonable to directly learn a gradient function rx=f(xt)2Rd. The advantageof working solely with gradients is that they do no require the evaluation of an (often) intractablenormalization constant of p(x).A second observation is that the step sizes tare usually subject to either a chosen schedule orchosen through a deterministic algorithm such as a line search. That means the step sizes are alwayschosen according to a predefined model . Interestingly, this model is usually not learned. In orderto make inference faster and improve performance we suggest to learn the model as well.In (7) we have made the prior p(x)and the the step size model implicit in function g(ryjx;t).We explicitly keep ryjxas an input to (7) because - as opposed to andp(x)- it representsextrinsic information that is injected into the model. It allows for changes in the likelihood modelp(yjx)without the need to retrain parameters of the inference model g. Figure 1 gives a visualsummary of the insights from this section.2.3 O UTPUT CONSTRAINTSIn many problem domains the range of values for variable xis naturally constraint. For example,images typically have pixels with strictly positive values. In order to model this constraint we makeuse of nonlinear link functions as they are typically used in neural networks, such thatx= ( ) (9)where ()is any differentiable link function and is the space in which RIMs iterate such thatupdate equation (7) is replaced byt+1=t+g(ryj;t) (10)As a result xcan be constraint to a certain range of values through (), whereas iterations areperformed in the unconstrained space of 4Under review as a conference paper at ICLR 20172.4 R ECURRENT NETWORKSA useful extension of (7) is to introduce a latent state variable stinto the procedure. This latentvariable is typically used as a utility in recurrent neural networks to learn temporal dependencies indata processing. With an additional latent variable the update equations becomet+1=t+hryj;t;st+1(11)st+1=hryj;t;st(12)whereh()is the update model for state variable s. The variable swill allow the procedure to havememory in order to track progression, curvature, approximate a preconditioning matrix Tt(suchas in BFGS) and determine a stopping criterion among other things. 
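The update equations (9)-(12) can be written as a short unrolled loop. In the sketch below, rnn_step is a hypothetical callable standing in for the learned recurrent update (the paper uses a convolutional GRU); the sigmoid link and the Gaussian likelihood gradient follow the equations above, and the toy stand-in at the end exists only so the function can be run.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def rim_inference(y, A, rnn_step, eta0, s0, sigma2=1.0, n_steps=20):
        # Unrolled RIM iterations: eta_{t+1} = eta_t + g(grad, eta_t, s_{t+1}) and
        # s_{t+1} = g_s(grad, eta_t, s_t), with x = sigmoid(eta) as the link, eq. (9).
        eta, s = eta0.copy(), s0
        estimates = []
        for _ in range(n_steps):
            x = sigmoid(eta)
            grad_x = A.T @ (y - A @ x) / sigma2     # likelihood gradient w.r.t. x
            grad_eta = grad_x * x * (1.0 - x)       # chain rule through the sigmoid link
            delta, s = rnn_step(grad_eta, eta, s)   # learned update and state, eqs. (11)-(12)
            eta = eta + delta
            estimates.append(sigmoid(eta))
        return estimates

    # Toy stand-in for the learned update: a small gradient step that ignores the state.
    # (The paper additionally stabilizes 1/sigma^2 with a learned softplus parameter, eq. (15).)
    toy_step = lambda grad, eta, s: (0.5 * grad, s)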
The concept of a temporalmemory is quite limited in classical inference methods, which will allow RIMs to have a potentialadvantage over these methods.2.5 T RAININGIn order to learn a step-wise inference procedure it will be necessary to simulate the inference stepsduring training. I.e. during training, an RIM will perform a number of inference steps T. At eachstep the model will produce a prediction as depicted in figure Figure 1. Each of those predictions isthen subject to a loss, which encourages the model to produce predictions that improve over time. Init’s simplest form we can define a loss which is simply a weighted sum of the individual predictionlosses at each time step such thatLtotal() =TXt=1wtL(xt();x) (13)is the total loss. Here, L()is a base loss function such as the mean square error, wtis a positivescalar and xt()is a prediction at time t. In this work we follow Andrychowicz et al. (2016) insettingwt= 1for all time steps.3 R ELATED WORKThe RIM framework can be seen as an auto-encoder framework in which only the decoder is trained,whereas the encoder is given by a known corruption process. In terms of the training procedure thismakes RIMs very similar to denoising auto-encoders (Vincent et al., 2008). Though initially withthe objective of regularization in mind, denoising auto-encoders have been shown to be effectivelyused as generative models (Vincent et al., 2010). The difference of RIMs to denoising auto-encodersand also more recently developed auto-encoders such as Kingma & Welling (2014); Rezende et al.(2014) is that RIMs enforce coupling between encoder and decoder both, during training and testtime. In it’s typical form, decoder and encoder of an auto-encoder are only coupled during trainingtime, while there is no information flow during test time (Kingma & Welling, 2014; Rezende et al.,2014; Vincent et al., 2008; 2010). An exception is the work from Gregor et al. (2016) which isconceptually strongly related to RIMs. There, an RNN model is used to generate static data bydrawing on a fixed canvas. An error signal is propagated throughout the generation process.There have been approaches in the past which aim to formulate a framework in which an inferenceprocedure is learned. One of the best known frameworks is LISTA (Gregor & LeCun, 2010) whichaims to learn a model that reconstructs sparse codes from data. LISTA models try to fit into theclassical framework of doing inference as described in 1, whereas RIMs are completely removedfrom assumptions about sparsity. A recent paper by Andrychowicz et al. (2016) aims to train RNNsas optimizers for non-convex optimization problems. Though introduced with a different intention,RIMs can be seen as a generalization of this approach, in which the model - in addition to thegradient information - is aware about the absolute position of a prediction in variable space(seeequation (7)).4 E XPERIMENTAL RESULTSWe evaluate our method on various kinds of image restoration tasks which can each be formulated interms of linear measurement problems as described in equation (1). We first analyze the properties5Under review as a conference paper at ICLR 2017of our proposed method on a set of restoration tasks from random projections. Later we compareour model on two well known image restoration tasks: image denoising and image super-resolution.4.1 M ODELSIf not specified otherwise we use the same RNN architecture for all experiments presented in thiswork. 
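The training objective in equation (13) above is just a weighted sum of per-step losses over the unrolled predictions. A minimal sketch, reusing the estimates list produced by the loop sketched earlier and assuming a mean-squared-error base loss with unit weights as in the paper:

    import numpy as np

    def rim_training_loss(estimates, x_true, weights=None):
        # L_total = sum_t w_t * L(x_t, x), equation (13); w_t = 1 by default.
        if weights is None:
            weights = np.ones(len(estimates))
        return sum(w * np.mean((x_t - x_true) ** 2)
                   for w, x_t in zip(weights, estimates))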
The chosen RNN consists of three convolutional hidden layers and a final convolutional outputlayer. All convolutional filters were chosen to be of size 3 x 3 pixels. The first hidden layer consistsof convolutions with stride 2 (64 features), subsequent batch normalization and a tanh nonlinearity.The second hidden layer represents the RNN part of the model. We chose a gated recurrent unit(GRU) (Chung et al., 2014) with 256 features. The third hidden layer is a transpose convolutionlayer with 64 features which aims to recover the original image dimensions of the signal, followedagain by a batch normalization layer and a tanh nonlinearity. All models have been trained on afixed number of iterations of 20 steps. All methods were implemented in Tensorflow1.4.2 D ATAAll experiments were run on the BSD-300 data set (Martin et al., 2001)2. For training we extractedpatches of size 32 x 32 pixels with stride 4 from the 200 training images available in the dataset. In total this amounts to a data set of about 400 thousand image patches with highly redundantinformation. All models were trained over only two epochs, i.e. each unique image patch was seenby a model only twice during training. Validation was performed on a held-out data set of 1000image patches.For testing we either used the whole test set of 100 images from BSDS-300 or we used only a subsetof 68 images which was introduced by Roth & Black (2005) and which is commonly used in theimage restoration community3.4.3 I MAGE RESTORATIONAll tasks addressed in this work assume a linear measurement problem of the form as described inequation (1) with additive (isotropic) Gaussian noise. In this case the gradient of the likelihood takesthe formryjx=12AT(yAx) (14)where2is the noise variance. For very small this gradient diverges. In order to make the gradientmore stable also for small we chose to rewrite it asryjx=12+AT(yAx) (15)where=softplus ()andis a trainable parameter. As a link function (see (9)) we chose thelogistic sigmoid nonlinearity4and we used the mean square error as training loss.4.4 M ULTI -TASK LEARNING WITH RANDOM PROJECTIONSTo analyze the properties of our proposed framework in terms of convergence and to test whether allcomponents of the model are useful, we first trained the model to reconstruct image patches fromnoisy random projections of grayscale image patches. We consider three types of random projectionmatrices: (1) Gaussian ensembles with elements drawn from a standard normal distribution, (2)binary ensembles with entries of values f1;1gdrawn from a Bernulli distribution with p= 0:5,and (3) Fourier ensembles with randomly sampled rows from a Fourier matrix (see Donoho (2006b)).We trained three models on these tasks: (1) a Recurrent Inference Machine (RIM) as described in 2,(2) a gradient-descent network (GDN) which does not use the current estimate as an input (compare1https://www.tensorflow.org2https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/3http://www.visinf.tu-darmstadt.de/vi research/code/foe.en.jsp4All training data was rescaled to be in the range [0;1]6Under review as a conference paper at ICLR 20175152535PNSR (dB), p = 0.1Gaussian RP Binary RP Fourier RP0 10 20 30 40 50Steps5152535PNSR (dB), p = 0.40 10 20 30 40 50Steps0 10 20 30 40 50StepsRIMGDNFFNFigure 2: Reconstruction performance over time on random projections. Shown are results of thethree reconstruction tasks from random projections (see text) on 5000 random patches from theBSD-300 test set. 
Value of p represent the the reduction in dimensionality through the random pro-jection. Noise standard deviation was chosen to be = 1. Solid lines correspond to the mean peaksignal-to-noise-ration (PSNR) over time, and shaded areas correspond to one standard deviationaround the mean. Vertical dashed lines mark the last time step that was used during training.Andrychowicz et al. (2016)), and (3) a feed-forward network (FFN) which uses the same inputs asthe RIM but where we replaced the GRU unit with a ReLu layer in order to remove state-dependence.Model (2) and (3) are simplifications of RIM in order to test the influence of each of the removedmodel components on prediction performance.Figure 2 shows the reconstruction performance of all three models on random projections. In alltasks the RIM clearly outperforms both other models, showing overall consistent convergence be-havior. The FFN performs well on easier tasks but starts to show degrading performance over timeon more difficult tasks. This suggests that the state information of RIM plays an important roleon the convergence behavior as well as overall performance. The GDN shows worst performanceamong all three models. For all tasks, the performance of GDN starts to degrade clearly after the 20time steps that were used during training. We hypothesize that the model is able to compensate someof the missing information about the current estimate of xthrough state variable sduring training,but the model is not able to transfer this ability to episodes with more iterations.These results suggests that both the current estimate as well as the recurrent state carry useful in-formation for performing inference. We will therefor only consider fully fledged RIMs from hereon.4.5 I MAGE DENOISINGAfter evaluating our model on 32 x 32 pixel image patches we wanted to see how reconstruc-tion performance generalizes to full sized images and to an out of domain problem. We choseto reuse the RIM that was trained on the random projections task to perform image denoising. Inthis section we will call this model RIM-3task. To test the hypothesis that inference should betrained task specific, we further trained a model RIM-denoise solely on the denoising task. Ta-ble 2 shows the denoising performance through the mean PSNR on the BSD-300 test set for bothmodels as compared to state-of-the-art methods in image denoising. The RIM-3task model showsvery competitive results with other methods on all noise levels. This exemplifies that the modelindeed has learned something reminiscent of a prior, as it was never directly trained on this task.The RIM-denoise model further improves upon the performance of RIM-3task and it outperformsmost other methods on all noise levels. This is to say that the same RIM was used to performdenoising on different noise levels, and this model does not require any hand tuning after training.7Under review as a conference paper at ICLR 2017(a) Ground truth (b) Noisy image, 14.88dB(c) EPLL, 25.68dB (d) RIM, 25.91dBFigure 3: Denoising performance on example image use in Zoran & Weiss (2011). = 50 . Noisyimage was 8-bit quantized before reconstruction.Method PSNRCBM3D 30:18RTF-5 30:57RIM (ours) 30:84(30:67)Table 1: Color denoising. Denoisingperformance on the 68 images for =25after 8-bit quantization. Resultsfor RTF-5 (Schmidt et al., 2016) andCBM3D (Dabov et al., 2007b) adoptedfrom Schmidt et al. (2016). 
In paren-thesis are results for the full 100 testimages.Table 2 shows denoising perfomance on image that havebeen 8-bit quantized after adding noise(see Schmidt et al.(2016)). In this case performance slightly deteriorates forboth models, though still making competitive with state-of-the-art methods. This effect could possibly be accom-modated through further training, or by adjusting the for-ward model. Figure 3 gives some qualitative results onthe denoising performance for one of the test images fromBSD-300 as compared to the method from Zoran & Weiss(2011). RIM is able to produce more naturalistic imageswith less visible artifacts. The state variable in our RIMmodel allows for a growing receptive field size over time,which could explain the good long range interactions thatthe model shows.Many denoising algorithms are solely tested on gray-scaleimages. Sometimes this is due to additional difficultiesthat multi-channel problems bring for some inference approaches. To show that it is straightforwardto apply RIMs to multi-channel problems we trained a model to denoise RGB images. The denoisingperformance can be seen in table 1. The model is able to exploit correlations across color channelswhich allows for an additional boost in reconstruction performance.4.6 I MAGE SUPER -RESOLUTIONWe further tested our approach on the well known image super-resolution task. We trained a singleRIM5on 36 x 36 pixel image patches from the BSD-300 training set to perform image super-5The architecture of this model was slightly simplified in comparison to the previous problems. Instead ofstrided convolutions, we chose a trous convolutions. This model is more flexible and used only about 500:000parameters. Previous experiments will be updated with the same model architecture.8Under review as a conference paper at ICLR 2017Not Quantized 8-bit Quantized 15 25 50 15 25 50KSVD 30:87 28 :28 25 :175x5 FoE 30:99 28 :40 25 :35 28 :22BM3D 31:08 28 :56(28:35) 25:62(25:45) 28 :31LSSC 31:27 28 :70 25 :72 28 :23EPLL 31:19 28 :68(28:47) 25:67(25:50)opt-MRF 31:18 28 :66 25 :70MLP 28:85(28:75) (25 :83)RTF-5 28:75 28 :74RIM-3task 31:19(30:98) 28:67(28:45) 25:78(25:59) 31:06(30:88) 28:41(28:24) 24:86(24:73)RIM-denoise 31:31(31:10) 28:91(28:72) 26:06(25:88) 31:25(31:05) 28:76(28:58) 25:27(25:14)Table 2: Denoising performance on gray-scale images from BSD-300 test set. Shown are meanPSNR values for different noise values. Number outside of parenthesis correspond to test perfor-mance on the 68 test images from Roth & Black (2005), and numbers in parenthesis correspondto performance on all 100 test images from BSD-300. 68 image performance for KSVD (Elad &Aharon, 2006), FoE (Roth & Black, 2005), BM3D (Dabov et al., 2007a), LSSC (Mairal et al., 2009),EPLL (Zoran & Weiss, 2011), and opt-MRF (Chen et al., 2013) adopted from Chen et al. (2013).Performances on 100 images adopted from Burger et al. (2013). 68 image performance on MLP(Burger et al., 2012), RTF-5 (Schmidt et al., 2016) and all quantized results adopted from Schmidtet al. (2016).(a) Original Image (b) Bicubic: 30:43=0:8326 (c) SRCNN: 31:34=0:8660(d) A+: 31:43=0:8676 (e) SelfExSR: 31:18=0:8656 (f) RIM: 31:59=0:8712Figure 4: Super-resolution example with factor 3. Comparison with the same methods as in table 3.Reported numbers are PSNR/SSIM. Best results in bold.resolution for factors 2, 3, and 46. We followed the same testing protocol as in Huang et al. (2015),and we used the test images that were retrieved from their website7. 
Table 3 shows a comparisonwith some state-of-the-art methods on super-resolution for the BSD-300 test set. Figure 4 shows aqualitative example of super-resolution performance. The other deep learning method in this com-parison, SRCNN Dong et al. (2014), is outperformed by RIM on all scales. Interestingly SRCNNwas trained for each scale independently whereas we only trained one RIM for all scales. The cho-sen RIM has only about 500:000parameters which amounts to about 2MB of disk space, whichmakes this architecture very attractive also for mobile computing.6We reimplemented MATLABs bicubic interpolation kernel in order to apply a forward model (sub-sampling) in TensorFlow which agrees with the forward model in Huang et al. (2015).7https://sites.google.com/site/jbhuang0604/publications/struct sr9Under review as a conference paper at ICLR 2017Metric Scale Bicubic SRCNN A+ SelfExSR RIM (Ours)PSNR2x 29:550:35 31:110:39 31:220:40 31:180:39 31:390:393x 27:200:33 28:200:36 28:300:37 28:300:37 28:510:374x 25:960:33 26:700:34 26:820:35 26:850:36 27:010:35SSIM2x 0:84250:0078 0:88350:0062 0:88620:0063 0:88550:0064 0:88850:00623x 0:73820:0114 0:77940:0102 0:78360:0104 0:78430:0104 0:78880:01014x 0:66720:0131 0:70180:0125 0:70890:0125 0:71080:0124 0:71560:0125Table 3: Image super-resolution performance on RGB images from BSD-300 test set. Mean andstandard deviation (of the mean) of Peak Signal-to-Noise Ratio (PSNR) and Structural SimilarityIndex (SSIM) Wan (2004). Standard deviation of the mean was estimated from 10:000boostrapsamples. Test protocol and images taken from Huang et al. (2015). Only the three best performingmethods from Huang et al. (2015) were chosen for comparison: SRCNN Dong et al. (2014), A+Timofte et al. (2015), SelfExSR Huang et al. (2015). Best mean values in bold.5 D ISCUSSIONIn this work, we introduce a general learning framework for solving inverse problems with deeplearning approaches. We establish this framework by abandoning the traditional separation betweenmodel and inference. Instead, we propose to learn both components jointly without the need todefine their explicit functional form. This paradigm shift enables us to bridge the gap between thefields of deep learning and inverse problems. We believe that this framework can have a majorimpact on many inverse problems, for example in medical imaging and radio astronomy. Althoughwe have focused on linear image reconstruction tasks in this work, the framework can be applied toinverse problems of all kinds, such as non-linear inverse problems.ACKNOWLEDGMENTSThe research was funded by the DOME project (Astron & IBM) and the Netherlands Organizationfor Scientific Research (NWO). The authors are greatful for helpful comments from Thomas Kipf,Mijung Park, Rajat Thomas, and Karen Ullrich.REFERENCESImage quality assessment: form error visibility to structural similarity. IEEE Transactions on ImageProcessing , 13(4):600–612, 2004.Michal Aharon, Michael Elad, and Alfred Bruckstein. K-SVD: An algorithm for designing over-complete dictionaries for sparse representation. IEEE Transactions on Signal Processing , 54(11):4311–4322, nov 2006.Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, TomSchaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. jun2016.Harold Christopher Burger, Christian Schuler, and Stefan Harmeling. Image denoising: Can plainneural networks compete with BM3D? In IEEE Conference on Computer Vision and PatternRecognition , pp. 2392–2399. 
IEEE, jun 2012.Harold Christopher Burger, Christian J. Schuler, and Stefan Harmeling. Learning how to combineinternal and external denoising methods. In Joachim Weickert, Matthias Hein, and Bernt Schiele(eds.), GCPR , volume 8142 of Lecture Notes in Computer Science , pp. 121–130. Springer, 2013.Emmanuel J. Cand `es, Justin K. Romberg, and Terence Tao. Stable signal recovery from incompleteand inaccurate measurements. Communications on Pure and Applied Mathematics , 59(8):1207–1223, aug 2006.10Under review as a conference paper at ICLR 2017Yunjin Chen, Thomas Pock, Ren ́e Ranftl, and Horst Bischof. Revisiting Loss-Specific Training ofFilter-Based MRFs for Image Restoration. In 35th German Conference on Pattern Recognition(GCPR) , pp. 271–281, 2013.Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical Evaluation ofGated Recurrent Neural Networks on Sequence Modeling. dec 2014.K. Dabov, A. Foi, V . Katkovnik, and K. Egiazarian. Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering. IEEE Transactions on Image Processing , 16(8):2080–2095, aug2007a.Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian. Color Image Denois-ing via Sparse 3D Collaborative Filtering with Grouping Constraint in Luminance-ChrominanceSpace. In 2007 IEEE International Conference on Image Processing , pp. I – 313–I – 316. IEEE,sep 2007b.Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Learning a deep convolutionalnetwork for image super-resolution. ECCV , pp. 184–199, 2014.David L. Donoho. For most large underdetermined systems of linear equations the minimal L1-normsolution is also the sparsest solution. Communications on Pure and Applied Mathematics , 59(6):797–829, jun 2006a.D.L. Donoho. Compressed sensing. IEEE Transactions on Information Theory , 52(4):1289–1306,apr 2006b.Michael Elad and Michal Aharon. Image Denoising Via Sparse and Redundant RepresentationsOver Learned Dictionaries. IEEE Transactions on Image Processing , 15(12):3736–3745, dec2006.M ́ario A. T. Figueiredo, Robert D. Nowak, and Stephen J. Wright. Gradient Projection for SparseReconstruction: Application to Compressed Sensing and Other Inverse Problems. IEEE Journalof Selected Topics in Signal Processing , 1(4):586–597, dec 2007.Karol Gregor and Yann LeCun. Learning Fast Approximations of Sparse Coding. In Proceedingsof the 27th International Conference on Machine Learning (ICML-10) , pp. 399–406, 2010.Karol Gregor, Frederic Besse, Danilo Jimenez Rezende, Ivo Danihelka, and Daan Wierstra. TowardsConceptual Compression. apr 2016.Jia-Bin Huang, Abhishek Singh, and Narendra Ahuja. Single image super-resolution from trans-formed self-exemplars. In 2015 IEEE Conference on Computer Vision and Pattern Recognition(CVPR) , pp. 5197–5206. IEEE, jun 2015.Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In The 2nd InternationalConference on Learning Representations (ICLR) , 2014.Sanjiv Kumar, Jonas August, and Martial Hebert. Exploiting Inference for Approximate ParameterLearning in Discriminative Fields: An Empirical Study. In Proceedings of the 5th internationalconference on Energy Minimization Methods in Computer Vision and Pattern Recognition , pp.153–168. Springer-Verlag, 2005.Julien Mairal, Francis Bach, Jean Ponce, Guillermo Sapiro, and Andrew Zisserman. Non-localsparse models for image restoration. In 2009 IEEE 12th International Conference on ComputerVision , pp. 2272–2279. IEEE, sep 2009.David Martin, Charless Fowlkes, Doron Tal, and Jitendra Malik. 
A database of human segmentednatural images and its application to evaluating segmentation algorithms and measuring ecologicalstatistics. In Proc. 8th Int’l Conf. Computer Vision , volume 2, pp. 416–423, July 2001.Hannes Nickisch and Matthias W. Seeger. Convex variational Bayesian inference for large scalegeneralized linear models. In Proceedings of the 26th International Conference on MachineLearning , pp. 761–768, New York, New York, USA, jun 2009. ACM Press.11Under review as a conference paper at ICLR 2017D J Rezende, S Mohamed, and D Wierstra. Stochastic backpropagation and approximate inferencein deep generative models. In Proceedings of The 31st International Conference on MachineLearning , pp. 1278–1286, 2014.Stefan Roth and Michael J. Black. Fields of experts: A framework for learning image priors. In Pro-ceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition ,volume 2, pp. 860–867. IEEE, 2005.Uwe Schmidt, Jeremy Jancsary, Sebastian Nowozin, Stefan Roth, and Carsten Rother. Cascades ofregression tree fields for image restoration. IEEE Transactions on Pattern Analysis and MachineIntelligence , 38(4):677–689, 2016.Radu Timofte, Vincent de Smet, and Luc van Gool. A+: Adjusted anchored neighborhood regressionfor fast super-resolution. In ACCV , volume 9006, pp. 111–126. 2015.Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting andcomposing robust features with denoising autoencoders. In Proceedings of the 25th internationalconference on Machine learning , pp. 1096–1103, New York, New York, USA, 2008. ACM Press.Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol.Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with aLocal Denoising Criterion. The Journal of Machine Learning Research , 11:3371–3408, 2010.MJ Wainwright. Estimating the wrong graphical model: Benefits in the computation-limited setting.The Journal of Machine Learning Research , 2006.Daniel Zoran and Yair Weiss. From learning models of natural image patches to whole imagerestoration. In 2011 International Conference on Computer Vision , pp. 479–486. IEEE, nov 2011.12
BkUErU-Ex
HkSOlP9lg
ICLR.cc/2017/conference/-/paper353/official/review
{"title": "Interesting work", "rating": "7: Good paper, accept", "review": "This paper presents a method to learn both a model and inference procedure at the same time with recurrent neural networks in the context of inverse problems.\nThe proposed method is interesting and results are quite good. The paper is also nicely presented. \n\nI would be happy to see some discussion about what the network learns in practice about natural images in the case of denoising. What are the filters like? Is it particularly sensitive to different structures in images? edges? Also, what is the state in the recurrent unit used for? when are the gates open etc.\n\nNevertheless, I think this is nice work which should be accepted.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Recurrent Inference Machines for Solving Inverse Problems
["Patrick Putzky", "Max Welling"]
Inverse problems are typically solved by first defining a model and then choosing an inference procedure. With this separation of modeling from inference, inverse problems can be framed in a modular way. For example, variational inference can be applied to a broad class of models. The modularity, however, typically goes away after model parameters have been trained under a chosen inference procedure. During training, model and inference often interact in a way that the model parameters will ultimately be adapted to the chosen inference procedure, posing the two components inseparable after training. But if model and inference become inseparable after training, why separate them in the first place? We propose a novel learning framework which abandons the dichotomy between model and inference. Instead, we introduce Recurrent Inference Machines (RIM), a class of recurrent neural networks (RNN) that directly learn to solve inverse problems. We demonstrate the effectiveness of RIMs in experiments on various image reconstruction tasks. We show empirically that RIMs exhibit the desirable convergence behavior of classical inference procedures, and that they can outperform state-of-the-art methods when trained on specialized inference tasks. Our approach bridges the gap between inverse problems and deep learning, providing a framework for fast progression in the field of inverse problems.
["Optimization", "Deep learning", "Computer vision"]
https://openreview.net/forum?id=HkSOlP9lg
https://openreview.net/pdf?id=HkSOlP9lg
https://openreview.net/forum?id=HkSOlP9lg&noteId=BkUErU-Ex
Under review as a conference paper at ICLR 2017RECURRENT INFERENCE MACHINESFOR SOLVING INVERSE PROBLEMSPatrick Putzky & Max WellingInformatics InstituteUniversity of Amsterdamfpputzky,m.welling g@uva.nlABSTRACTInverse problems are typically solved by first defining a model and then choosingan inference procedure. With this separation of modeling from inference, inverseproblems can be framed in a modular way. For example, variational inferencecan be applied to a broad class of models. The modularity, however, typicallygoes away after model parameters have been trained under a chosen inferenceprocedure. During training, model and inference often interact in a way that themodel parameters will ultimately be adapted to the chosen inference procedure,posing the two components inseparable after training. But if model and inferencebecome inseperable after training, why separate them in the first place?We propose a novel learning framework which abandons the dichotomy betweenmodel and inference. Instead, we introduce Recurrent Inference Machines (RIM) ,a class of recurrent neural networks (RNN), that directly learn to solve inverseproblems.We demonstrate the effectiveness of RIMs in experiments on various image recon-struction tasks. We show empirically that RIMs exhibit the desirable convergencebehavior of classical inference procedures, and that they can outperform state-of-the-art methods when trained on specialized inference tasks.Our approach bridges the gap between inverse problems and deep learning, pro-viding a framework for fast progression in the field of inverse problems.1 I NTRODUCTIONInverse Problems are a broad class of problems which can be encountered in all scientific disciplines,from the natural sciences to engineering. The task in inverse problems is to reconstruct a signalfrom observations that are subject to a known (or inferred) corruption process known as the forwardmodel. A typical example of an inverse problem is the linear measurement problemy=Ax+n; (1)where xis the signal of interest, Ais anmdcorruption matrix, nis an additive noise vector,andyis the actual measurement. If Ais a wide matrix such that md, this problem is typicallyill-posed. Many signal reconstruction problems can be phrased in terms of the linear measurementproblem such as image denoising, super-resolution, deconvolution and so on. The general form ofAtypically defines the problem class. If Ais an identity matrix the problem is a denoising problem,while in tomography Arepresents a Fourier transform and a consecutive sub-sampling of the Fouriercoefficients.Inverse problems are often formulated as an optimization problem of the formminxd(y;Ax) +R(x); (2)whered(y;Ax)is the data fidelity term that enforces xto satisfy the observations y, andR(x)is aregularization term which restricts the solution to comply with a predefined model over x.The difficulties that arise in this framework are two-fold: (1) it is difficult to choose R(x)such thatit is an appropriate model for complex signals such as natural images, and (2) even under a wellchosenR(x)the optimization procedure might become difficult.1Under review as a conference paper at ICLR 2017Compressed sensing approaches give up on a versatile R(x)in order to define a convex optimizationprocedure. The idea is that the signal xhas a sparse representation in some basis such that x= uand that the optimization problem can be rephrased asminud(y;Au) +kuk1; (3)wherekk1is the sparsity inducing L1-norm (Donoho, 2006a). 
Under certain classes of d(y;Au)such as quadratic errors the optimization problem becomes convex. Results from the compressedsensing literature offer provable bounds on the reconstruction performance for sparse signals of thisform (Cand `es et al., 2006; Donoho, 2006b). The basis can also be learned from data (Aharonet al., 2006; Elad & Aharon, 2006).Other approaches interpret equation (2) in terms of probabilities such that finding the solution is amatter of performing maximum a posteriori (MAP) estimation (Figueiredo et al., 2007). In thosecasesd(y;Au)takes the form of a log-likelihood and R(x)takes the form of a parametric log-prior logp(x)over variable xsuch that the minimization becomes:maxxlogp(yjA;x) + logp(x): (4)This allows for more expressiveness of R(x)and for the possibility of learning the prior p(x)fromdata. However, with more expressive priors optimization will become more difficult as well. In fact,only for a few trivial prior-likelihood pairs will inference remain convex. In practice one often hasto resort to approximations of the objective and to approximate double-loop algorithms in order toallow for scalable inference (Nickisch & Seeger, 2009; Zoran & Weiss, 2011).In this work we take a radically different approach to solving inverse problems. We move awayfrom the idea that it is beneficial to separate learning a prior (regularizer) from the optimization todo the reconstruction. The usual thinking is that this separation allows for greater modularity andthe possibility to interchange one of these two complementary components in order to build newalgorithms. In practice however, we observe that the optimization procedure almost always has tobe adapted to the model choice to achieve good performance (Aharon et al., 2006; Elad & Aharon,2006; Nickisch & Seeger, 2009; Zoran & Weiss, 2011). In fact, it is well known that the optimizationprocedure used for training should match the one used during testing because the model has adapteditself to perform well under that optimization procedure (Kumar et al., 2005; Wainwright, 2006).What we need is a single framework which allows us to backpropagate through the optimizationprocedure when we learn the free parameters. Hence, We propose to look at inverse problems as adirect mapping from observations to estimated signal,^x=f(A;y) (5)where ^xis an estimate of signal xfrom observations (A;y). Here we define as a set of learnableparameters which define the inference algorithm as well as constraints on x. The goal is thus todefine map whose parameters are directly optimized for solving the inverse problem itself. It has thebenefits of both having high expressive power (if the map fis complex enough) as well as beingfast at inference time.This paradigm shift allows us to learn and combine the effect of a prior, the reconstruction fidelityand an inference method without the need to explicitly define the functional form of all components.The whole procedure is simply interpreted as a single RNN. As a result, there is no need for sparsityassumptions, the introduction of model constraints to allow for convexity, or even for double-loopalgorithms (Gregor & LeCun, 2010). In fact the proposed framework allows for use of current deeplearning approaches which have high expressive power without trading off scalability. It furtherallows us to move all the manual parameter tuning - which is still common in traditional approaches(Zoran & Weiss, 2011) - away from the inference phase and into the learning phase. 
We believe thisframework can be an important asset to introduce deep learning into the domain of inverse problems.2Under review as a conference paper at ICLR 2017Figure 1: (A)Graphical illustration of the recurrent structure of MAP estimation (compare equation(6)). The three boxes represent likelihood model p(yjx)(Aomitted), prior p(x), and updatefunction , respectively. In each iteration, likelihood and prior collect the current estimate of x,to send a gradient to update function (see text). then produces a new estimate of x. Typically,priorp(x)and update function are modeled as two distinct model components. Here they areboth depicted in gray boxes because they each represent model internal information which we wishto be transferable between different observations, i.e. they are observation independent. Likelihoodtermp(yjx)is depicted in blue to emphasize it as a model extrinsic term, some aspects of thelikelihood term can change from one observation to the other (such as matrix A). The likelihoodterm is observation-dependent. (B)Model simplification. The central insight of this work is to mergepriorp(x)and update function into one model with trainable parameters . The model theniteratively produces new estimates through feedback from likelihood model p(yjx)and previousupdates. (C)A Recurrent Inference Machine unrolled in time. Here we have added an additionalstate variable which represents information that is carried over time, but is not directly subjectedto constraints through the likelihood term p(yjx). During training, estimates at each time step aresubject to an error signal from the ground truth signal x(dashed two-sided arrows) in order toperform backpropagation. The intermittent error signal will force the model to perform well as soonas possible during iterations. At test time, there is no error signal from x.2 R ECURRENT INFERENCE MACHINESThe goal of this work is to find an inverse model as described in equation (5). Often, however, itwill be intractable to find (5) directly, even with modern non-linear function approximators. Forhigh-dimensional yandx, which are typically considered in inverse problems, it will simply notbe possible to fit matrix Ainto memory explicitly, but instead matrix Awill be replaced by anoperator that acts on x. An example is the Discrete Fourier Transform (DFT). Instead of using aFourier matrix which is quadratic in the size of x, DFTs are typically performed using the FastFourier Transform (FFT) algorithm which reduces computational cost and memory consumptionsignificantly. The use of operators, however, does not allow us to feed Ainto (5) anymore, butinstead we will have to resort to an iterative approach that alternates between updates of xand3Under review as a conference paper at ICLR 2017evaluation of Ax. This is precisely what is typically done in gradient-based inference methods, andwe will motivate our framework from there.2.1 G RADIENT -BASED INFERENCERecall from equation (4) that inverse problems can be interpreted in terms of probability such thatoptimization is an iterative approach to MAP inference. In its most simple form each consecutiveestimate of xis then computed through a recursive function of the formxt+1=xt+trlogp(yjA;x) + logp(x)(xt) (6)where we make use of the fact that p(xjA;y)/p(yjA;x)p(x)andtis the step size or learningrate at iteration t. Further, Ais a (partially-)observable covariate, p(yjA;x)is the likelihood func-tion for a given inference problem, and p(x)is a prior over signal x. 
In many cases where either the likelihood term or the prior term deviates from standard models, optimization will not be convex. In contrast, the approach presented in this work is completely freed from ideas about convexity, as will be shown in the next section.

2.2 RECURRENT FUNCTION DEFINITION

The central insight of this work is that update equation (6) can be generalized such that

x_{t+1} = x_t + g_φ(∇_{y|x}, x_t)    (7)

where we denote ∇ log p(y | A, x)(x_t) by ∇_{y|x} for readability, and φ is a set of learnable parameters that govern the updates of x. In this representation, prior parameters and learning rate parameters have been merged into one set of trainable parameters φ.

To recover the original update equation (6), g_φ(∇_{y|x}, x_t) is written as

g_φ(∇_{y|x}, x_t) = γ_t ( ∇_{y|x} + ∇_x )    (8)

where we make use of ∇_x to denote ∇ log p(x)(x_t). It is useful to dissect the terms on the right-hand side of (8) to see why the modification helps.

First notice that in equation (6) we never explicitly evaluate the prior, but only evaluate its gradient in order to perform updates. If it is never used, learning a prior appears to be unnecessary; instead, it appears more reasonable to directly learn a gradient function ∇_x = f_φ(x_t) ∈ R^d. The advantage of working solely with gradients is that they do not require the evaluation of an (often) intractable normalization constant of p(x).

A second observation is that the step sizes γ_t are usually either subject to a chosen schedule or chosen through a deterministic algorithm such as a line search. That means the step sizes are always chosen according to a predefined model. Interestingly, this model is usually not learned. In order to make inference faster and improve performance, we suggest learning this model as well.

In (7) we have made the prior p(x) and the step size model implicit in the function g_φ(∇_{y|x}, x_t). We explicitly keep ∇_{y|x} as an input to (7) because - as opposed to φ and p(x) - it represents extrinsic information that is injected into the model. It allows for changes in the likelihood model p(y|x) without the need to retrain the parameters of the inference model g_φ. Figure 1 gives a visual summary of the insights from this section.

2.3 OUTPUT CONSTRAINTS

In many problem domains the range of values for variable x is naturally constrained. For example, images typically have pixels with strictly positive values. In order to model this constraint we make use of nonlinear link functions as they are typically used in neural networks, such that

x = Φ(η)    (9)

where Φ(·) is any differentiable link function and η is the space in which RIMs iterate, such that update equation (7) is replaced by

η_{t+1} = η_t + g_φ(∇_{y|η}, η_t).    (10)

As a result, x can be constrained to a certain range of values through Φ(·), whereas iterations are performed in the unconstrained space of η.

2.4 RECURRENT NETWORKS

A useful extension of (7) is to introduce a latent state variable s_t into the procedure. Such a latent variable is typically used in recurrent neural networks as a utility to learn temporal dependencies in data processing. With an additional latent variable the update equations become

η_{t+1} = η_t + h_φ(∇_{y|η}, η_t, s_{t+1})    (11)
s_{t+1} = h*_φ(∇_{y|η}, η_t, s_t)    (12)

where h*_φ(·) is the update model for state variable s. The variable s allows the procedure to have memory in order to track progression, curvature, and an approximate preconditioning matrix (such as in BFGS), and to determine a stopping criterion, among other things.
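The following numpy sketch shows only the computational skeleton of equations (9)-(12): a sigmoid link, a likelihood-gradient input, a latent state, and an additive update. The update networks here are small dense layers with random, untrained weights, so it does not reconstruct anything; in the paper, h_φ and h*_φ are realized by the convolutional/GRU architecture of Section 4.1 and trained by backpropagation through the unrolled steps. For brevity the gradient is taken with respect to x rather than η (the chain-rule factor through the link function is omitted), and all sizes are assumptions.

import numpy as np

rng = np.random.default_rng(0)
d, m, hidden, T = 64, 32, 128, 10
sigma = 1.0

A = rng.normal(size=(m, d)) / np.sqrt(m)
x_true = rng.random(d)                                 # signal assumed to lie in [0, 1]
y = A @ x_true + sigma * rng.normal(size=m)

# toy stand-ins for h_phi / h*_phi: single dense layers with random, untrained weights
W_in  = rng.normal(size=(hidden, 2 * d)) * 0.01
W_s   = rng.normal(size=(hidden, hidden)) * 0.01
W_eta = rng.normal(size=(d, hidden)) * 0.01

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

eta = np.zeros(d)                                      # unconstrained iterate, x = sigmoid(eta), cf. eq. (9)
s = np.zeros(hidden)                                   # latent memory state s_t
for t in range(T):
    x = sigmoid(eta)
    grad = A.T @ (y - A @ x) / sigma**2                # data-term gradient fed to the model
    s = np.tanh(W_s @ s + W_in @ np.concatenate([grad, eta]))   # state update, stand-in for eq. (12)
    eta = eta + W_eta @ s                              # estimate update using the new state, stand-in for eq. (11)

x_hat = sigmoid(eta)                                   # constrained estimate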
The concept of a temporal memory is quite limited in classical inference methods, which gives RIMs a potential advantage over these methods.

2.5 TRAINING

In order to learn a step-wise inference procedure it is necessary to simulate the inference steps during training. That is, during training an RIM will perform a number of inference steps T. At each step the model will produce a prediction as depicted in Figure 1. Each of those predictions is then subject to a loss, which encourages the model to produce predictions that improve over time. In its simplest form we can define a loss which is simply a weighted sum of the individual prediction losses at each time step, such that

L_total(φ) = Σ_{t=1}^{T} w_t L(x_t(φ), x)    (13)

is the total loss. Here, L(·) is a base loss function such as the mean square error, w_t is a positive scalar, and x_t(φ) is the prediction at time t. In this work we follow Andrychowicz et al. (2016) in setting w_t = 1 for all time steps.

3 RELATED WORK

The RIM framework can be seen as an auto-encoder framework in which only the decoder is trained, whereas the encoder is given by a known corruption process. In terms of the training procedure this makes RIMs very similar to denoising auto-encoders (Vincent et al., 2008). Though initially introduced with the objective of regularization in mind, denoising auto-encoders have been shown to be effective as generative models (Vincent et al., 2010). The difference of RIMs to denoising auto-encoders, and also to more recently developed auto-encoders such as Kingma & Welling (2014) and Rezende et al. (2014), is that RIMs enforce coupling between encoder and decoder both during training and at test time. In its typical form, the decoder and encoder of an auto-encoder are only coupled during training, while there is no information flow between them at test time (Kingma & Welling, 2014; Rezende et al., 2014; Vincent et al., 2008; 2010). An exception is the work of Gregor et al. (2016), which is conceptually strongly related to RIMs. There, an RNN model is used to generate static data by drawing on a fixed canvas, and an error signal is propagated throughout the generation process.

There have been approaches in the past which aim to formulate a framework in which an inference procedure is learned. One of the best known is LISTA (Gregor & LeCun, 2010), which aims to learn a model that reconstructs sparse codes from data. LISTA models try to fit into the classical framework of doing inference as described in Section 1, whereas RIMs are completely removed from assumptions about sparsity. A recent paper by Andrychowicz et al. (2016) aims to train RNNs as optimizers for non-convex optimization problems. Though introduced with a different intention, RIMs can be seen as a generalization of this approach, in which the model - in addition to the gradient information - is aware of the absolute position of the current prediction in variable space (see equation (7)).

4 EXPERIMENTAL RESULTS

We evaluate our method on various kinds of image restoration tasks which can each be formulated in terms of linear measurement problems as described in equation (1). We first analyze the properties of our proposed method on a set of restoration tasks from random projections. Later we compare our model on two well-known image restoration tasks: image denoising and image super-resolution.

4.1 MODELS

If not specified otherwise, we use the same RNN architecture for all experiments presented in this work.
The chosen RNN consists of three convolutional hidden layers and a final convolutional output layer. All convolutional filters were chosen to be of size 3 x 3 pixels. The first hidden layer consists of convolutions with stride 2 (64 features), followed by batch normalization and a tanh nonlinearity. The second hidden layer represents the RNN part of the model; we chose a gated recurrent unit (GRU) (Chung et al., 2014) with 256 features. The third hidden layer is a transposed convolution layer with 64 features which aims to recover the original image dimensions of the signal, followed again by a batch normalization layer and a tanh nonlinearity. All models have been trained on a fixed number of 20 iterations. All methods were implemented in TensorFlow [1].

4.2 DATA

All experiments were run on the BSD-300 data set (Martin et al., 2001) [2]. For training we extracted patches of size 32 x 32 pixels with stride 4 from the 200 training images available in the data set. In total this amounts to a data set of about 400 thousand image patches with highly redundant information. All models were trained over only two epochs, i.e. each unique image patch was seen by a model only twice during training. Validation was performed on a held-out data set of 1000 image patches.

For testing we either used the whole test set of 100 images from BSD-300, or we used only the subset of 68 images which was introduced by Roth & Black (2005) and which is commonly used in the image restoration community [3].

4.3 IMAGE RESTORATION

All tasks addressed in this work assume a linear measurement problem of the form described in equation (1) with additive (isotropic) Gaussian noise. In this case the gradient of the likelihood takes the form

∇_{y|x} = (1/σ^2) A^T (y - Ax)    (14)

where σ^2 is the noise variance. For very small σ this gradient diverges. In order to make the gradient more stable also for small σ, we chose to rewrite it as

∇_{y|x} = 1/(σ^2 + λ) A^T (y - Ax)    (15)

where λ = softplus(ε) and ε is a trainable parameter. As a link function (see (9)) we chose the logistic sigmoid nonlinearity [4], and we used the mean square error as the training loss.

4.4 MULTI-TASK LEARNING WITH RANDOM PROJECTIONS

To analyze the properties of our proposed framework in terms of convergence, and to test whether all components of the model are useful, we first trained the model to reconstruct image patches from noisy random projections of grayscale image patches. We consider three types of random projection matrices: (1) Gaussian ensembles with elements drawn from a standard normal distribution, (2) binary ensembles with entries of values {-1, 1} drawn from a Bernoulli distribution with p = 0.5, and (3) Fourier ensembles with randomly sampled rows from a Fourier matrix (see Donoho (2006b)).

We trained three models on these tasks: (1) a Recurrent Inference Machine (RIM) as described in Section 2, (2) a gradient-descent network (GDN) which does not use the current estimate as an input (compare Andrychowicz et al. (2016)), and (3) a feed-forward network (FFN) which uses the same inputs as the RIM but where we replaced the GRU unit with a ReLU layer in order to remove state dependence.

[1] https://www.tensorflow.org
[2] https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/
[3] http://www.visinf.tu-darmstadt.de/vi research/code/foe.en.jsp
[4] All training data was rescaled to be in the range [0, 1].

Figure 2: Reconstruction performance over time on random projections. Shown are results of the three reconstruction tasks from random projections (see text) on 5000 random patches from the BSD-300 test set. Panels correspond to Gaussian, binary, and Fourier random projections, for p = 0.1 and p = 0.4.
Values of p denote the reduction in dimensionality through the random projection. The noise standard deviation was chosen to be σ = 1. Solid lines correspond to the mean peak signal-to-noise ratio (PSNR) over time, and shaded areas correspond to one standard deviation around the mean. Vertical dashed lines mark the last time step that was used during training.

Models (2) and (3) are simplifications of the RIM, used to test the influence of each of the removed model components on prediction performance.

Figure 2 shows the reconstruction performance of all three models on random projections. In all tasks the RIM clearly outperforms both other models, showing overall consistent convergence behavior. The FFN performs well on easier tasks but starts to show degrading performance over time on more difficult tasks. This suggests that the state information of the RIM plays an important role in the convergence behavior as well as in overall performance. The GDN shows the worst performance among all three models. For all tasks, the performance of the GDN starts to degrade clearly after the 20 time steps that were used during training. We hypothesize that the model is able to compensate some of the missing information about the current estimate of x through the state variable s during training, but is not able to transfer this ability to episodes with more iterations.

These results suggest that both the current estimate and the recurrent state carry useful information for performing inference. We will therefore only consider fully fledged RIMs from here on.

4.5 IMAGE DENOISING

After evaluating our model on 32 x 32 pixel image patches, we wanted to see how reconstruction performance generalizes to full-sized images and to an out-of-domain problem. We chose to reuse the RIM that was trained on the random projections task to perform image denoising; in this section we will call this model RIM-3task. To test the hypothesis that inference should be trained task-specifically, we further trained a model RIM-denoise solely on the denoising task. Table 2 shows the denoising performance through the mean PSNR on the BSD-300 test set for both models as compared to state-of-the-art methods in image denoising. The RIM-3task model shows very competitive results with other methods on all noise levels. This exemplifies that the model has indeed learned something reminiscent of a prior, as it was never directly trained on this task. The RIM-denoise model further improves upon the performance of RIM-3task and outperforms most other methods on all noise levels. That is, the same RIM was used to denoise at all noise levels, and the model does not require any hand tuning after training.

Figure 3: Denoising performance on the example image used in Zoran & Weiss (2011), σ = 50. The noisy image was 8-bit quantized before reconstruction. (a) Ground truth. (b) Noisy image, 14.88 dB. (c) EPLL, 25.68 dB. (d) RIM, 25.91 dB.

Method       PSNR
CBM3D        30.18
RTF-5        30.57
RIM (ours)   30.84 (30.67)

Table 1: Color denoising. Denoising performance on the 68 images for σ = 25 after 8-bit quantization. Results for RTF-5 (Schmidt et al., 2016) and CBM3D (Dabov et al., 2007b) adopted from Schmidt et al. (2016). In parentheses are results for the full 100 test images.
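The PSNR figures quoted throughout this section follow the standard definition; a minimal helper is shown below, assuming images scaled to [0, 1] (the scaling and the toy usage values are assumptions, not taken from the paper).

import numpy as np

def psnr(x_hat, x, max_val=1.0):
    # peak signal-to-noise ratio in dB between an estimate and the ground truth
    mse = np.mean((np.asarray(x_hat, float) - np.asarray(x, float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
x = rng.random((32, 32))
print(psnr(x + 0.01 * rng.normal(size=x.shape), x))    # roughly 40 dB for noise std 0.01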
Table 2 also shows denoising performance on images that have been 8-bit quantized after adding noise (see Schmidt et al. (2016)). In this case performance deteriorates slightly for both models, though they remain competitive with state-of-the-art methods. This effect could possibly be accommodated through further training, or by adjusting the forward model. Figure 3 gives some qualitative results on the denoising performance for one of the test images from BSD-300 as compared to the method from Zoran & Weiss (2011). The RIM is able to produce more naturalistic images with fewer visible artifacts. The state variable in our RIM model allows for a growing receptive field size over time, which could explain the good long-range interactions that the model shows.

Many denoising algorithms are solely tested on gray-scale images. Sometimes this is due to additional difficulties that multi-channel problems bring for some inference approaches. To show that it is straightforward to apply RIMs to multi-channel problems, we trained a model to denoise RGB images. The denoising performance can be seen in Table 1. The model is able to exploit correlations across color channels, which allows for an additional boost in reconstruction performance.

               Not Quantized                                  8-bit Quantized
               σ=15           σ=25           σ=50             σ=15           σ=25           σ=50
KSVD           30.87          28.28          25.17
5x5 FoE        30.99          28.40          25.35                           28.22
BM3D           31.08          28.56 (28.35)  25.62 (25.45)                   28.31
LSSC           31.27          28.70          25.72                           28.23
EPLL           31.19          28.68 (28.47)  25.67 (25.50)
opt-MRF        31.18          28.66          25.70
MLP                           28.85 (28.75)  (25.83)
RTF-5                         28.75                                          28.74
RIM-3task      31.19 (30.98)  28.67 (28.45)  25.78 (25.59)    31.06 (30.88)  28.41 (28.24)  24.86 (24.73)
RIM-denoise    31.31 (31.10)  28.91 (28.72)  26.06 (25.88)    31.25 (31.05)  28.76 (28.58)  25.27 (25.14)

Table 2: Denoising performance on gray-scale images from the BSD-300 test set. Shown are mean PSNR values for different noise levels σ. Numbers outside parentheses correspond to test performance on the 68 test images from Roth & Black (2005), and numbers in parentheses correspond to performance on all 100 test images from BSD-300. 68-image performance for KSVD (Elad & Aharon, 2006), FoE (Roth & Black, 2005), BM3D (Dabov et al., 2007a), LSSC (Mairal et al., 2009), EPLL (Zoran & Weiss, 2011), and opt-MRF (Chen et al., 2013) adopted from Chen et al. (2013). Performances on 100 images adopted from Burger et al. (2013). 68-image performance for MLP (Burger et al., 2012), RTF-5 (Schmidt et al., 2016), and all quantized results adopted from Schmidt et al. (2016).

4.6 IMAGE SUPER-RESOLUTION

We further tested our approach on the well-known image super-resolution task. We trained a single RIM [5] on 36 x 36 pixel image patches from the BSD-300 training set to perform image super-resolution for factors 2, 3, and 4 [6]. We followed the same testing protocol as in Huang et al. (2015), and we used the test images that were retrieved from their website [7].

[5] The architecture of this model was slightly simplified in comparison to the previous problems. Instead of strided convolutions, we chose à trous convolutions. This model is more flexible and uses only about 500,000 parameters. Previous experiments will be updated with the same model architecture.

Figure 4: Super-resolution example with factor 3. Comparison with the same methods as in Table 3. Reported numbers are PSNR/SSIM; best results in bold. (a) Original image. (b) Bicubic: 30.43/0.8326. (c) SRCNN: 31.34/0.8660. (d) A+: 31.43/0.8676. (e) SelfExSR: 31.18/0.8656. (f) RIM: 31.59/0.8712.
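The super-resolution forward model used in the paper re-implements MATLAB's bicubic sub-sampling kernel (see footnote [6] below); purely as an illustration of what a linear down-sampling operator looks like, here is a block-averaging stand-in. The patch size and factor follow the setup above; everything else is an assumption.

import numpy as np

def downsample(x, factor):
    # block-average an image by an integer factor; a crude stand-in for the
    # bicubic sub-sampling forward model, used here only for illustration
    h, w = x.shape
    h, w = h - h % factor, w - w % factor              # crop so the factor divides the size
    x = x[:h, :w]
    return x.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

x = np.random.default_rng(0).random((36, 36))          # stands in for a 36 x 36 training patch
y = downsample(x, 3)                                    # low-resolution observation
print(y.shape)                                          # (12, 12)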
Table 3 shows a comparison with some state-of-the-art methods on super-resolution for the BSD-300 test set. Figure 4 shows a qualitative example of super-resolution performance. The other deep learning method in this comparison, SRCNN (Dong et al., 2014), is outperformed by the RIM on all scales. Interestingly, SRCNN was trained for each scale independently, whereas we only trained one RIM for all scales. The chosen RIM has only about 500,000 parameters, which amounts to about 2 MB of disk space and makes this architecture very attractive for mobile computing as well.

[6] We reimplemented MATLAB's bicubic interpolation kernel in order to apply a forward model (sub-sampling) in TensorFlow which agrees with the forward model in Huang et al. (2015).
[7] https://sites.google.com/site/jbhuang0604/publications/struct sr

Metric  Scale  Bicubic          SRCNN            A+               SelfExSR         RIM (ours)
PSNR    2x     29.55 ± 0.35     31.11 ± 0.39     31.22 ± 0.40     31.18 ± 0.39     31.39 ± 0.39
PSNR    3x     27.20 ± 0.33     28.20 ± 0.36     28.30 ± 0.37     28.30 ± 0.37     28.51 ± 0.37
PSNR    4x     25.96 ± 0.33     26.70 ± 0.34     26.82 ± 0.35     26.85 ± 0.36     27.01 ± 0.35
SSIM    2x     0.8425 ± 0.0078  0.8835 ± 0.0062  0.8862 ± 0.0063  0.8855 ± 0.0064  0.8885 ± 0.0062
SSIM    3x     0.7382 ± 0.0114  0.7794 ± 0.0102  0.7836 ± 0.0104  0.7843 ± 0.0104  0.7888 ± 0.0101
SSIM    4x     0.6672 ± 0.0131  0.7018 ± 0.0125  0.7089 ± 0.0125  0.7108 ± 0.0124  0.7156 ± 0.0125

Table 3: Image super-resolution performance on RGB images from the BSD-300 test set. Mean and standard deviation (of the mean) of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) (Wang et al., 2004). The standard deviation of the mean was estimated from 10,000 bootstrap samples. Test protocol and images taken from Huang et al. (2015). Only the three best performing methods from Huang et al. (2015) were chosen for comparison: SRCNN (Dong et al., 2014), A+ (Timofte et al., 2015), SelfExSR (Huang et al., 2015). Best mean values in bold.

5 DISCUSSION

In this work, we introduce a general learning framework for solving inverse problems with deep learning approaches. We establish this framework by abandoning the traditional separation between model and inference. Instead, we propose to learn both components jointly without the need to define their explicit functional form. This paradigm shift enables us to bridge the gap between the fields of deep learning and inverse problems. We believe that this framework can have a major impact on many inverse problems, for example in medical imaging and radio astronomy. Although we have focused on linear image reconstruction tasks in this work, the framework can be applied to inverse problems of all kinds, such as non-linear inverse problems.

ACKNOWLEDGMENTS

The research was funded by the DOME project (Astron & IBM) and the Netherlands Organization for Scientific Research (NWO). The authors are grateful for helpful comments from Thomas Kipf, Mijung Park, Rajat Thomas, and Karen Ullrich.

REFERENCES

Michal Aharon, Michael Elad, and Alfred Bruckstein. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing, 54(11):4311-4322, November 2006.
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. June 2016.
Harold Christopher Burger, Christian Schuler, and Stefan Harmeling. Image denoising: Can plain neural networks compete with BM3D? In IEEE Conference on Computer Vision and Pattern Recognition, pp. 2392-2399. IEEE, June 2012.
Harold Christopher Burger, Christian J. Schuler, and Stefan Harmeling. Learning how to combine internal and external denoising methods. In Joachim Weickert, Matthias Hein, and Bernt Schiele (eds.), GCPR, volume 8142 of Lecture Notes in Computer Science, pp. 121-130. Springer, 2013.
Emmanuel J. Candès, Justin K. Romberg, and Terence Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207-1223, August 2006.
Yunjin Chen, Thomas Pock, René Ranftl, and Horst Bischof. Revisiting loss-specific training of filter-based MRFs for image restoration. In 35th German Conference on Pattern Recognition (GCPR), pp. 271-281, 2013.
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. December 2014.
K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Transactions on Image Processing, 16(8):2080-2095, August 2007a.
Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian. Color image denoising via sparse 3D collaborative filtering with grouping constraint in luminance-chrominance space. In 2007 IEEE International Conference on Image Processing, pp. I-313 - I-316. IEEE, September 2007b.
Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Learning a deep convolutional network for image super-resolution. ECCV, pp. 184-199, 2014.
David L. Donoho. For most large underdetermined systems of linear equations the minimal L1-norm solution is also the sparsest solution. Communications on Pure and Applied Mathematics, 59(6):797-829, June 2006a.
D. L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289-1306, April 2006b.
Michael Elad and Michal Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image Processing, 15(12):3736-3745, December 2006.
Mário A. T. Figueiredo, Robert D. Nowak, and Stephen J. Wright. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE Journal of Selected Topics in Signal Processing, 1(4):586-597, December 2007.
Karol Gregor and Yann LeCun. Learning fast approximations of sparse coding. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 399-406, 2010.
Karol Gregor, Frederic Besse, Danilo Jimenez Rezende, Ivo Danihelka, and Daan Wierstra. Towards conceptual compression. April 2016.
Jia-Bin Huang, Abhishek Singh, and Narendra Ahuja. Single image super-resolution from transformed self-exemplars. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5197-5206. IEEE, June 2015.
Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In The 2nd International Conference on Learning Representations (ICLR), 2014.
Sanjiv Kumar, Jonas August, and Martial Hebert. Exploiting inference for approximate parameter learning in discriminative fields: An empirical study. In Proceedings of the 5th International Conference on Energy Minimization Methods in Computer Vision and Pattern Recognition, pp. 153-168. Springer-Verlag, 2005.
Julien Mairal, Francis Bach, Jean Ponce, Guillermo Sapiro, and Andrew Zisserman. Non-local sparse models for image restoration. In 2009 IEEE 12th International Conference on Computer Vision, pp. 2272-2279. IEEE, September 2009.
David Martin, Charless Fowlkes, Doron Tal, and Jitendra Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proc. 8th Int'l Conf. Computer Vision, volume 2, pp. 416-423, July 2001.
Hannes Nickisch and Matthias W. Seeger. Convex variational Bayesian inference for large scale generalized linear models. In Proceedings of the 26th International Conference on Machine Learning, pp. 761-768, New York, New York, USA, June 2009. ACM Press.
D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of The 31st International Conference on Machine Learning, pp. 1278-1286, 2014.
Stefan Roth and Michael J. Black. Fields of experts: A framework for learning image priors. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 2, pp. 860-867. IEEE, 2005.
Uwe Schmidt, Jeremy Jancsary, Sebastian Nowozin, Stefan Roth, and Carsten Rother. Cascades of regression tree fields for image restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(4):677-689, 2016.
Radu Timofte, Vincent de Smet, and Luc van Gool. A+: Adjusted anchored neighborhood regression for fast super-resolution. In ACCV, volume 9006, pp. 111-126, 2015.
Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pp. 1096-1103, New York, New York, USA, 2008. ACM Press.
Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. The Journal of Machine Learning Research, 11:3371-3408, 2010.
M. J. Wainwright. Estimating the wrong graphical model: Benefits in the computation-limited setting. The Journal of Machine Learning Research, 2006.
Zhou Wang, Alan C. Bovik, Hamid R. Sheikh, and Eero P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600-612, 2004.
Daniel Zoran and Yair Weiss. From learning models of natural image patches to whole image restoration. In 2011 International Conference on Computer Vision, pp. 479-486. IEEE, November 2011.
SkaFOzMVl
HkSOlP9lg
ICLR.cc/2017/conference/-/paper353/official/review
{"title": "Official Review", "rating": "4: Ok but not good enough - rejection", "review": "Unfortunately, even after reading the authors' response to my pre-review question, I feel this paper in its current form lacks sufficient novelty to be accepted to ICLR.\n\nFundamentally, the paper suggests that traditional iterative algorithms for specific class of problems (ill-posed image inverse problems) can be replaced by discriminatively trained recurrent networks. As R3 also notes, un-rolled networks for iterative inference aren't new: they've been used to replace CRF-type inference, and _also_ to solve image inverse problems (my refs [1-3]). Therefore, I'd argue that the fundamental idea proposed by the paper isn't new---it is just that the paper seeks to 'formalize' it as an approach for inverse problems (although, there is nothing specific about the analysis that ties it to inverse problems: the paper only shows that the RIM can express gradient descent over prior + likelihood objective).\n\nI also did not find the claims about benefits over prior approaches very compelling. The comment about parameter sharing works both ways---it is possible that untying the parameters leads to better performance over a fewer number of 'iterations', and given that the 'training set' is synthetically generated, learning a larger number of parameters doesn't seem to be an issue. Also, I'd argue that sharing the parameters is the 'obvious' approach, and the prior methods choose to not tie the parameters to get better accuracy.\n\nThe same holds for being able to handle different noise levels / scale sizes. A single model can always be trained to handle multiple forms of degradation---its just that its likely to do better when it's trained for specific degradation model/level. But more importantly, there is no evidence in the current set of experiments that shows that this is a property of the RIM architecture. (Moreover, this claim goes against one of the motivations of the paper of not training a single prior for different observation models ... but to train the entire inference architecture end-to-end).\n\nIt is possible that the proposed method does offer practical benefits beyond prior work---but these benefits don't come from the idea of simply unrolling iterations, which is not novel. I would strongly recommend that the authors consider a significant re-write of the paper---with a detailed discussion of prior work mentioned in the comments that highlights, with experiments, the specific aspects of their recurrent architecture that enables better recovery for inverse problems. I would also suggest that to claim the mantle of 'solving inverse problems', the paper consider a broader set of inverse tasks---in-painting, deconvolution, different noise models, and possibly working with multiple observations (like for HDR).", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Recurrent Inference Machines for Solving Inverse Problems
["Patrick Putzky", "Max Welling"]
Inverse problems are typically solved by first defining a model and then choosing an inference procedure. With this separation of modeling from inference, inverse problems can be framed in a modular way. For example, variational inference can be applied to a broad class of models. The modularity, however, typically goes away after model parameters have been trained under a chosen inference procedure. During training, model and inference often interact in a way that the model parameters will ultimately be adapted to the chosen inference procedure, posing the two components inseparable after training. But if model and inference become inseperable after training, why separate them in the first place? We propose a novel learning framework which abandons the dichotomy between model and inference. Instead, we introduce Recurrent Inference Machines (RIM), a class of recurrent neural networks (RNN) that directly learn to solve inverse problems. We demonstrate the effectiveness of RIMs in experiments on various image reconstruction tasks. We show empirically that RIMs exhibit the desirable convergence behavior of classical inference procedures, and that they can outperform state-of- the-art methods when trained on specialized inference tasks. Our approach bridges the gap between inverse problems and deep learning, providing a framework for fast progression in the field of inverse problems.
["Optimization", "Deep learning", "Computer vision"]
https://openreview.net/forum?id=HkSOlP9lg
https://openreview.net/pdf?id=HkSOlP9lg
https://openreview.net/forum?id=HkSOlP9lg&noteId=SkaFOzMVl
RECURRENT INFERENCE MACHINES FOR SOLVING INVERSE PROBLEMS

Patrick Putzky & Max Welling
Informatics Institute, University of Amsterdam
{pputzky, m.welling}@uva.nl

ABSTRACT

Inverse problems are typically solved by first defining a model and then choosing an inference procedure. With this separation of modeling from inference, inverse problems can be framed in a modular way. For example, variational inference can be applied to a broad class of models. The modularity, however, typically goes away after model parameters have been trained under a chosen inference procedure. During training, model and inference often interact in a way that the model parameters will ultimately be adapted to the chosen inference procedure, rendering the two components inseparable after training. But if model and inference become inseparable after training, why separate them in the first place? We propose a novel learning framework which abandons the dichotomy between model and inference. Instead, we introduce Recurrent Inference Machines (RIM), a class of recurrent neural networks (RNN) that directly learn to solve inverse problems. We demonstrate the effectiveness of RIMs in experiments on various image reconstruction tasks. We show empirically that RIMs exhibit the desirable convergence behavior of classical inference procedures, and that they can outperform state-of-the-art methods when trained on specialized inference tasks. Our approach bridges the gap between inverse problems and deep learning, providing a framework for fast progression in the field of inverse problems.

1 INTRODUCTION

Inverse problems are a broad class of problems which can be encountered in all scientific disciplines, from the natural sciences to engineering. The task in inverse problems is to reconstruct a signal from observations that are subject to a known (or inferred) corruption process known as the forward model. A typical example of an inverse problem is the linear measurement problem

y = Ax + n    (1)

where x is the signal of interest, A is an m × d corruption matrix, n is an additive noise vector, and y is the actual measurement. If A is a wide matrix such that m ≪ d, this problem is typically ill-posed. Many signal reconstruction problems can be phrased in terms of the linear measurement problem, such as image denoising, super-resolution, deconvolution, and so on. The general form of A typically defines the problem class. If A is an identity matrix the problem is a denoising problem, while in tomography A represents a Fourier transform and a consecutive sub-sampling of the Fourier coefficients.

Inverse problems are often formulated as an optimization problem of the form

min_x d(y, Ax) + R(x)    (2)

where d(y, Ax) is the data fidelity term that enforces x to satisfy the observations y, and R(x) is a regularization term which restricts the solution to comply with a predefined model over x.

The difficulties that arise in this framework are two-fold: (1) it is difficult to choose R(x) such that it is an appropriate model for complex signals such as natural images, and (2) even under a well-chosen R(x) the optimization procedure might become difficult.

Compressed sensing approaches give up on a versatile R(x) in order to define a convex optimization procedure. The idea is that the signal x has a sparse representation in some basis Ψ such that x = Ψu, and that the optimization problem can be rephrased as

min_u d(y, AΨu) + λ‖u‖_1    (3)

where ‖·‖_1 is the sparsity-inducing L1-norm (Donoho, 2006a).
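Purely for illustration of this classical route (it is not part of the paper's method): a minimal numpy sketch of equation (3) with a quadratic data term, Ψ taken to be the identity (the signal is sparse in the canonical basis), and the iterative soft-thresholding algorithm (ISTA) as the solver; the problem sizes and the value of λ are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)
d, m, k = 128, 64, 10                                  # signal dim, measurements, nonzeros
A = rng.normal(size=(m, d)) / np.sqrt(m)
x_true = np.zeros(d)
x_true[rng.choice(d, size=k, replace=False)] = rng.normal(size=k)
y = A @ x_true + 0.01 * rng.normal(size=m)             # y = Ax + n, cf. equation (1)

lam = 0.05                                             # weight of the L1 term
step = 1.0 / np.linalg.norm(A, 2) ** 2                 # step size at the inverse Lipschitz constant of the data term
x = np.zeros(d)
for _ in range(500):
    x = x - step * A.T @ (A @ x - y)                   # gradient step on the quadratic data term
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)   # soft-thresholding, the L1 proximal step

print("estimated support:", np.flatnonzero(np.abs(x) > 1e-3))

The RIM framework introduced in the paper replaces this hand-designed prior and iteration with a single learned recurrent update.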
A database of human segmentednatural images and its application to evaluating segmentation algorithms and measuring ecologicalstatistics. In Proc. 8th Int’l Conf. Computer Vision , volume 2, pp. 416–423, July 2001.Hannes Nickisch and Matthias W. Seeger. Convex variational Bayesian inference for large scalegeneralized linear models. In Proceedings of the 26th International Conference on MachineLearning , pp. 761–768, New York, New York, USA, jun 2009. ACM Press.11Under review as a conference paper at ICLR 2017D J Rezende, S Mohamed, and D Wierstra. Stochastic backpropagation and approximate inferencein deep generative models. In Proceedings of The 31st International Conference on MachineLearning , pp. 1278–1286, 2014.Stefan Roth and Michael J. Black. Fields of experts: A framework for learning image priors. In Pro-ceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition ,volume 2, pp. 860–867. IEEE, 2005.Uwe Schmidt, Jeremy Jancsary, Sebastian Nowozin, Stefan Roth, and Carsten Rother. Cascades ofregression tree fields for image restoration. IEEE Transactions on Pattern Analysis and MachineIntelligence , 38(4):677–689, 2016.Radu Timofte, Vincent de Smet, and Luc van Gool. A+: Adjusted anchored neighborhood regressionfor fast super-resolution. In ACCV , volume 9006, pp. 111–126. 2015.Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting andcomposing robust features with denoising autoencoders. In Proceedings of the 25th internationalconference on Machine learning , pp. 1096–1103, New York, New York, USA, 2008. ACM Press.Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol.Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with aLocal Denoising Criterion. The Journal of Machine Learning Research , 11:3371–3408, 2010.MJ Wainwright. Estimating the wrong graphical model: Benefits in the computation-limited setting.The Journal of Machine Learning Research , 2006.Daniel Zoran and Yair Weiss. From learning models of natural image patches to whole imagerestoration. In 2011 International Conference on Computer Vision , pp. 479–486. IEEE, nov 2011.12
r1BS2p-4e
SJMGPrcle
ICLR.cc/2017/conference/-/paper254/official/review
{"title": "Depth is supervise learning", "rating": "5: Marginally below acceptance threshold", "review": "I do like the demonstration that including learning of auxiliary tasks does not interfere with the RL tasks but even helps. This is also not so surprising with deep networks. The deep structure of the model allows the model to learn first a good representation of the world on which it can base its solutions for specific goals. While even early representations do of course depend on the task performance itself, it is clear that there are common first stages in sensory representations like the need for edge detection etc. Thus, training by additional tasks will at least increase the effective training size. It is of course unclear how to adjust for this to make a fair comparison, but the paper could have included some more insights such as the change in representation with and without auxiliary training. \n\nI still strongly disagree with the implied definition of supervised or even self-supervised learning. The definition of unsupervised is learning without external labels. It does not matter if this comes from a human or for example from an expensive machine that is used to train a network so that a task can be solved later without this expensive machine. I would call EM a self-supervised method where labels are predicted from the model itself and used to bootstrap parameter learning. In this case you are using externally supplied labels, which is clearly a supervised learning task!\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning to Navigate in Complex Environments
["Piotr Mirowski", "Razvan Pascanu", "Fabio Viola", "Hubert Soyer", "Andy Ballard", "Andrea Banino", "Misha Denil", "Ross Goroshin", "Laurent Sifre", "Koray Kavukcuoglu", "Dharshan Kumaran", "Raia Hadsell"]
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks to bootstrap learning. In particular we consider jointly learning the goal-driven reinforcement learning problem with an unsupervised depth prediction task and a self-supervised loop closure classification task. Using this approach we can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, that show that the agent implicitly learns key navigation abilities, with only sparse rewards and without direct supervision.
["Deep learning", "Reinforcement Learning"]
https://openreview.net/forum?id=SJMGPrcle
https://openreview.net/pdf?id=SJMGPrcle
https://openreview.net/forum?id=SJMGPrcle&noteId=r1BS2p-4e
Published as a conference paper at ICLR 2017LEARNING TO NAVIGATEINCOMPLEX ENVIRONMENTSPiotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard,Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu,Dharshan Kumaran, Raia HadsellDeepMindLondon, UK{piotrmirowski, razp, fviola, soyer, aybd, abanino, mdenil, goroshin, sifre,korayk, dkumaran, raia} @google.comABSTRACTLearning to navigate in complex environments with dynamic elements is an impor-tant milestone in developing AI agents. In this work we formulate the navigationquestion as a reinforcement learning problem and show that data efficiency and taskperformance can be dramatically improved by relying on additional auxiliary tasksleveraging multimodal sensory inputs. In particular we consider jointly learningthe goal-driven reinforcement learning problem with auxiliary depth predictionand loop closure classification tasks. This approach can learn to navigate from rawsensory input in complicated 3D mazes, approaching human-level performanceeven under conditions where the goal location changes frequently. We providedetailed analysis of the agent behaviour1, its ability to localise, and its networkactivity dynamics, showing that the agent implicitly learns key navigation abilities.1 I NTRODUCTIONThe ability to navigate efficiently within an environment is fundamental to intelligent behavior.Whilst conventional robotics methods, such as Simultaneous Localisation and Mapping (SLAM),tackle navigation through an explicit focus on position inference and mapping (Dissanayake et al.,2001), here we follow recent work in deep reinforcement learning (Mnih et al., 2015; 2016) andpropose that navigational abilities could emerge as the by-product of an agent learning a policythat maximizes reward. One advantage of an intrinsic, end-to-end approach is that actions are notdivorced from representation, but rather learnt together, thus ensuring that task-relevant features arepresent in the representation. Learning to navigate from reinforcement learning in partially observableenvironments, however, poses several challenges.First, rewards are often sparsely distributed in the environment, where there may be only one goallocation. Second, environments often comprise dynamic elements, requiring the agent to use memoryat different timescales: rapid one-shot memory for the goal location, together with short term memorysubserving temporal integration of velocity signals and visual observations, and longer term memoryfor constant aspects of the environment (e.g. boundaries, cues).To improve statistical efficiency we bootstrap the reinforcement learning procedure by augmentingour loss with auxiliary tasks that provide denser training signals that support navigation-relevantrepresentation learning. We consider two additional losses: the first one involves reconstruction of alow-dimensional depth map at each time step by predicting one input modality (the depth channel)from others (the colour channels). This auxiliary task concerns the 3D geometry of the environment,and is aimed to encourage the learning of representations that aid obstacle avoidance and short-termtrajectory planning. 
The second task directly invokes loop closure from SLAM: the agent is trainedto predict if the current location has been previously visited within a local trajectory.Denotes equal contribution1A video illustrating the navigation agents is available at: https://youtu.be/lNoaTyMZsWI1Published as a conference paper at ICLR 2017Figure 1: Views from a small 510maze, a large 915maze and an I-maze, with corresponding maze layoutsand sample agent trajectories. The mazes, which will be made public, have different textures and visual cues aswell as exploration rewards and goals (shown right).To address the memory requirements of the task we rely on a stacked LSTM architecture (Graveset al., 2013; Pascanu et al., 2013). We evaluate our approach using five 3D maze environments anddemonstrate the accelerated learning and increased performance of the proposed agent architecture.These environments feature complex geometry, random start position and orientation, dynamic goallocations, and long episodes that require thousands of agent steps (see Figure 1). We also providedetailed analysis of the trained agent to show that critical navigation skills are acquired. This isimportant as neither position inference nor mapping are directly part of the loss; therefore, rawperformance on the goal finding task is not necessarily a good indication that these skills are acquired.In particular, we show that the proposed agent resolves ambiguous observations and quickly localizesitself in a complex maze, and that this localization capability is correlated with higher task reward.2 A PPROACHWe rely on a end-to-end learning framework that incorporates multiple objectives. Firstly it tries tomaximize cumulative reward using an actor-critic approach. Secondly it minimizes an auxiliary lossof inferring the depth map from the RGB observation. Finally, the agent is trained to detect loopclosures as an additional auxiliary task that encourages implicit velocity integration.The reinforcement learning problem is addressed with the Asynchronous Advantage Actor-Critic(A3C) algorithm (Mnih et al., 2016) that relies on learning both a policy (atjst;)and value functionV(st;V)given a state observation st. Both the policy and value function share all intermediaterepresentations, both being computed using a separate linear layer from the topmost layer of themodel. The agent setup closely follows the work of (Mnih et al., 2016) and we refer to this work forthe details (e.g. the use of a convolutional encoder followed by either an MLP or an LSTM, the useof action repetition, entropy regularization to prevent the policy saturation, etc.). These details canalso be found in the Appendix B.The baseline that we consider in this work is an A3C agent (Mnih et al., 2016) that receives only RGBinput from the environment, using either a recurrent or a purely feed-forward model (see Figure 2a,b).The encoder for the RGB input (used in all other considered architectures) is a 3 layer convolutionalnetwork. To support the navigation capability of our approach, we also rely on the Nav A3C agent(Figure 2c) which employs a two-layer stacked LSTM after the convolutional encoder. We expand theobservations of the agents to include agent-relative velocity, the action sampled from the stochasticpolicy and the immediate reward, from the previous time step. We opt to feed the velocity andpreviously selected action directly to the second recurrent layer, with the first layer only receiving thereward. 
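To make the underlying RL objective concrete, the following is a minimal sketch of the per-step A3C loss terms the agents optimise: an advantage-weighted policy-gradient term, a value-regression term, and an entropy bonus. The coefficients, shapes, and single-step framing are illustrative assumptions, not the authors' exact settings (which follow Mnih et al., 2016, with n-step returns).

```python
import numpy as np

def a3c_losses(logits, value, action, ret, value_coef=0.5, entropy_coef=1e-3):
    """Per-step A3C loss terms for a discrete policy.

    logits : unnormalised action scores, shape (num_actions,)
    value  : V(s_t) predicted by the critic (scalar)
    action : index of the action actually taken
    ret    : bootstrapped return R_t
    """
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    advantage = ret - value

    policy_loss = -np.log(probs[action]) * advantage       # actor term (advantage treated as a constant)
    value_loss = value_coef * advantage**2                  # critic regression term
    entropy = -np.sum(probs * np.log(probs + 1e-12))        # entropy regularisation against policy saturation
    return policy_loss + value_loss - entropy_coef * entropy

# Example with the 8-action space used in the mazes.
loss = a3c_losses(logits=np.zeros(8), value=0.2, action=3, ret=1.0)
print(loss)
```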
We postulate that the first layer might be able to make associations between reward and visualobservations that are provided as context to the second layer from which the policy is computed.Thus, the observation stmay include an image xt2R3WH(whereWandHare the width and2Published as a conference paper at ICLR 2017xt rt-1 { vt, at-1}encρᬭxtencρᬭencρᬭLoop (L)Depth (D1 )a. FF A3C c. Nav A3C d. Nav A3C +D1D2Lxt rt-1 { vt, at-1}encρᬭxtb. LSTM A3C Depth (D2 )Figure 2: Different architectures: (a) is a convolutional encoder followed by a feedforward layer and policy ( )and value function outputs; (b) has an LSTM layer; (c) uses additional inputs (agent-relative velocity, reward,and action), as well as a stacked LSTM; and (d) has additional outputs to predict depth and loop closures.height of the image), the agent-relative lateral and rotational velocity vt2R6, the previous actionat12RNA, and the previous reward rt12R.Figure 2d shows the augmentation of the Nav A3C with the different possible auxiliary losses. Inparticular we consider predicting depth from the convolutional layer (we will refer to this choiceasD1), or from the top LSTM layer ( D2) or predicting loop closure ( L). The auxiliary losses arecomputed on the current frame via a single layer MLP. The agent is trained by applying a weightedsum of the gradients coming from A3C, the gradients from depth prediction (multiplied with d1;d2)and the gradients from the loop closure (scaled by l). More details of the online learning algorithmare given in Appendix B.2.1 D EPTH PREDICTIONThe primary input to the agent is in the form of RGB images. However, depth information, coveringthe central field of view of the agent, might supply valuable information about the 3D structure ofthe environment. While depth could be directly used as an input, we argue that if presented as anadditional loss it is actually more valuable to the learning process. In particular if the predictionloss shares representation with the policy, it could help build useful features for RL much faster,bootstrapping learning. Since we know from (Eigen et al., 2014) that a single frame can be enough topredict depth, we know this auxiliary task can be learnt. A comparison between having depth as inputversus as an additional loss is given in Appendix C, which shows significant gain for depth as a loss.Since the role of the auxiliary loss is just to build up the representation of the model, we do notnecessarily care about the specific performance obtained or nature of the prediction. We do careabout the data efficiency aspect of the problem and also computational complexity. If the loss is to beuseful for the main task, we should converge faster on it compared to solving the RL problem (usingless data samples), and the additional computational cost should be minimal. To achieve this we usea low resolution variant of the depth map, reducing the screen resolution to 4x16 pixels2.We explore two different variants for the loss. The first choice is to phrase it as a regression task, themost natural choice. While this formulation, combined with a higher depth resolution, extracts themost information, mean square error imposes a unimodal distribution (van den Oord et al., 2016).To address this possible issue, we also consider a classification loss, where depth at each positionis discretised into 8 different bands. The bands are non-uniformally distributed such that we paymore attention to far-away objects (details in Appendix B). 
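The weighted combination described above (the A3C gradients, plus cross-entropy over the discretised depth targets from the convnet and from the policy LSTM scaled by the beta coefficients, plus a Bernoulli loop-closure term) can be sketched as a single scalar objective. The prediction arrays below are random placeholders rather than outputs of the real network, and the beta values are example choices taken from within the ranges sampled in Appendix B.

```python
import numpy as np

def softmax(x, axis=-1):
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

def depth_ce(logits, targets):
    """Cross-entropy for 64 depth pixels, each an 8-way classification."""
    p = softmax(logits)                                     # (64, 8)
    return -np.mean(np.log(p[np.arange(len(targets)), targets] + 1e-12))

def loop_bernoulli(logit, label):
    """Bernoulli (binary cross-entropy) loss for the loop-closure bit."""
    p = 1.0 / (1.0 + np.exp(-logit))
    return -(label * np.log(p + 1e-12) + (1 - label) * np.log(1 - p + 1e-12))

# Placeholder predictions and targets for one time step.
rng = np.random.default_rng(0)
rl_loss = 0.7                                               # A3C actor-critic loss (see the earlier sketch)
d1_logits = rng.standard_normal((64, 8))                    # depth head on convnet features
d2_logits = rng.standard_normal((64, 8))                    # depth head on the policy-LSTM state
depth_targets = rng.integers(0, 8, size=64)
loop_logit, loop_label = 0.3, 1

beta_d1, beta_d2, beta_l = 10.0, 3.33, 1.0                  # example values within the Appendix B ranges
total = (rl_loss
         + beta_d1 * depth_ce(d1_logits, depth_targets)
         + beta_d2 * depth_ce(d2_logits, depth_targets)
         + beta_l * loop_bernoulli(loop_logit, loop_label))
print(total)
```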
The motivation for the classificationformulation is that while it greatly reduces the resolution of depth, it is more flexible from a learningperspective and can result in faster convergence (hence faster bootstrapping).2The image is cropped before being subsampled to lessen the floor and ceiling which have little relevantdepth information.3Published as a conference paper at ICLR 20172.2 L OOP CLOSURE PREDICTIONLoop closure, like depth, is valuable for a navigating agent, since can be used for efficient explorationand spatial reasoning. To produce the training targets, we detect loop closures based on the similarityof local position information during an episode, which is obtained by integrating 2D velocity overtime. Specifically, in a trajectory noted fp0;p1;:::;p Tg, whereptis the position of the agent at timet, we define a loop closure label ltthat is equal to 1 if the position ptof the agent is close to thepositionpt0at an earlier time t0. In order to avoid trivial loop closures on consecutive points of thetrajectory, we add an extra condition on an intermediary position pt00being far from pt. Thresholds 1and2provide these two limits. Learning to predict the binary loop label is done by minimizing theBernoulli lossLlbetweenltand the output of a single-layer output from the hidden representation htof the last hidden layer of the model, followed by a sigmoid activation.3 R ELATED WORKThere is a rich literature on navigation, primarily in the robotics literature. However, here we focus onrelated work in deep RL. Deep Q-networks (DQN) have had breakthroughs in extremely challengingdomains such as Atari (Mnih et al., 2015). Recent work has developed on-policy RL methods suchas advantage actor-critic that use asynchronous training of multiple agents in parallel (Mnih et al.,2016). Recurrent networks have also been successfully incorporated to enable state disambiguationin partially observable environments (Koutnik et al., 2013; Hausknecht & Stone, 2015; Mnih et al.,2016; Narasimhan et al., 2015).Deep RL has recently been used in the navigation domain. Kulkarni et al. (2016) used a feedforwardarchitecture to learn deep successor representations that enabled behavioral flexibility to rewardchanges in the MazeBase gridworld, and provided a means to detect bottlenecks in 3D VizDoom.Zhu et al. (2016) used a feedforward siamese actor-critic architecture incorporating a pretrainedResNet to support navigation to a target in a discretised 3D environment. Oh et al. (2016) investigatedthe performance of a variety of networks with external memory (Weston et al., 2014) on simplenavigation tasks in the Minecraft 3D block world environment. Tessler et al. (2016) also used theMinecraft domain to show the benefit of combining feedforward deep-Q networks with the learningof resuable skill modules (cf options: (Sutton et al., 1999)) to transfer between navigation tasks. Tai &Liu (2016) trained a convnet DQN-based agent using depth channel inputs for obstacle avoidance in3D environments. Barron et al. (2016) investigated how well a convnet can predict the depth channelfrom RGB in the Minecraft environment, but did not use depth for training the agent.Auxiliary tasks have often been used to facilitate representation learning (Suddarth & Kergosien,1990). 
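Returning to the loop-closure targets defined in Section 2.2: a label l_t is set to 1 when the current position lies within a distance threshold of an earlier position, provided some intermediate position moved sufficiently far away in between, which removes trivial closures between consecutive steps. A NumPy sketch follows, using the thresholds eta1 = 1 square and eta2 = 2 squares reported in Appendix B; the trajectory is a made-up example.

```python
import numpy as np

def loop_closure_labels(positions, eta1=1.0, eta2=2.0):
    """Binary loop-closure targets l_t for a trajectory of 2D positions.

    l_t = 1 if some earlier position p_t' lies within eta1 of p_t AND an
    intermediate position p_t'' (t' < t'' < t) lies farther than eta2 from p_t,
    which rules out trivial "closures" between consecutive points.
    """
    positions = np.asarray(positions, dtype=np.float64)
    T = len(positions)
    labels = np.zeros(T, dtype=np.int64)
    for t in range(T):
        d = np.linalg.norm(positions[:t] - positions[t], axis=-1)   # distances from earlier points to p_t
        for t_prime in np.flatnonzero(d <= eta1):
            if np.any(d[t_prime + 1:t] > eta2):                     # require an excursion between t' and t
                labels[t] = 1
                break
    return labels

# The agent leaves the start, walks a loop, and returns close to where it began.
traj = [(0, 0), (0, 3), (3, 3), (3, 0), (0.5, 0)]
print(loop_closure_labels(traj))   # only the last step closes a loop -> [0 0 0 0 1]
```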
Recently, the incorporation of additional objectives, designed to augment representationlearning through auxiliary reconstructive decoding pathways (Zhang et al., 2016; Rasmus et al., 2015;Zhao et al., 2015; Mirowski et al., 2010), has yielded benefits in large scale classification tasks. Indeep RL settings, however, only two previous papers have examined the benefit of auxiliary tasks.Specifically, Li et al. (2016) consider a supervised loss for fitting a recurrent model on the hiddenrepresentations to predict the next observed state, in the context of imitation learning of sequencesprovided by experts, and Lample & Chaplot (2016) show that the performance of a DQN agent in afirst-person shooter game in the VizDoom environment can be substantially enhanced by the additionof a supervised auxiliary task, whereby the convolutional network was trained on an enemy-detectiontask, with information about the presence of enemies, weapons, etc., provided by the game engine.In contrast, our contribution addresses fundamental questions of how to learn an intrinsic repre-sentation of space, geometry, and movement while simultaneously maximising rewards throughreinforcement learning. Our method is validated in challenging maze domains with random start andgoal locations.4 E XPERIMENTSWe consider a set of first-person 3D mazes from the DeepMind Lab environment (Beattie et al., 2016)(see Fig. 1) that are visually rich, with additional observations available to the agent such as inertial4Published as a conference paper at ICLR 2017(a)Static maze (small) (b)Static maze (large) (c)Random Goal I-maze(d)Random Goal maze (small) (e)Random Goal maze (large) (f)Random Goal maze (large): different formu-lation of depth predictionFigure 3: Rewards achieved by the agents on 5 different tasks: two static mazes (small and large) with fixedgoals, two static mazes with comparable layout but with dynamic goals and the I-maze. Results are averagedover the top 5 random hyperparameters for each agent-task configuration. Star in the label indicates the use ofreward clipping. Please see text for more details.information and local depth information.3The action space is discrete, yet allows finegrained control,comprising 8 actions: the agent can rotate in small increments, accelerate forward or backward orsideways, or induce rotational acceleration while moving. Reward is achieved in these environmentsby reaching a goal from a random start location and orientation. If the goal is reached, the agent isrespawned to a new start location and must return to the goal. The episode terminates when a fixedamount of time expires, affording the agent enough time to find the goal several times. There aresparse ‘fruit’ rewards which serve to encourage exploration. Apples are worth 1 point, strawberries 2points and goals are 10 points. Videos of the agent solving the maze are linked in Appendix A.In the static variant of the maze, the goal and fruit locations are fixed and only the agent’s startlocation changes. In the dynamic (Random Goal) variant, the goal and fruits are randomly placed onevery episode. Within an episode, the goal and apple locations stay fixed until the episode ends. Thisencourages an explore-exploit strategy, where the agent should initially explore the maze, then retainthe goal location and quickly refind it after each respawn. For both variants (static and random goal)we consider a small and large map. 
The small mazes are 510and episodes last for 3600 timesteps,and the large mazes are 915with 10800 steps (see Figure 1). The RGB observation is 8484.The I-Maze environment (see Figure 1, right) is inspired by the classic T-maze used to investigatenavigation in rodents (Olton et al., 1979): the layout remains fixed throughout, the agent spawns inthe central corridor where there are apple rewards and has to locate the goal which is placed in thealcove of one of the four arms. Because the goal is hidden in the alcove, the optimal agent behaviourmust rely on memory of the goal location in order to return to the goal using the most direct route.Goal location is constant within an episode but varies randomly across episodes.The different agent architectures described in Section 2 are evaluated by training on the five mazes.Figure 3 shows learning curves (averaged over the 5 top performing agents). The agents are afeedforward model (FF A3C), a recurrent model (LSTM A3C), the stacked LSTM version withvelocity, previous action and reward as input (Nav A3C), and Nav A3C with depth prediction fromthe convolution layer (Nav A3C+ D1), Nav A3C with depth prediction from the last LSTM layer(Nav A3C+D2), Nav A3C with loop closure prediction (Nav A3C+ L) as well as the Nav A3C with3The environments used in this paper are publicly available at https://github.com/deepmind/lab .5Published as a conference paper at ICLR 2017Figure 4: left: Example of depth predictions (pairs of ground truth and predicted depths), sampled every 40 steps.right: Example of loop closure prediction. The agent starts at the gray square and the trajectory is plotted ingray. Blue dots correspond to true positive outputs of the loop closure detector; red cross correspond to falsepositives and green cross to false negatives. Note the false positives that occur when the agent is actually a fewsquares away from actual loop closure.all auxiliary losses considered together (Nav A3C+ D1D2L). In each case we ran 64 experimentswith randomly sampled hyper-parameters (for ranges and details please see the appendix). The meanover the top 5 runs as well as the top 5 curves are plotted. Expert human scores, established by aprofessional game player, are compared to these results. The Nav A3C+ D2agents reach human-levelperformance on Static 1 and 2, and attain about 91% and 59% of human scores on Random Goal 1and 2.In Mnih et al. (2015) reward clipping is used to stabilize learning, technique which we employed inthis work as well. Unfortunately, for these particular tasks, this yields slightly suboptimal policiesbecause the agent does not distinguish apples (1 point) from goals (10 points). Removing the rewardclipping results in unstable behaviour for the base A3C agent (see Appendix C). However it seemsthat the auxiliary signal from depth prediction mediates this problem to some extent, resulting instable learning dynamics (e.g. Figure 3f, Nav A3C+ D1vs Nav A3C*+ D1). We clearly indicatewhether reward clipping is used by adding an asterisk to the agent name.Figure 3f also explores the difference between the two formulations of depth prediction, as a regressiontask or a classification task. We can see that the regression agent (Nav A3C*+ D1[MSE]) performsworse than one that does classification (Nav A3C*+ D1). This result extends to other maps, andwe therefore only use the classification formulation in all our other results4. 
Also we see thatpredicting depth from the last LSTM layer (hence providing structure to the recurrent layer, not justthe convolutional ones) performs better.We note some particular results from these learning curves. In Figure 3 (a and b), consider thefeedforward A3C model (red curve) versus the LSTM version (pink curve). Even though navigationseems to intrinsically require memory, as single observations could often be ambiguous, the feed-forward model achieves competitive performance on static mazes. This suggest that there might begood strategies that do not involve temporal memory and give good results, namely a reactive policyheld by the weights of the encoder, or learning a wall-following strategy. This motivates the dynamicenvironments that encourage the use of memory and more general navigation strategies.Figure 3 also shows the advantage of adding velocity, reward and action as an input, as well as theimpact of using a two layer LSTM (orange curve vs red and pink). Though this agent (Nav A3C)is better than the simple architectures, it is still relatively slow to train on all of the mazes. Webelieve that this is mainly due to the slower, data inefficient learning that is generally seen in pureRL approaches. Supporting this we see that adding the auxiliary prediction targets of depth andloop closure (Nav A3C+ D1D2L, black curve) speeds up learning dramatically on most of the mazes(see Table 1: AUC metric). It has the strongest effect on the static mazes because of the acceleratedlearning, but also gives a substantial and lasting performance increase on the random goal mazes.Although we place more value on the task performance than on the auxiliary losses, we report theresults from the loop closure prediction task. Over 100 test episodes of 2250 steps each, within alarge maze (random goal 2), the Nav A3C*+ D1Lagent demonstrated very successful loop detection,reaching an F-1 score of 0.83. A sample trajectory can be seen in Figure 4 (right).4An exception is the Nav A3C*+ D1Lagent on the I-maze (Figure 3c), which uses depth regression andreward clipping. 
While it does worse, we include it because some analysis is based on this agent.6Published as a conference paper at ICLR 2017Mean over top 5 agents Highest reward agentMaze Agent AUC Score % Human Goals Position Acc Latency 1:>1 ScoreI-Maze FF A3C* 75.5 98 - 94/100 42.2 9.3s:9.0s 102LSTM A3C* 112.4 244 - 100/100 87.8 15.3s:3.2s 203Nav A3C*+ D1L 169.7 266 - 100/100 68.5 10.7s:2.7s 252Nav A3C+ D2 203.5 268 - 100/100 62.3 8.8s:2.5s 269Nav A3C+ D1D2L 199.9 258 - 100/100 61.0 9.9s:2.5s 251Static 1 FF A3C* 41.3 79 83 100/100 64.3 8.8s:8.7s 84LSTM A3C* 44.3 98 103 100/100 88.6 6.1s:5.9s 110Nav A3C+ D2 104.3 119 125 100/100 95.4 5.9s:5.4s 122Nav A3C+ D1D2L 102.3 116 122 100/100 94.5 5.9s:5.4s 123Static 2 FF A3C* 35.8 81 47 100/100 55.6 24.2s:22.9s 111LSTM A3C* 46.0 153 91 100/100 80.4 15.5s:14.9s 155Nav A3C+ D2 157.6 200 116 100/100 94.0 10.9s:11.0s 202Nav A3C+ D1D2L 156.1 192 112 100/100 92.6 11.1s:12.0s 192Random Goal 1 FF A3C* 37.5 61 57.5 88/100 51.8 11.0:9.9s 64LSTM A3C* 46.6 65 61.3 85/100 51.1 11.1s:9.2s 66Nav A3C+ D2 71.1 96 91 100/100 85.5 14.0s:7.1s 91Nav A3C+ D1D2L 64.2 81 76 81/100 83.7 11.5s:7.2s 74.6Random Goal 2 FF A3C* 50.0 69 40.1 93/100 30.0 27.3s:28.2s 77LSTM A3C* 37.5 57 32.6 74/100 33.4 21.5s:29.7s 51.3Nav A3C*+ D1L 62.5 90 52.3 90/100 51.0 17.9s:18.4s 106Nav A3C+ D2 82.1 103 59 79/100 72.4 15.4s:15.0s 109Nav A3C+ D1D2L 78.5 91 53 74/100 81.5 15.9s:16.0s 102Table 1: Comparison of four agent architectures over five maze configurations, including random and staticgoals. AUC (Area under learning curve), Score , and % Human are averaged over the best 5 hyperparameters.Evaluation of a single best performing agent is done through analysis on 100 test episodes. Goals gives thenumber of episodes where the goal was reached one more more times. Position Accuracy is the classificationaccuracy of the position decoder. Latency 1:>1 is the average time to the first goal acquisition vs. the averagetime to all subsequent goal acquisitions. Score is the mean score over the 100 test episodes.5 A NALYSIS5.1 P OSITION DECODINGIn order to evaluate the internal representation of location within the agent (either in the hidden unitshtof the last LSTM, or, in the case of the FF A3C agent, in the features fton the last layer of theconv-net), we train a position decoder that takes that representation as input, consisting of a linearclassifier with multinomial probability distribution over the discretized maze locations. Small mazes(510) have 50 locations, large mazes ( 915) have 135 locations, and the I-maze has 77 locations.Note that we do not backpropagate the gradients from the position decoder through the rest of thenetwork. The position decoder can only see the representation exposed by the model, not change it.An example of position decoding by the Nav A3C+ D2agent is shown in Figure 6, where the initialuncertainty in position is improved to near perfect position prediction as more observations areacquired by the agent. We observe that position entropy spikes after a respawn, then decreases oncethe agent acquires certainty about its location. Additionally, videos of the agent’s position decodingare linked in Appendix A. In these complex mazes, where localization is important for the purpose ofreaching the goal, it seems that position accuracy and final score are correlated, as shown in Table1. 
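The position decoder used for this analysis is just a multinomial logistic regression on the frozen hidden activations, trained without backpropagating into the agent. Below is a NumPy sketch on synthetic activations; the real decoder is fit to recorded LSTM states and the discretised maze locations, and only the dimensions (256 hidden units, 50 locations for the small mazes) are taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Synthetic stand-ins for recorded LSTM activations and discretised locations.
n_steps, n_hidden, n_locations = 5000, 256, 50
H = rng.standard_normal((n_steps, n_hidden))          # frozen hidden states h_t (no gradients into the agent)
true_W = rng.standard_normal((n_hidden, n_locations))
labels = (H @ true_W).argmax(axis=1)                  # pretend locations are linearly decodable from H

# Train a linear softmax classifier by gradient descent on the cross-entropy.
W = np.zeros((n_hidden, n_locations))
b = np.zeros(n_locations)
lr = 0.5
for _ in range(200):
    P = softmax(H @ W + b)                            # (n_steps, n_locations)
    G = P.copy()
    G[np.arange(n_steps), labels] -= 1.0              # dL/dlogits for the cross-entropy
    W -= lr * H.T @ G / n_steps
    b -= lr * G.mean(axis=0)

accuracy = (softmax(H @ W + b).argmax(axis=1) == labels).mean()
print(f"position decoding accuracy: {accuracy:.1%}")
```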
A pure feed-forward architecture still achieves 64.3% accuracy in a static maze with static goal,suggesting that the encoder memorizes the position in the weights and that this small maze is solvableby all the agents, with sufficient training time. In Random Goal 1, it is Nav A3C+ D2that achievesthe best position decoding performance (85.5% accuracy), whereas the FF A3C and the LSTM A3Carchitectures are at approximately 50%.In the I-maze, the opposite branches of the maze are nearly identical, with the exception of verysparse visual cues. We observe that once the goal is first found, the Nav A3C*+ D1Lagent is capableof directly returning to the correct branch in order to achieve the maximal score. However, the linearposition decoder for this agent is only 68.5% accurate, whereas it is 87.8% in the plain LSTM A3Cagent. We hypothesize that the symmetry of the I-maze will induce a symmetric policy that need notbe sensitive to the exact position of the agent (see analysis below).7Published as a conference paper at ICLR 2017Figure 5: Trajectories of the Nav A3C*+ D1Lagent in the I-maze (left) and of the Nav A3C+ D2random goalmaze 1 (right) over the course of one episode. At the beginning of the episode (gray curve on the map), theagent explores the environment until it finds the goal at some unknown location (red box). During subsequentrespawns (blue path), the agent consistently returns to the goal. The value function, plotted for each episode,rises as the agent approaches the goal. Goals are plotted as vertical red lines.Figure 6: Trajectory of the Nav A3C+ D2agent in the random goal maze 1, overlaid with the position probabilitypredictions predicted by a decoder trained on LSTM hidden activations, taken at 4 steps during an episode.Initial uncertainty gives way to accurate position prediction as the agent navigates.A desired property of navigation agents in our Random Goal tasks is to be able to first find the goal,and reliably return to the goal via an efficient route after subsequent re-spawns. The latency columnin Table 1 shows that the Nav A3C+ D2agents achieve the lowest latency to goal once the goal hasbeen discovered (the first number shows the time in seconds to find the goal the first time, and thesecond number is the average time for subsequent finds). Figure 5 shows clearly how the agent findsthe goal, and directly returns to that goal for the rest of the episode. For Random Goal 2, none of theagents achieve lower latency after initial goal acquisition; this is presumably due to the larger, morechallenging environment.5.2 S TACKED LSTM GOAL ANALYSISFigure 7(a) shows shows the trajectories traversed by an agent for each of the four goal locations.After an initial exploratory phase to find the goal, the agent consistently returns to the goal location.We visualize the agent’s policy by applying tSNE dimension reduction (Maaten & Hinton, 2008)to the cellactivations at each step of the agent for each of the four goal locations. Whilst clusterscorresponding to each of the four goal locations are clearly distinct in the LSTM A3C agent, thereare 2 main clusters in the Nav A3C agent – with trajectories to diagonally opposite arms of the mazerepresented similarly. Given that the action sequence to opposite arms is equivalent (e.g. straight, turnleft twice for top left and bottom right goal locations), this suggests that the Nav A3C policy-dictatingLSTM maintains an efficient representation of 2 sub-policies (i.e. 
rather than 4 independent policies)– with critical information about the currently relevant goal provided by the additional LSTM.5.3 I NVESTIGATING DIFFERENT COMBINATIONS OF AUXILIARY TASKSOur results suggest that depth prediction from the policy LSTM yields optimal results. However,several other auxiliary tasks have been concurrently introduced in (Jaderberg et al., 2017), and thuswe provide a comparison of reward prediction against depth prediction. Following that paper, weimplemented two additional agent architectures, one performing reward prediction from the convnetusing a replay buffer, called Nav A3C*+ R, and one combining reward prediction from the convnetand depth prediction from the LSTM (Nav A3C+ RD 2). Table 2 suggests that reward prediction (NavA3C*+R) improves upon the plain stacked LSTM architecture (Nav A3C*) but not as much as depthprediction from the policy LSTM (Nav A3C+ D2). Combining reward prediction and depth prediction(Nav A3C+RD 2) yields comparable results to depth prediction alone (Nav A3C+ D2); normalisedaverage AUC values are respectively 0.995 vs. 0.981. Future work will explore other auxiliary tasks.8Published as a conference paper at ICLR 2017(a)Agent trajectories for episodes withdifferent goal locations(b)LSTM activations from A3C agent (c) LSTM activations from NavA3C*+ D1LagentFigure 7: LSTM cell activations of LSTM A3C and Nav A3C*+ D1Lagents from the I-Maze collected overmultiple episodes and reduced to 2 dimensions using tSNE, then coloured to represent the goal location.Policy-dictating LSTM of Nav A3C agent shown.Navigation agent architectureMaze Nav A3C* Nav A3C+ D1 Nav A3C+ D2 Nav A3C+ D1D2 Nav A3C*+ R Nav A3C+ RD2I-Maze 143.3 196.7 203.5 197.2 128.2 191.8Static 1 60.1 103.2 104.3 100.3 86.9 105.1Static 2 59.9 153.1 157.6 151.6 100.6 155.5Random Goal 1 45.5 57.6 71.1 63.2 54.4 72.3Random Goal 2 37.0 66.0 82.1 75.1 68.3 80.1Table 2: Comparison of five navigation agent architectures over five maze configurations with random andstatic goals, including agents performing reward prediction Nav A3C*+ Rand Nav A3C+ RD 2, where rewardprediction is implemented following (Jaderberg et al., 2017). We report the AUC (Area under learning curve),averaged over the best 5 hyperparameters.6 C ONCLUSIONWe proposed a deep RL method, augmented with memory and auxiliary learning targets, for trainingagents to navigate within large and visually rich environments that include frequently changingstart and goal locations. Our results and analysis highlight the utility of un/self-supervised auxiliaryobjectives, namely depth prediction and loop closure, in providing richer training signals that bootstraplearning and enhance data efficiency. Further, we examine the behavior of trained agents, their abilityto localise, and their network activity dynamics, in order to analyse their navigational abilities.Our approach of augmenting deep RL with auxiliary objectives allows end-end learning and mayencourage the development of more general navigation strategies. Notably, our work with auxiliarylosses is related to (Jaderberg et al., 2017) which independently looks at data efficiency whenexploiting auxiliary losses. One difference between the two works is that our auxiliary losses areonline (for the current frame) and do not rely on any form of replay. Also the explored losses are verydifferent in nature. Finally our focus is on the navigation domain and understanding if navigationemerges as a bi-product of solving an RL problem, while Jaderberg et al. 
(2017) is concerned withdata efficiency for any RL-task.Whilst our best performing agents are relatively successful at navigation, their abilities would bestretched if larger demands were placed on rapid memory (e.g. in procedurally generated mazes),due to the limited capacity of the stacked LSTM in this regard. It will be important in the future tocombine visually complex environments with architectures that make use of external memory (Graveset al., 2016; Weston et al., 2014; Olton et al., 1979) to enhance the navigational abilities of agents.Further, whilst this work has focused on investigating the benefits of auxiliary tasks for developingthe ability to navigate through end-to-end deep reinforcement learning, it would be interesting forfuture work to compare these techniques with SLAM-based approaches.ACKNOWLEDGEMENTS9Published as a conference paper at ICLR 2017We would like to thank Alexander Pritzel, Thomas Degris and Joseph Modayil for useful discussions,Charles Beattie, Julian Schrittwieser, Marcus Wainwright, and Stig Petersen for environment designand development, and Amir Sadik and Sarah York for expert human game testing.REFERENCESTrevor Barron, Matthew Whitehead, and Alan Yeung. Deep reinforcement learning in a 3-d block-world environment. In Deep Reinforcement Learning: Frontiers and Challenges, IJCAI , 2016.Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich KÃijttler,Andrew Lefrancq, Simon Green, Victor Valdes, Amir Sadik, Julian Schrittwieser, Keith Anderson,Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis,Shane Legg, and Stig Petersen. Deepmind lab. In arXiv , 2016. URL https://arxiv.org/abs/1612.03801 .MWM Gamini Dissanayake, Paul Newman, Steve Clark, Hugh F. Durrant-Whyte, and MichaelCsorba. A solution to the simultaneous localization and map building (slam) problem. IEEETransactions on Robotics and Automation , 17(3):229–241, 2001.David Eigen, Christian Puhrsch, and Rob Fergus. Depth map prediction from a single image using amulti-scale deep network. In Proc. of Neural Information Processing Systems, NIPS , 2014.Alex Graves, Mohamed Abdelrahman, and Geoffrey Hinton. Speech recognition with deep recurrentneural networks. In Proceedings of the International Conference on Acoustics, Speech and SignalProcessing, ICASSP , 2013.Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwi ́nska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al.Hybrid computing using a neural network with dynamic external memory. Nature , 2016.Matthew J. Hausknecht and Peter Stone. Deep recurrent q-learning for partially observable mdps.Proc. of Conf. on Artificial Intelligence, AAAI , 2015.Max Jaderberg, V olodymir Mnih, Wojciech Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, andKoray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. In Submitted toInt’l Conference on Learning Representations, ICLR , 2017.Jan Koutnik, Giuseppe Cuccu, JÃijrgen Schmidhuber, and Faustino Gomez. Evolving large-scaleneural networks for vision-based reinforcement learning. In Proceedings of the 15th annualconference on Genetic and evolutionary computation, GECCO , 2013.Tejas D. Kulkarni, Ardavan Saeedi, Simanta Gautam, and Samuel J. Gershman. Deep successorreinforcement learning. CoRR , abs/1606.02396, 2016. URL http://arxiv.org/abs/1606.02396 .Guillaume Lample and Devendra Singh Chaplot. Playing FPS games with deep reinforcementlearning. 
CoRR , 2016. URL http://arxiv.org/abs/1609.05521 .Xiujun Li, Lihong Li, Jianfeng Gao, Xiaodong He, Jianshu Chen, Li Deng, and Ji He. Recurrentreinforcement learning: A hybrid approach. In Proceedings of the International Conference onLearning Representations, ICLR , 2016. URL https://arxiv.org/abs/1509.03044 .Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of MachineLearning Research , 9(Nov):2579–2605, 2008.Piotr Mirowski, Marc’Aurelio Ranzato, and Yann LeCun. Dynamic auto-encoders for semanticindexing. In NIPS Deep Learning and Unsupervised Learning Workshop , 2010.V olodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, et al. Human-levelcontrol through deep reinforcement learning. Nature , 518:529–533, 2015.V olodymyr Mnih, Adrià ̆a Puigdomà ́lnech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap,Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcementlearning. In Proc. of Int’l Conf. on Machine Learning, ICML , 2016.10Published as a conference paper at ICLR 2017Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, et al. Massivelyparallel methods for deep reinforcement learning. In Proceedings of the International Conferenceon Machine Learning Deep Learning Workshop, ICML , 2015.Karthik Narasimhan, Tejas D. Kulkarni, and Regina Barzilay. Language understanding for text-basedgames using deep reinforcement learning. In Proc. of Empirical Methods in Natural LanguageProcessing, EMNLP , 2015.Junhyuk Oh, Valliappa Chockalingam, Satinder P. Singh, and Honglak Lee. Control of memory,active perception, and action in minecraft. In Proc. of International Conference on MachineLearning, ICML , 2016.David S Olton, James T Becker, and Gail E Handelmann. Hippocampus, space, and memory.Behavioral and Brain Sciences , 2(03):313–322, 1979.Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. How to construct deeprecurrent neural networks. arXiv preprint arXiv:1312.6026 , 2013.Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervisedlearning with ladder networks. In Advances in Neural Information Processing Systems, NIPS ,2015.Steven C Suddarth and YL Kergosien. Rule-injection hints as a means of improving networkperformance and learning time. In Neural Networks , pp. 120–129. Springer, 1990.Richard S Sutton, Doina Precup, and Satinder Singh. Between mdps and semi-mdps: A frameworkfor temporal abstraction in reinforcement learning. Artificial intelligence , 112(1):181–211, 1999.Lei Tai and Ming Liu. Towards cognitive exploration through deep reinforcement learning for mobilerobots. In arXiv , 2016. URL https://arxiv.org/abs/1610.01733 .Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J. Mankowitz, and Shie Mannor. A deephierarchical approach to lifelong learning in minecraft. CoRR , abs/1604.07255, 2016. URLhttp://arxiv.org/abs/1604.07255 .Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5 – rmsprop: Divide the gradient by a runningaverage of its recent magnitude. In Coursera: Neural Networks for Machine Learning , volume 4,2012.A. van den Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. 2016.Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprintarXiv:1410.3916 , 2014.Yuting Zhang, Kibok Lee, and Honglak Lee. Augmenting supervised neural networks with unsu-pervised objectives for large-scale image classification. In Proc. 
of International Conference onMachine Learning, ICML , 2016.Junbo Zhao, Michaël Mathieu, Ross Goroshin, and Yann LeCun. Stacked what-where auto-encoders.Int’l Conf. on Learning Representations (Workshop), ICLR , 2015. URL http://arxiv.org/abs/1506.02351 .Yuke Zhu, Roozbeh Mottaghi, Eric Kolve, Joseph J. Lim, Abhinav Gupta, Li Fei-Fei, and AliFarhadi. Target-driven visual navigation in indoor scenes using deep reinforcement learning.CoRR , abs/1609.05143, 2016. URL http://arxiv.org/abs/1609.05143 .11Published as a conference paper at ICLR 2017Supplementary MaterialA V IDEOS OF TRAINED NAVIGATION AGENTSWe show the behaviour of Nav A3C*+ D1Lagent in 5 videos, corresponding to the 5 navigationenvironments: I-maze5, (small) static maze6, (large) static maze7, (small) random goal maze8and(large) random goal maze9. Each video shows a high-resolution video (the actual inputs to the agentare down-sampled to 84 84 RGB images), the value function over time (with fruit reward and goalacquisitions), the layout of the mazes with consecutive trajectories of the agent marked in differentcolours and the output of the trained position decoder, overlayed on top of the maze layout.B N ETWORK ARCHITECTURE AND TRAININGB.1 T HE ONLINE MULTI -LEARNER ALGORITHM FOR MULTI -TASK LEARNINGWe introduce a class of neural network-based agents that have modular structures and that are trainedon multiple tasks, with inputs coming from different modalities (vision, depth, past rewards and pastactions). Implementing our agent architecture is simplified by its modular nature. Essentially, weconstruct multiple networks, one per task, using shared building blocks, and optimise these networksjointly. Some modules, such as the conv-net used for perceiving visual inputs, or the LSTMs used forlearning the navigation policy, are shared among multiple tasks, while other modules, such as depthpredictorgdor loop closure predictor gl, are task-specific. The navigation network that outputs thepolicy and the value function is trained using reinforcement learning, while the depth prediction andloop closure prediction networks are trained using self-supervised learning.Within each thread of the asynchronous training environment, the agent plays on its own episode ofthe game environment, and therefore sees observation and reward pairs f(st;rt)gand takes actionsthat are different from those experienced by agents from the other, parallel threads. Within a thread,the multiple tasks (navigation, depth and loop closure prediction) can be trained at their own schedule,and they add gradients to the shared parameter vector as they arrive. Within each thread, we use aflag-based system to subordinate gradient updates to the A3C reinforcement learning procedure.B.2 N ETWORK AND TRAINING DETAILSFor all the experiments we use an encoder model with 2 convolutional layers followed by a fullyconnected layer, or recurrent layer(s), from which we predict the policy and value function. Thearchitecture is similar to the one in (Mnih et al., 2016). The convolutional layers are as follows. Thefirst convolutional layer has a kernel of size 8x8 and a stride of 4x4, and 16 feature maps. The secondlayer has a kernel of size 4x4 and a stride of 2x2, and 32 feature maps. The fully connected layer,in the FF A3C architecture in Figure 2a has 256 hidden units (and outputs visual features ft). 
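For concreteness, the visual encoder just described (8x8 kernels with stride 4 and 16 feature maps, then 4x4 kernels with stride 2 and 32 maps, then a 256-unit fully connected layer on an 84x84 RGB input) can be written out directly. The PyTorch framework and the ReLU nonlinearities below are our choices, following the Mnih et al. (2016) convention the text references, not a claim about the authors' code.

```python
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """Visual encoder: 84x84 RGB observation -> 256-d feature vector f_t."""
    def __init__(self, feature_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4),   # -> 16 x 20 x 20
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2),  # -> 32 x 9 x 9
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 9 * 9, feature_dim),
            nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

encoder = ConvEncoder()
f_t = encoder(torch.zeros(1, 3, 84, 84))
print(f_t.shape)   # torch.Size([1, 256])
```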
TheLSTM in the LSTM A3C architecture has 256 hidden units (and outputs LSTM hidden activations ht).The LSTMs in Figure 2c and 2d are fed extra inputs (past reward rt1, previous action atexpressedas a one-hot vector of dimension 8 and agent-relative lateral and rotational velocity vtencoded by a6-dimensional vector), which are all concatenated to vector ft. The Nav A3C architectures (Figure2c,d) have a first LSTM with 64 or 128 hiddens and a second LSTM with 256 hiddens. The depthpredictor modules gd,g0dand the loop closure detection module glare all single-layer MLPs with 128hidden units. The depth MLPs are followed by 64 independent 8-dimensional softmax outputs (oneper depth pixel). The loop closure MLP is followed by a 2-dimensional softmax output. We illustrateon Figure 8 the architecture of the Nav A3C+D+L+Dr agent.Depth is taken as the Z-buffer from the Labyrinth environment (with values between 0 and 255),divided by 255 and taken to power 10 to spread the values in interval [0;1]. We empirically decidedto use the following quantization: f0;0:05;0:175;0:3;0:425;0:55;0:675;0:8;1gto ensure a uniform5Video of the Nav A3C*+ D1Lagent on the I-maze: https://youtu.be/PS4iJ7Hk_BU6Video of the Nav A3C*+ D1Lagent on static maze 1: https://youtu.be/-HsjQoIou_c7Video of the Nav A3C*+ D1Lagent on static maze 2: https://youtu.be/kH1AvRAYkbI8Video of the Nav A3C*+ D1Lagent on random goal maze 1: https://youtu.be/5IBT2UADJY09Video of the Nav A3C*+ D1Lagent on random goal maze 2: https://youtu.be/e10mXgBG9yo1Published as a conference paper at ICLR 2017168x8/4x4384x84324x4/2x2256128128264x86425616881xtvtat1rt1⇡Vfthtgl(ht)gd(ft)12864x8gd(ft)gl(ht)’ Figure 8: Details of the architecture of the Nav A3C+D+L+Dr agent, taking in RGB visual inputs xt, pastrewardrt1, previous action at1as well as agent-relative velocity vt, and producing policy , value functionV, depth predictions gd(ft)andg0d(ht)as well as loop closure detection gl(ht).binning across 8 classes. The previous version of the agent had a single depth prediction MLP gdforregressing 816 = 128 depth pixels from the convnet outputs ft.The parameters of each of the modules point to a subset of a common vector of parameters. Weoptimise these parameters using an asynchronous version of RMSProp (Tieleman & Hinton, 2012).(Nair et al., 2015) was a recent example of asynchronous and parallel gradient updates in deepreinforcement learning; in our case, we focus on the specific Asynchronous Advantage Actor Critic(A3C) reinforcement learning procedure in (Mnih et al., 2016).Learning follows closely the paradigm described in (Mnih et al., 2016). We use 16 workers and thesame RMSProp algorithm without momentum or centering of the variance. Gradients are computedover non-overlaping chunks of the episode. The score for each point of a training curve is the averageover all the episodes the model gets to finish in 5e4environment steps.The whole experiments are run for a maximum of 1e8environment step. The agent has an actionrepeat of 4 as in (Mnih et al., 2016), which means that for 4 consecutive steps the agent will use thesame action picked at the beginning of the series. For this reason through out the paper we actuallyreport results in terms of agent perceived steps rather than environment steps. 
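The depth-target recipe given above (normalise the Z-buffer to [0, 1], raise to the power 10, then bin into 8 non-uniform classes with the stated edges) is easy to reproduce. In the sketch below, the crop and subsample to the 4x16 target resolution are crude stand-ins for the paper's exact preprocessing, which is only described qualitatively.

```python
import numpy as np

BIN_EDGES = np.array([0.0, 0.05, 0.175, 0.3, 0.425, 0.55, 0.675, 0.8, 1.0])

def depth_targets(z_buffer, out_shape=(4, 16)):
    """Turn a raw Z-buffer (uint8, HxW) into 8-class depth targets of shape out_shape.

    Follows the recipe in Appendix B: normalise to [0, 1], raise to the power 10
    to spread the values, then quantise into 8 non-uniform bands. The crop and
    subsample below are simple stand-ins for the paper's cropping of floor/ceiling.
    """
    depth = (z_buffer.astype(np.float64) / 255.0) ** 10
    h, w = depth.shape
    depth = depth[h // 4: 3 * h // 4]                        # drop top and bottom quarter (assumption)
    rows = np.linspace(0, depth.shape[0] - 1, out_shape[0]).astype(int)
    cols = np.linspace(0, depth.shape[1] - 1, out_shape[1]).astype(int)
    depth = depth[np.ix_(rows, cols)]                        # subsample to the low target resolution
    # np.digitize with the 7 inner edges yields class indices 0..7 for values in [0, 1]
    return np.clip(np.digitize(depth, BIN_EDGES[1:-1]), 0, 7)

z = np.random.default_rng(0).integers(0, 256, size=(84, 84), dtype=np.uint8)
targets = depth_targets(z)
print(targets.shape, targets.min(), targets.max())   # (4, 16) with classes in [0, 7]
```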
That is, the maximalnumber of agent perceived step that we do for any particular run is 2:5e7.In our grid we sample hyper-parameters from categorical distributions:Learning rate was sampled from [104;5104].Strength of the entropy regularization from [104;103].Rewards were not scaled and not clipped in the new set of experiments. In our previous setof experiments, rewards were scaled by a factor from f0:3;0:5gand clipped to 1 prior toback-propagation in the Advantage Actor-Critic algorithm.Gradients are computed over non-overlaping chunks of 50 or 75 steps of the episode. In ourprevious set of experiments, we used chunks of 100 steps.The auxiliary tasks, when used, have hyperparameters sampled from:Coefficient dof the depth prediction loss from convnet features Ldsampled fromf3:33;10;33g.Coefficient 0dof the depth prediction loss from LSTM hiddens Ld0sampled fromf1;3:33;10g.Coefficientlof the loop closure prediction loss Llsampled fromf1;3:33;10g.Loop closure uses the following thresholds: maximum distance for position similarity 1= 1squareand minimum distance for removing trivial loop-closures 2= 2squares.2Published as a conference paper at ICLR 2017(a)Random Goal maze (small): comparison of reward clipping (b)Random Goal maze (small): comparison of depth predictionFigure 9: Results are averaged over the top 5 random hyperparameters for each agent-task configuration. Star inthe label indicates the use of reward clipping. Please see text for more details.C A DDITIONAL RESULTSC.1 R EWARD CLIPPINGFigure 9 shows additional learning curves. In particular in the left plot we show that the baselines(A3C FF and A3C LSTM) as well as Nav A3C agent without auxiliary losses, perform worse withoutreward clipping than with reward clipping. It seems that removing reward clipping makes learningunstable in absence of auxiliary tasks. For this particular reason we chose to show the baselines withreward clipping in our main results.C.2 D EPTH PREDICTION AS REGRESSION OR CLASSIFICATION TASKSThe right subplot of Figure 9 compares having depth as an input versus as a target. Note that usingRGBD inputs to the Nav A3C agent performs even worse than predicting depth as a regression task,and in general is worse than predicting depth as a classification task.C.3 N ON-NAVIGATION TASKS IN 3D MAZE ENVIRONMENTSWe have evaluated the behaviour of the agents introduced in this paper, as well as agents withreward prediction, introduced in (Jaderberg et al., 2017) (Nav A3C*+ R) and with a combination ofreward prediction from the convnet and depth prediction from the policy LSTM (Nav A3C+ RD 2),on different 3D maze environments with non-navigation specific tasks. In the first environment,Seek-Avoid Arena, there are apples (yielding 1 point) and lemons (yielding -1 point) disposed inan arena, and the agents needs to pick all the apples before respawning; episodes last 20 seconds.The second environment, Stairway to Melon, is a thin square corridor; in one direction, there is alemon followed by a stairway to a melon (10 points, resets the level) and in the other direction are7 apples and a dead end, with the melon visible but not reachable. The agent spawns between thelemon and the apples with a random orientation. Both environments have been released in DeepMindLab (Beattie et al., 2016). These environments do not require navigation skills such as shortest pathplanning, but a simple reward identification (lemon vs. 
apple or melon) and persistent exploration.As Figure 10 shows, there is no major difference between auxiliary tasks related to depth predictionor reward prediction. Depth prediction boosts the performance of the agent beyond that of the stackedLSTM architecture, hinting at a more general applicability of depth prediction beyond navigationtasks.C.4 S ENSITIVITY TOWARDS HYPER -PARAMETER SAMPLINGFor each of the experiments in this paper, 64 replicas were run with hyperparameters (learning rate,entropy cost) sampled from the same interval. Figure 11 shows that the Nav architectures with3Published as a conference paper at ICLR 2017(a)Seek-Avoid (learning curves) (b)Stairway to Melon (learning curves)(c)Seek-Avoid (layout) (d)Stairway to Melon (layout)Figure 10: Comparison of agent architectures over non-navigation maze configurations, Seek-Avoid Arena andStairway to Melon, described in details in (Beattie et al., 2016). Image credits for (c) and (d): (Jaderberg et al.,2017).(a)Static maze (small) (b)Random Goal maze (large) (c)Random Goal I-mazeFigure 11: Plot of the Area Under the Curve (AUC) of the rewards achieved by the agents, across differentexperiments and on 3 different tasks: large static maze with fixed goals, large static maze with comparable layoutbut with dynamic goals, and the I-maze. The reward AUC values are computed for each replica; 64 replicaswere run per experiment and the reward AUC values are sorted by decreasing value.auxiliary tasks achieve higher results for a comparatively larger number of replicas, hinting at the factthat auxiliary tasks make learning more robust to the choice of hyperparameters.C.5 A SYMPTOTIC PERFORMANCE OF THE AGENTSFinally, we compared the asymptotic performance of the agents, both in terms of navigation (finalrewards obtained at the end of the episode) and in terms of their representation in the policy LSTM.Rather than visualising the convolutional filters, we quantify the change in representation, with and4Published as a conference paper at ICLR 2017Agent architectureFrames Performance LSTM A3C* Nav A3C+ D2120M Score (mean top 5) 57 103Position Acc 33.4 72.4240M Score (mean top 5) 90 114Position Acc 64.1 80.6Table 3: Asymptotic performance analysis of two agents in the Random Goal 2 maze, comparing training for120M Labyrinth frames vs. 240M frames.without auxiliary task, in terms of position decoding, following the approach explained in Section 5.1.Specifically, we compare the baseline agent (LSTM A3C*) to a navigation agent with one auxiliarytask (depth prediction), that gets about twice as many gradient updates for the same number of framesseen in the environment: once for the RL task and once for the auxiliary depth prediction task. AsTable 3 shows, the performance of the baseline agent as well as the position decoding accuracy dosignificantly increase after twice the number of training steps (going from 57 points to 90 points, andfrom 33.4% to 66.5%, but do not reach the performance and position decoding accuracy of the NavA3C+D2agent after half the number of training frames. For this reason, we believe that the auxiliarytask do more than simply accelerate training.5
S1LSCjZNg
SJMGPrcle
ICLR.cc/2017/conference/-/paper254/official/review
{"title": "Review", "rating": "7: Good paper, accept", "review": "This relatively novel work proposes to augment current RL models by adding self-supervised tasks encouraging better internal representations. \nThe proposed tasks are depth prediction and loop closure detection. While these tasks assume a 3D environment as well some position information, such priors are well suited to a large variety of tasks pertaining to navigation and robotics.\n\nExtensive experiments suggest to incorporating such auxiliary tasks increase performance and to a large extent learning speed.\nAdditional analysis of value functions and internal representations suggest that some structure is being discovered by the model, which would not be without the auxiliary tasks.\n\n\nWhile specific to 3D-environment tasks, this work provides additional proof that using input data in addition to sparse external reward signals helps to boost learning speed as well as learning better internal representation. It is original, clearly presented, and strongly supported by empirical evidence.\n\nOne small downside of the experimental method (or maybe just the results shown) is that by picking top-5 runs, it is hard to judge whether such a model is better suited to the particular hyperparameter range that was chosen, or is simply more robust to these hyperparameter settings. Maybe an analysis of performance as a function of hyperparameters would help confirm the superiority of the approach to the baselines. My own suspicion is that adding auxiliary tasks would make the model robust to bad hyperparameters.\n\nAnother downside is that the authors dismiss navigation literature as \"not RL\". I sympathize with the limit on the number of things that can fit in a paper, but some experimental comparison with such literature may have proven insightful, if just in measuring the quality of the learned representations.\n", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Learning to Navigate in Complex Environments
["Piotr Mirowski", "Razvan Pascanu", "Fabio Viola", "Hubert Soyer", "Andy Ballard", "Andrea Banino", "Misha Denil", "Ross Goroshin", "Laurent Sifre", "Koray Kavukcuoglu", "Dharshan Kumaran", "Raia Hadsell"]
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks to bootstrap learning. In particular we consider jointly learning the goal-driven reinforcement learning problem with an unsupervised depth prediction task and a self-supervised loop closure classification task. Using this approach we can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, that show that the agent implicitly learns key navigation abilities, with only sparse rewards and without direct supervision.
["Deep learning", "Reinforcement Learning"]
https://openreview.net/forum?id=SJMGPrcle
https://openreview.net/pdf?id=SJMGPrcle
https://openreview.net/forum?id=SJMGPrcle&noteId=S1LSCjZNg
Published as a conference paper at ICLR 2017LEARNING TO NAVIGATEINCOMPLEX ENVIRONMENTSPiotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard,Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu,Dharshan Kumaran, Raia HadsellDeepMindLondon, UK{piotrmirowski, razp, fviola, soyer, aybd, abanino, mdenil, goroshin, sifre,korayk, dkumaran, raia} @google.comABSTRACTLearning to navigate in complex environments with dynamic elements is an impor-tant milestone in developing AI agents. In this work we formulate the navigationquestion as a reinforcement learning problem and show that data efficiency and taskperformance can be dramatically improved by relying on additional auxiliary tasksleveraging multimodal sensory inputs. In particular we consider jointly learningthe goal-driven reinforcement learning problem with auxiliary depth predictionand loop closure classification tasks. This approach can learn to navigate from rawsensory input in complicated 3D mazes, approaching human-level performanceeven under conditions where the goal location changes frequently. We providedetailed analysis of the agent behaviour1, its ability to localise, and its networkactivity dynamics, showing that the agent implicitly learns key navigation abilities.1 I NTRODUCTIONThe ability to navigate efficiently within an environment is fundamental to intelligent behavior.Whilst conventional robotics methods, such as Simultaneous Localisation and Mapping (SLAM),tackle navigation through an explicit focus on position inference and mapping (Dissanayake et al.,2001), here we follow recent work in deep reinforcement learning (Mnih et al., 2015; 2016) andpropose that navigational abilities could emerge as the by-product of an agent learning a policythat maximizes reward. One advantage of an intrinsic, end-to-end approach is that actions are notdivorced from representation, but rather learnt together, thus ensuring that task-relevant features arepresent in the representation. Learning to navigate from reinforcement learning in partially observableenvironments, however, poses several challenges.First, rewards are often sparsely distributed in the environment, where there may be only one goallocation. Second, environments often comprise dynamic elements, requiring the agent to use memoryat different timescales: rapid one-shot memory for the goal location, together with short term memorysubserving temporal integration of velocity signals and visual observations, and longer term memoryfor constant aspects of the environment (e.g. boundaries, cues).To improve statistical efficiency we bootstrap the reinforcement learning procedure by augmentingour loss with auxiliary tasks that provide denser training signals that support navigation-relevantrepresentation learning. We consider two additional losses: the first one involves reconstruction of alow-dimensional depth map at each time step by predicting one input modality (the depth channel)from others (the colour channels). This auxiliary task concerns the 3D geometry of the environment,and is aimed to encourage the learning of representations that aid obstacle avoidance and short-termtrajectory planning. 
The second task directly invokes loop closure from SLAM: the agent is trainedto predict if the current location has been previously visited within a local trajectory.Denotes equal contribution1A video illustrating the navigation agents is available at: https://youtu.be/lNoaTyMZsWI1Published as a conference paper at ICLR 2017Figure 1: Views from a small 510maze, a large 915maze and an I-maze, with corresponding maze layoutsand sample agent trajectories. The mazes, which will be made public, have different textures and visual cues aswell as exploration rewards and goals (shown right).To address the memory requirements of the task we rely on a stacked LSTM architecture (Graveset al., 2013; Pascanu et al., 2013). We evaluate our approach using five 3D maze environments anddemonstrate the accelerated learning and increased performance of the proposed agent architecture.These environments feature complex geometry, random start position and orientation, dynamic goallocations, and long episodes that require thousands of agent steps (see Figure 1). We also providedetailed analysis of the trained agent to show that critical navigation skills are acquired. This isimportant as neither position inference nor mapping are directly part of the loss; therefore, rawperformance on the goal finding task is not necessarily a good indication that these skills are acquired.In particular, we show that the proposed agent resolves ambiguous observations and quickly localizesitself in a complex maze, and that this localization capability is correlated with higher task reward.2 A PPROACHWe rely on a end-to-end learning framework that incorporates multiple objectives. Firstly it tries tomaximize cumulative reward using an actor-critic approach. Secondly it minimizes an auxiliary lossof inferring the depth map from the RGB observation. Finally, the agent is trained to detect loopclosures as an additional auxiliary task that encourages implicit velocity integration.The reinforcement learning problem is addressed with the Asynchronous Advantage Actor-Critic(A3C) algorithm (Mnih et al., 2016) that relies on learning both a policy (atjst;)and value functionV(st;V)given a state observation st. Both the policy and value function share all intermediaterepresentations, both being computed using a separate linear layer from the topmost layer of themodel. The agent setup closely follows the work of (Mnih et al., 2016) and we refer to this work forthe details (e.g. the use of a convolutional encoder followed by either an MLP or an LSTM, the useof action repetition, entropy regularization to prevent the policy saturation, etc.). These details canalso be found in the Appendix B.The baseline that we consider in this work is an A3C agent (Mnih et al., 2016) that receives only RGBinput from the environment, using either a recurrent or a purely feed-forward model (see Figure 2a,b).The encoder for the RGB input (used in all other considered architectures) is a 3 layer convolutionalnetwork. To support the navigation capability of our approach, we also rely on the Nav A3C agent(Figure 2c) which employs a two-layer stacked LSTM after the convolutional encoder. We expand theobservations of the agents to include agent-relative velocity, the action sampled from the stochasticpolicy and the immediate reward, from the previous time step. We opt to feed the velocity andpreviously selected action directly to the second recurrent layer, with the first layer only receiving thereward. 
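A minimal sketch of this input wiring for the stacked-LSTM core is given below; the module and names are ours, and since the exact routing of the visual features into the second LSTM is not fully specified here, the sketch keeps the simplest version (features and past reward to the first LSTM; its output plus velocity and one-hot past action to the second):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_ACTIONS = 8  # discrete action set used in the mazes

class NavCore(nn.Module):
    """Stacked-LSTM core sketch: LSTM 1 gets f_t and r_{t-1}; LSTM 2 gets h1, v_t and a_{t-1}."""
    def __init__(self, feat_dim=256, hidden1=64, hidden2=256, vel_dim=6):
        super().__init__()
        self.lstm1 = nn.LSTMCell(feat_dim + 1, hidden1)
        self.lstm2 = nn.LSTMCell(hidden1 + vel_dim + NUM_ACTIONS, hidden2)
        self.policy_head = nn.Linear(hidden2, NUM_ACTIONS)
        self.value_head = nn.Linear(hidden2, 1)

    def forward(self, f_t, prev_reward, velocity, prev_action, state1, state2):
        a_onehot = F.one_hot(prev_action, NUM_ACTIONS).float()      # (batch, 8)
        h1, c1 = self.lstm1(torch.cat([f_t, prev_reward], dim=-1), state1)
        h2, c2 = self.lstm2(torch.cat([h1, velocity, a_onehot], dim=-1), state2)
        pi = F.softmax(self.policy_head(h2), dim=-1)                # policy over actions
        v = self.value_head(h2)                                     # value estimate
        return pi, v, (h1, c1), (h2, c2)
```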
We postulate that the first layer might be able to make associations between reward and visualobservations that are provided as context to the second layer from which the policy is computed.Thus, the observation stmay include an image xt2R3WH(whereWandHare the width and2Published as a conference paper at ICLR 2017xt rt-1 { vt, at-1}encρᬭxtencρᬭencρᬭLoop (L)Depth (D1 )a. FF A3C c. Nav A3C d. Nav A3C +D1D2Lxt rt-1 { vt, at-1}encρᬭxtb. LSTM A3C Depth (D2 )Figure 2: Different architectures: (a) is a convolutional encoder followed by a feedforward layer and policy ( )and value function outputs; (b) has an LSTM layer; (c) uses additional inputs (agent-relative velocity, reward,and action), as well as a stacked LSTM; and (d) has additional outputs to predict depth and loop closures.height of the image), the agent-relative lateral and rotational velocity vt2R6, the previous actionat12RNA, and the previous reward rt12R.Figure 2d shows the augmentation of the Nav A3C with the different possible auxiliary losses. Inparticular we consider predicting depth from the convolutional layer (we will refer to this choiceasD1), or from the top LSTM layer ( D2) or predicting loop closure ( L). The auxiliary losses arecomputed on the current frame via a single layer MLP. The agent is trained by applying a weightedsum of the gradients coming from A3C, the gradients from depth prediction (multiplied with d1;d2)and the gradients from the loop closure (scaled by l). More details of the online learning algorithmare given in Appendix B.2.1 D EPTH PREDICTIONThe primary input to the agent is in the form of RGB images. However, depth information, coveringthe central field of view of the agent, might supply valuable information about the 3D structure ofthe environment. While depth could be directly used as an input, we argue that if presented as anadditional loss it is actually more valuable to the learning process. In particular if the predictionloss shares representation with the policy, it could help build useful features for RL much faster,bootstrapping learning. Since we know from (Eigen et al., 2014) that a single frame can be enough topredict depth, we know this auxiliary task can be learnt. A comparison between having depth as inputversus as an additional loss is given in Appendix C, which shows significant gain for depth as a loss.Since the role of the auxiliary loss is just to build up the representation of the model, we do notnecessarily care about the specific performance obtained or nature of the prediction. We do careabout the data efficiency aspect of the problem and also computational complexity. If the loss is to beuseful for the main task, we should converge faster on it compared to solving the RL problem (usingless data samples), and the additional computational cost should be minimal. To achieve this we usea low resolution variant of the depth map, reducing the screen resolution to 4x16 pixels2.We explore two different variants for the loss. The first choice is to phrase it as a regression task, themost natural choice. While this formulation, combined with a higher depth resolution, extracts themost information, mean square error imposes a unimodal distribution (van den Oord et al., 2016).To address this possible issue, we also consider a classification loss, where depth at each positionis discretised into 8 different bands. The bands are non-uniformally distributed such that we paymore attention to far-away objects (details in Appendix B). 
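A hedged sketch of the classification variant of the depth loss, using the 128-unit MLP head and the 64 independent 8-way softmax outputs described in Appendix B, could look as follows (class and argument names are ours):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthClassHead(nn.Module):
    """MLP depth head sketch: 128 hidden units, then 64 depth pixels x 8 classes each."""
    def __init__(self, in_dim=256, n_pixels=64, n_classes=8, hidden=128):
        super().__init__()
        self.n_pixels, self.n_classes = n_pixels, n_classes
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_pixels * n_classes))

    def loss(self, features, depth_classes):
        # features: (batch, in_dim); depth_classes: (batch, 64) integer targets in {0, ..., 7}
        logits = self.mlp(features).view(-1, self.n_classes)        # (batch * 64, 8)
        return F.cross_entropy(logits, depth_classes.reshape(-1))
```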
The motivation for the classificationformulation is that while it greatly reduces the resolution of depth, it is more flexible from a learningperspective and can result in faster convergence (hence faster bootstrapping).2The image is cropped before being subsampled to lessen the floor and ceiling which have little relevantdepth information.3Published as a conference paper at ICLR 20172.2 L OOP CLOSURE PREDICTIONLoop closure, like depth, is valuable for a navigating agent, since can be used for efficient explorationand spatial reasoning. To produce the training targets, we detect loop closures based on the similarityof local position information during an episode, which is obtained by integrating 2D velocity overtime. Specifically, in a trajectory noted fp0;p1;:::;p Tg, whereptis the position of the agent at timet, we define a loop closure label ltthat is equal to 1 if the position ptof the agent is close to thepositionpt0at an earlier time t0. In order to avoid trivial loop closures on consecutive points of thetrajectory, we add an extra condition on an intermediary position pt00being far from pt. Thresholds 1and2provide these two limits. Learning to predict the binary loop label is done by minimizing theBernoulli lossLlbetweenltand the output of a single-layer output from the hidden representation htof the last hidden layer of the model, followed by a sigmoid activation.3 R ELATED WORKThere is a rich literature on navigation, primarily in the robotics literature. However, here we focus onrelated work in deep RL. Deep Q-networks (DQN) have had breakthroughs in extremely challengingdomains such as Atari (Mnih et al., 2015). Recent work has developed on-policy RL methods suchas advantage actor-critic that use asynchronous training of multiple agents in parallel (Mnih et al.,2016). Recurrent networks have also been successfully incorporated to enable state disambiguationin partially observable environments (Koutnik et al., 2013; Hausknecht & Stone, 2015; Mnih et al.,2016; Narasimhan et al., 2015).Deep RL has recently been used in the navigation domain. Kulkarni et al. (2016) used a feedforwardarchitecture to learn deep successor representations that enabled behavioral flexibility to rewardchanges in the MazeBase gridworld, and provided a means to detect bottlenecks in 3D VizDoom.Zhu et al. (2016) used a feedforward siamese actor-critic architecture incorporating a pretrainedResNet to support navigation to a target in a discretised 3D environment. Oh et al. (2016) investigatedthe performance of a variety of networks with external memory (Weston et al., 2014) on simplenavigation tasks in the Minecraft 3D block world environment. Tessler et al. (2016) also used theMinecraft domain to show the benefit of combining feedforward deep-Q networks with the learningof resuable skill modules (cf options: (Sutton et al., 1999)) to transfer between navigation tasks. Tai &Liu (2016) trained a convnet DQN-based agent using depth channel inputs for obstacle avoidance in3D environments. Barron et al. (2016) investigated how well a convnet can predict the depth channelfrom RGB in the Minecraft environment, but did not use depth for training the agent.Auxiliary tasks have often been used to facilitate representation learning (Suddarth & Kergosien,1990). 
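Returning to the loop-closure labelling defined in Section 2.2 above, one way such targets could be computed from integrated 2D positions is sketched below; the function name is ours, the thresholds follow Appendix B (eta1 = 1 square, eta2 = 2 squares), and requiring the intermediate point to lie between t' and t is our interpretation of the "intermediary position" condition:

```python
import numpy as np

def loop_closure_label(positions, t, eta1=1.0, eta2=2.0):
    """Binary target l_t for a trajectory of 2D positions (shape (T, 2)), per Section 2.2."""
    p_t = positions[t]
    for t_prime in range(t):
        if np.linalg.norm(positions[t_prime] - p_t) < eta1:        # revisiting an earlier spot
            between = positions[t_prime + 1:t]
            # require an excursion away from p_t in between, to rule out trivial closures
            if len(between) and np.max(np.linalg.norm(between - p_t, axis=1)) > eta2:
                return 1
    return 0
```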
Recently, the incorporation of additional objectives, designed to augment representationlearning through auxiliary reconstructive decoding pathways (Zhang et al., 2016; Rasmus et al., 2015;Zhao et al., 2015; Mirowski et al., 2010), has yielded benefits in large scale classification tasks. Indeep RL settings, however, only two previous papers have examined the benefit of auxiliary tasks.Specifically, Li et al. (2016) consider a supervised loss for fitting a recurrent model on the hiddenrepresentations to predict the next observed state, in the context of imitation learning of sequencesprovided by experts, and Lample & Chaplot (2016) show that the performance of a DQN agent in afirst-person shooter game in the VizDoom environment can be substantially enhanced by the additionof a supervised auxiliary task, whereby the convolutional network was trained on an enemy-detectiontask, with information about the presence of enemies, weapons, etc., provided by the game engine.In contrast, our contribution addresses fundamental questions of how to learn an intrinsic repre-sentation of space, geometry, and movement while simultaneously maximising rewards throughreinforcement learning. Our method is validated in challenging maze domains with random start andgoal locations.4 E XPERIMENTSWe consider a set of first-person 3D mazes from the DeepMind Lab environment (Beattie et al., 2016)(see Fig. 1) that are visually rich, with additional observations available to the agent such as inertial4Published as a conference paper at ICLR 2017(a)Static maze (small) (b)Static maze (large) (c)Random Goal I-maze(d)Random Goal maze (small) (e)Random Goal maze (large) (f)Random Goal maze (large): different formu-lation of depth predictionFigure 3: Rewards achieved by the agents on 5 different tasks: two static mazes (small and large) with fixedgoals, two static mazes with comparable layout but with dynamic goals and the I-maze. Results are averagedover the top 5 random hyperparameters for each agent-task configuration. Star in the label indicates the use ofreward clipping. Please see text for more details.information and local depth information.3The action space is discrete, yet allows finegrained control,comprising 8 actions: the agent can rotate in small increments, accelerate forward or backward orsideways, or induce rotational acceleration while moving. Reward is achieved in these environmentsby reaching a goal from a random start location and orientation. If the goal is reached, the agent isrespawned to a new start location and must return to the goal. The episode terminates when a fixedamount of time expires, affording the agent enough time to find the goal several times. There aresparse ‘fruit’ rewards which serve to encourage exploration. Apples are worth 1 point, strawberries 2points and goals are 10 points. Videos of the agent solving the maze are linked in Appendix A.In the static variant of the maze, the goal and fruit locations are fixed and only the agent’s startlocation changes. In the dynamic (Random Goal) variant, the goal and fruits are randomly placed onevery episode. Within an episode, the goal and apple locations stay fixed until the episode ends. Thisencourages an explore-exploit strategy, where the agent should initially explore the maze, then retainthe goal location and quickly refind it after each respawn. For both variants (static and random goal)we consider a small and large map. 
The small mazes are 510and episodes last for 3600 timesteps,and the large mazes are 915with 10800 steps (see Figure 1). The RGB observation is 8484.The I-Maze environment (see Figure 1, right) is inspired by the classic T-maze used to investigatenavigation in rodents (Olton et al., 1979): the layout remains fixed throughout, the agent spawns inthe central corridor where there are apple rewards and has to locate the goal which is placed in thealcove of one of the four arms. Because the goal is hidden in the alcove, the optimal agent behaviourmust rely on memory of the goal location in order to return to the goal using the most direct route.Goal location is constant within an episode but varies randomly across episodes.The different agent architectures described in Section 2 are evaluated by training on the five mazes.Figure 3 shows learning curves (averaged over the 5 top performing agents). The agents are afeedforward model (FF A3C), a recurrent model (LSTM A3C), the stacked LSTM version withvelocity, previous action and reward as input (Nav A3C), and Nav A3C with depth prediction fromthe convolution layer (Nav A3C+ D1), Nav A3C with depth prediction from the last LSTM layer(Nav A3C+D2), Nav A3C with loop closure prediction (Nav A3C+ L) as well as the Nav A3C with3The environments used in this paper are publicly available at https://github.com/deepmind/lab .5Published as a conference paper at ICLR 2017Figure 4: left: Example of depth predictions (pairs of ground truth and predicted depths), sampled every 40 steps.right: Example of loop closure prediction. The agent starts at the gray square and the trajectory is plotted ingray. Blue dots correspond to true positive outputs of the loop closure detector; red cross correspond to falsepositives and green cross to false negatives. Note the false positives that occur when the agent is actually a fewsquares away from actual loop closure.all auxiliary losses considered together (Nav A3C+ D1D2L). In each case we ran 64 experimentswith randomly sampled hyper-parameters (for ranges and details please see the appendix). The meanover the top 5 runs as well as the top 5 curves are plotted. Expert human scores, established by aprofessional game player, are compared to these results. The Nav A3C+ D2agents reach human-levelperformance on Static 1 and 2, and attain about 91% and 59% of human scores on Random Goal 1and 2.In Mnih et al. (2015) reward clipping is used to stabilize learning, technique which we employed inthis work as well. Unfortunately, for these particular tasks, this yields slightly suboptimal policiesbecause the agent does not distinguish apples (1 point) from goals (10 points). Removing the rewardclipping results in unstable behaviour for the base A3C agent (see Appendix C). However it seemsthat the auxiliary signal from depth prediction mediates this problem to some extent, resulting instable learning dynamics (e.g. Figure 3f, Nav A3C+ D1vs Nav A3C*+ D1). We clearly indicatewhether reward clipping is used by adding an asterisk to the agent name.Figure 3f also explores the difference between the two formulations of depth prediction, as a regressiontask or a classification task. We can see that the regression agent (Nav A3C*+ D1[MSE]) performsworse than one that does classification (Nav A3C*+ D1). This result extends to other maps, andwe therefore only use the classification formulation in all our other results4. 
Also we see thatpredicting depth from the last LSTM layer (hence providing structure to the recurrent layer, not justthe convolutional ones) performs better.We note some particular results from these learning curves. In Figure 3 (a and b), consider thefeedforward A3C model (red curve) versus the LSTM version (pink curve). Even though navigationseems to intrinsically require memory, as single observations could often be ambiguous, the feed-forward model achieves competitive performance on static mazes. This suggest that there might begood strategies that do not involve temporal memory and give good results, namely a reactive policyheld by the weights of the encoder, or learning a wall-following strategy. This motivates the dynamicenvironments that encourage the use of memory and more general navigation strategies.Figure 3 also shows the advantage of adding velocity, reward and action as an input, as well as theimpact of using a two layer LSTM (orange curve vs red and pink). Though this agent (Nav A3C)is better than the simple architectures, it is still relatively slow to train on all of the mazes. Webelieve that this is mainly due to the slower, data inefficient learning that is generally seen in pureRL approaches. Supporting this we see that adding the auxiliary prediction targets of depth andloop closure (Nav A3C+ D1D2L, black curve) speeds up learning dramatically on most of the mazes(see Table 1: AUC metric). It has the strongest effect on the static mazes because of the acceleratedlearning, but also gives a substantial and lasting performance increase on the random goal mazes.Although we place more value on the task performance than on the auxiliary losses, we report theresults from the loop closure prediction task. Over 100 test episodes of 2250 steps each, within alarge maze (random goal 2), the Nav A3C*+ D1Lagent demonstrated very successful loop detection,reaching an F-1 score of 0.83. A sample trajectory can be seen in Figure 4 (right).4An exception is the Nav A3C*+ D1Lagent on the I-maze (Figure 3c), which uses depth regression andreward clipping. 
While it does worse, we include it because some analysis is based on this agent.6Published as a conference paper at ICLR 2017Mean over top 5 agents Highest reward agentMaze Agent AUC Score % Human Goals Position Acc Latency 1:>1 ScoreI-Maze FF A3C* 75.5 98 - 94/100 42.2 9.3s:9.0s 102LSTM A3C* 112.4 244 - 100/100 87.8 15.3s:3.2s 203Nav A3C*+ D1L 169.7 266 - 100/100 68.5 10.7s:2.7s 252Nav A3C+ D2 203.5 268 - 100/100 62.3 8.8s:2.5s 269Nav A3C+ D1D2L 199.9 258 - 100/100 61.0 9.9s:2.5s 251Static 1 FF A3C* 41.3 79 83 100/100 64.3 8.8s:8.7s 84LSTM A3C* 44.3 98 103 100/100 88.6 6.1s:5.9s 110Nav A3C+ D2 104.3 119 125 100/100 95.4 5.9s:5.4s 122Nav A3C+ D1D2L 102.3 116 122 100/100 94.5 5.9s:5.4s 123Static 2 FF A3C* 35.8 81 47 100/100 55.6 24.2s:22.9s 111LSTM A3C* 46.0 153 91 100/100 80.4 15.5s:14.9s 155Nav A3C+ D2 157.6 200 116 100/100 94.0 10.9s:11.0s 202Nav A3C+ D1D2L 156.1 192 112 100/100 92.6 11.1s:12.0s 192Random Goal 1 FF A3C* 37.5 61 57.5 88/100 51.8 11.0:9.9s 64LSTM A3C* 46.6 65 61.3 85/100 51.1 11.1s:9.2s 66Nav A3C+ D2 71.1 96 91 100/100 85.5 14.0s:7.1s 91Nav A3C+ D1D2L 64.2 81 76 81/100 83.7 11.5s:7.2s 74.6Random Goal 2 FF A3C* 50.0 69 40.1 93/100 30.0 27.3s:28.2s 77LSTM A3C* 37.5 57 32.6 74/100 33.4 21.5s:29.7s 51.3Nav A3C*+ D1L 62.5 90 52.3 90/100 51.0 17.9s:18.4s 106Nav A3C+ D2 82.1 103 59 79/100 72.4 15.4s:15.0s 109Nav A3C+ D1D2L 78.5 91 53 74/100 81.5 15.9s:16.0s 102Table 1: Comparison of four agent architectures over five maze configurations, including random and staticgoals. AUC (Area under learning curve), Score , and % Human are averaged over the best 5 hyperparameters.Evaluation of a single best performing agent is done through analysis on 100 test episodes. Goals gives thenumber of episodes where the goal was reached one more more times. Position Accuracy is the classificationaccuracy of the position decoder. Latency 1:>1 is the average time to the first goal acquisition vs. the averagetime to all subsequent goal acquisitions. Score is the mean score over the 100 test episodes.5 A NALYSIS5.1 P OSITION DECODINGIn order to evaluate the internal representation of location within the agent (either in the hidden unitshtof the last LSTM, or, in the case of the FF A3C agent, in the features fton the last layer of theconv-net), we train a position decoder that takes that representation as input, consisting of a linearclassifier with multinomial probability distribution over the discretized maze locations. Small mazes(510) have 50 locations, large mazes ( 915) have 135 locations, and the I-maze has 77 locations.Note that we do not backpropagate the gradients from the position decoder through the rest of thenetwork. The position decoder can only see the representation exposed by the model, not change it.An example of position decoding by the Nav A3C+ D2agent is shown in Figure 6, where the initialuncertainty in position is improved to near perfect position prediction as more observations areacquired by the agent. We observe that position entropy spikes after a respawn, then decreases oncethe agent acquires certainty about its location. Additionally, videos of the agent’s position decodingare linked in Appendix A. In these complex mazes, where localization is important for the purpose ofreaching the goal, it seems that position accuracy and final score are correlated, as shown in Table1. 
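For concreteness, the linear position decoder described at the start of this section could be sketched as follows (class name and sizes are ours; 135 locations corresponds to the large 9x15 mazes, and detaching the activations reflects the fact that no gradients flow back into the agent):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionDecoder(nn.Module):
    """Linear multinomial decoder trained on detached agent activations (sketch)."""
    def __init__(self, hidden_dim=256, n_locations=135):
        super().__init__()
        self.linear = nn.Linear(hidden_dim, n_locations)

    def loss(self, agent_hidden, location_idx):
        # detach() keeps decoder gradients from flowing back into the agent, as described above
        logits = self.linear(agent_hidden.detach())
        return F.cross_entropy(logits, location_idx)
```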
A pure feed-forward architecture still achieves 64.3% accuracy in a static maze with static goal,suggesting that the encoder memorizes the position in the weights and that this small maze is solvableby all the agents, with sufficient training time. In Random Goal 1, it is Nav A3C+ D2that achievesthe best position decoding performance (85.5% accuracy), whereas the FF A3C and the LSTM A3Carchitectures are at approximately 50%.In the I-maze, the opposite branches of the maze are nearly identical, with the exception of verysparse visual cues. We observe that once the goal is first found, the Nav A3C*+ D1Lagent is capableof directly returning to the correct branch in order to achieve the maximal score. However, the linearposition decoder for this agent is only 68.5% accurate, whereas it is 87.8% in the plain LSTM A3Cagent. We hypothesize that the symmetry of the I-maze will induce a symmetric policy that need notbe sensitive to the exact position of the agent (see analysis below).7Published as a conference paper at ICLR 2017Figure 5: Trajectories of the Nav A3C*+ D1Lagent in the I-maze (left) and of the Nav A3C+ D2random goalmaze 1 (right) over the course of one episode. At the beginning of the episode (gray curve on the map), theagent explores the environment until it finds the goal at some unknown location (red box). During subsequentrespawns (blue path), the agent consistently returns to the goal. The value function, plotted for each episode,rises as the agent approaches the goal. Goals are plotted as vertical red lines.Figure 6: Trajectory of the Nav A3C+ D2agent in the random goal maze 1, overlaid with the position probabilitypredictions predicted by a decoder trained on LSTM hidden activations, taken at 4 steps during an episode.Initial uncertainty gives way to accurate position prediction as the agent navigates.A desired property of navigation agents in our Random Goal tasks is to be able to first find the goal,and reliably return to the goal via an efficient route after subsequent re-spawns. The latency columnin Table 1 shows that the Nav A3C+ D2agents achieve the lowest latency to goal once the goal hasbeen discovered (the first number shows the time in seconds to find the goal the first time, and thesecond number is the average time for subsequent finds). Figure 5 shows clearly how the agent findsthe goal, and directly returns to that goal for the rest of the episode. For Random Goal 2, none of theagents achieve lower latency after initial goal acquisition; this is presumably due to the larger, morechallenging environment.5.2 S TACKED LSTM GOAL ANALYSISFigure 7(a) shows shows the trajectories traversed by an agent for each of the four goal locations.After an initial exploratory phase to find the goal, the agent consistently returns to the goal location.We visualize the agent’s policy by applying tSNE dimension reduction (Maaten & Hinton, 2008)to the cellactivations at each step of the agent for each of the four goal locations. Whilst clusterscorresponding to each of the four goal locations are clearly distinct in the LSTM A3C agent, thereare 2 main clusters in the Nav A3C agent – with trajectories to diagonally opposite arms of the mazerepresented similarly. Given that the action sequence to opposite arms is equivalent (e.g. straight, turnleft twice for top left and bottom right goal locations), this suggests that the Nav A3C policy-dictatingLSTM maintains an efficient representation of 2 sub-policies (i.e. 
rather than 4 independent policies)– with critical information about the currently relevant goal provided by the additional LSTM.5.3 I NVESTIGATING DIFFERENT COMBINATIONS OF AUXILIARY TASKSOur results suggest that depth prediction from the policy LSTM yields optimal results. However,several other auxiliary tasks have been concurrently introduced in (Jaderberg et al., 2017), and thuswe provide a comparison of reward prediction against depth prediction. Following that paper, weimplemented two additional agent architectures, one performing reward prediction from the convnetusing a replay buffer, called Nav A3C*+ R, and one combining reward prediction from the convnetand depth prediction from the LSTM (Nav A3C+ RD 2). Table 2 suggests that reward prediction (NavA3C*+R) improves upon the plain stacked LSTM architecture (Nav A3C*) but not as much as depthprediction from the policy LSTM (Nav A3C+ D2). Combining reward prediction and depth prediction(Nav A3C+RD 2) yields comparable results to depth prediction alone (Nav A3C+ D2); normalisedaverage AUC values are respectively 0.995 vs. 0.981. Future work will explore other auxiliary tasks.8Published as a conference paper at ICLR 2017(a)Agent trajectories for episodes withdifferent goal locations(b)LSTM activations from A3C agent (c) LSTM activations from NavA3C*+ D1LagentFigure 7: LSTM cell activations of LSTM A3C and Nav A3C*+ D1Lagents from the I-Maze collected overmultiple episodes and reduced to 2 dimensions using tSNE, then coloured to represent the goal location.Policy-dictating LSTM of Nav A3C agent shown.Navigation agent architectureMaze Nav A3C* Nav A3C+ D1 Nav A3C+ D2 Nav A3C+ D1D2 Nav A3C*+ R Nav A3C+ RD2I-Maze 143.3 196.7 203.5 197.2 128.2 191.8Static 1 60.1 103.2 104.3 100.3 86.9 105.1Static 2 59.9 153.1 157.6 151.6 100.6 155.5Random Goal 1 45.5 57.6 71.1 63.2 54.4 72.3Random Goal 2 37.0 66.0 82.1 75.1 68.3 80.1Table 2: Comparison of five navigation agent architectures over five maze configurations with random andstatic goals, including agents performing reward prediction Nav A3C*+ Rand Nav A3C+ RD 2, where rewardprediction is implemented following (Jaderberg et al., 2017). We report the AUC (Area under learning curve),averaged over the best 5 hyperparameters.6 C ONCLUSIONWe proposed a deep RL method, augmented with memory and auxiliary learning targets, for trainingagents to navigate within large and visually rich environments that include frequently changingstart and goal locations. Our results and analysis highlight the utility of un/self-supervised auxiliaryobjectives, namely depth prediction and loop closure, in providing richer training signals that bootstraplearning and enhance data efficiency. Further, we examine the behavior of trained agents, their abilityto localise, and their network activity dynamics, in order to analyse their navigational abilities.Our approach of augmenting deep RL with auxiliary objectives allows end-end learning and mayencourage the development of more general navigation strategies. Notably, our work with auxiliarylosses is related to (Jaderberg et al., 2017) which independently looks at data efficiency whenexploiting auxiliary losses. One difference between the two works is that our auxiliary losses areonline (for the current frame) and do not rely on any form of replay. Also the explored losses are verydifferent in nature. Finally our focus is on the navigation domain and understanding if navigationemerges as a bi-product of solving an RL problem, while Jaderberg et al. 
(2017) is concerned withdata efficiency for any RL-task.Whilst our best performing agents are relatively successful at navigation, their abilities would bestretched if larger demands were placed on rapid memory (e.g. in procedurally generated mazes),due to the limited capacity of the stacked LSTM in this regard. It will be important in the future tocombine visually complex environments with architectures that make use of external memory (Graveset al., 2016; Weston et al., 2014; Olton et al., 1979) to enhance the navigational abilities of agents.Further, whilst this work has focused on investigating the benefits of auxiliary tasks for developingthe ability to navigate through end-to-end deep reinforcement learning, it would be interesting forfuture work to compare these techniques with SLAM-based approaches.ACKNOWLEDGEMENTS9Published as a conference paper at ICLR 2017We would like to thank Alexander Pritzel, Thomas Degris and Joseph Modayil for useful discussions,Charles Beattie, Julian Schrittwieser, Marcus Wainwright, and Stig Petersen for environment designand development, and Amir Sadik and Sarah York for expert human game testing.REFERENCESTrevor Barron, Matthew Whitehead, and Alan Yeung. Deep reinforcement learning in a 3-d block-world environment. In Deep Reinforcement Learning: Frontiers and Challenges, IJCAI , 2016.Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich KÃijttler,Andrew Lefrancq, Simon Green, Victor Valdes, Amir Sadik, Julian Schrittwieser, Keith Anderson,Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis,Shane Legg, and Stig Petersen. Deepmind lab. In arXiv , 2016. URL https://arxiv.org/abs/1612.03801 .MWM Gamini Dissanayake, Paul Newman, Steve Clark, Hugh F. Durrant-Whyte, and MichaelCsorba. A solution to the simultaneous localization and map building (slam) problem. IEEETransactions on Robotics and Automation , 17(3):229–241, 2001.David Eigen, Christian Puhrsch, and Rob Fergus. Depth map prediction from a single image using amulti-scale deep network. In Proc. of Neural Information Processing Systems, NIPS , 2014.Alex Graves, Mohamed Abdelrahman, and Geoffrey Hinton. Speech recognition with deep recurrentneural networks. In Proceedings of the International Conference on Acoustics, Speech and SignalProcessing, ICASSP , 2013.Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwi ́nska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al.Hybrid computing using a neural network with dynamic external memory. Nature , 2016.Matthew J. Hausknecht and Peter Stone. Deep recurrent q-learning for partially observable mdps.Proc. of Conf. on Artificial Intelligence, AAAI , 2015.Max Jaderberg, V olodymir Mnih, Wojciech Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, andKoray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. In Submitted toInt’l Conference on Learning Representations, ICLR , 2017.Jan Koutnik, Giuseppe Cuccu, JÃijrgen Schmidhuber, and Faustino Gomez. Evolving large-scaleneural networks for vision-based reinforcement learning. In Proceedings of the 15th annualconference on Genetic and evolutionary computation, GECCO , 2013.Tejas D. Kulkarni, Ardavan Saeedi, Simanta Gautam, and Samuel J. Gershman. Deep successorreinforcement learning. CoRR , abs/1606.02396, 2016. URL http://arxiv.org/abs/1606.02396 .Guillaume Lample and Devendra Singh Chaplot. Playing FPS games with deep reinforcementlearning. 
CoRR , 2016. URL http://arxiv.org/abs/1609.05521 .Xiujun Li, Lihong Li, Jianfeng Gao, Xiaodong He, Jianshu Chen, Li Deng, and Ji He. Recurrentreinforcement learning: A hybrid approach. In Proceedings of the International Conference onLearning Representations, ICLR , 2016. URL https://arxiv.org/abs/1509.03044 .Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of MachineLearning Research , 9(Nov):2579–2605, 2008.Piotr Mirowski, Marc’Aurelio Ranzato, and Yann LeCun. Dynamic auto-encoders for semanticindexing. In NIPS Deep Learning and Unsupervised Learning Workshop , 2010.V olodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, et al. Human-levelcontrol through deep reinforcement learning. Nature , 518:529–533, 2015.V olodymyr Mnih, Adrià ̆a Puigdomà ́lnech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap,Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcementlearning. In Proc. of Int’l Conf. on Machine Learning, ICML , 2016.10Published as a conference paper at ICLR 2017Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, et al. Massivelyparallel methods for deep reinforcement learning. In Proceedings of the International Conferenceon Machine Learning Deep Learning Workshop, ICML , 2015.Karthik Narasimhan, Tejas D. Kulkarni, and Regina Barzilay. Language understanding for text-basedgames using deep reinforcement learning. In Proc. of Empirical Methods in Natural LanguageProcessing, EMNLP , 2015.Junhyuk Oh, Valliappa Chockalingam, Satinder P. Singh, and Honglak Lee. Control of memory,active perception, and action in minecraft. In Proc. of International Conference on MachineLearning, ICML , 2016.David S Olton, James T Becker, and Gail E Handelmann. Hippocampus, space, and memory.Behavioral and Brain Sciences , 2(03):313–322, 1979.Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. How to construct deeprecurrent neural networks. arXiv preprint arXiv:1312.6026 , 2013.Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervisedlearning with ladder networks. In Advances in Neural Information Processing Systems, NIPS ,2015.Steven C Suddarth and YL Kergosien. Rule-injection hints as a means of improving networkperformance and learning time. In Neural Networks , pp. 120–129. Springer, 1990.Richard S Sutton, Doina Precup, and Satinder Singh. Between mdps and semi-mdps: A frameworkfor temporal abstraction in reinforcement learning. Artificial intelligence , 112(1):181–211, 1999.Lei Tai and Ming Liu. Towards cognitive exploration through deep reinforcement learning for mobilerobots. In arXiv , 2016. URL https://arxiv.org/abs/1610.01733 .Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J. Mankowitz, and Shie Mannor. A deephierarchical approach to lifelong learning in minecraft. CoRR , abs/1604.07255, 2016. URLhttp://arxiv.org/abs/1604.07255 .Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5 – rmsprop: Divide the gradient by a runningaverage of its recent magnitude. In Coursera: Neural Networks for Machine Learning , volume 4,2012.A. van den Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. 2016.Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprintarXiv:1410.3916 , 2014.Yuting Zhang, Kibok Lee, and Honglak Lee. Augmenting supervised neural networks with unsu-pervised objectives for large-scale image classification. In Proc. 
of International Conference onMachine Learning, ICML , 2016.Junbo Zhao, Michaël Mathieu, Ross Goroshin, and Yann LeCun. Stacked what-where auto-encoders.Int’l Conf. on Learning Representations (Workshop), ICLR , 2015. URL http://arxiv.org/abs/1506.02351 .Yuke Zhu, Roozbeh Mottaghi, Eric Kolve, Joseph J. Lim, Abhinav Gupta, Li Fei-Fei, and AliFarhadi. Target-driven visual navigation in indoor scenes using deep reinforcement learning.CoRR , abs/1609.05143, 2016. URL http://arxiv.org/abs/1609.05143 .11Published as a conference paper at ICLR 2017Supplementary MaterialA V IDEOS OF TRAINED NAVIGATION AGENTSWe show the behaviour of Nav A3C*+ D1Lagent in 5 videos, corresponding to the 5 navigationenvironments: I-maze5, (small) static maze6, (large) static maze7, (small) random goal maze8and(large) random goal maze9. Each video shows a high-resolution video (the actual inputs to the agentare down-sampled to 84 84 RGB images), the value function over time (with fruit reward and goalacquisitions), the layout of the mazes with consecutive trajectories of the agent marked in differentcolours and the output of the trained position decoder, overlayed on top of the maze layout.B N ETWORK ARCHITECTURE AND TRAININGB.1 T HE ONLINE MULTI -LEARNER ALGORITHM FOR MULTI -TASK LEARNINGWe introduce a class of neural network-based agents that have modular structures and that are trainedon multiple tasks, with inputs coming from different modalities (vision, depth, past rewards and pastactions). Implementing our agent architecture is simplified by its modular nature. Essentially, weconstruct multiple networks, one per task, using shared building blocks, and optimise these networksjointly. Some modules, such as the conv-net used for perceiving visual inputs, or the LSTMs used forlearning the navigation policy, are shared among multiple tasks, while other modules, such as depthpredictorgdor loop closure predictor gl, are task-specific. The navigation network that outputs thepolicy and the value function is trained using reinforcement learning, while the depth prediction andloop closure prediction networks are trained using self-supervised learning.Within each thread of the asynchronous training environment, the agent plays on its own episode ofthe game environment, and therefore sees observation and reward pairs f(st;rt)gand takes actionsthat are different from those experienced by agents from the other, parallel threads. Within a thread,the multiple tasks (navigation, depth and loop closure prediction) can be trained at their own schedule,and they add gradients to the shared parameter vector as they arrive. Within each thread, we use aflag-based system to subordinate gradient updates to the A3C reinforcement learning procedure.B.2 N ETWORK AND TRAINING DETAILSFor all the experiments we use an encoder model with 2 convolutional layers followed by a fullyconnected layer, or recurrent layer(s), from which we predict the policy and value function. Thearchitecture is similar to the one in (Mnih et al., 2016). The convolutional layers are as follows. Thefirst convolutional layer has a kernel of size 8x8 and a stride of 4x4, and 16 feature maps. The secondlayer has a kernel of size 4x4 and a stride of 2x2, and 32 feature maps. The fully connected layer,in the FF A3C architecture in Figure 2a has 256 hidden units (and outputs visual features ft). 
rkeTNeMNg
SJMGPrcle
ICLR.cc/2017/conference/-/paper254/official/review
{"title": "well presented, convincing, but of limited novelty", "rating": "7: Good paper, accept", "review": "This paper shows that a deep RL approach augmented with auxiliary tasks improves performance on navigation in complex environments. Specifically, A3C is used for the RL problem, and the agent is simultaneously trained on an unsupervised depth prediction task and a self-supervised loop closure classification task. While the use of auxiliary tasks to improve training of models including RL agents is not new, the main contribution here is the use of tasks that encourage learning an intrinsic representation of space and movement that enables significant improvements on maze navigation tasks.\n\nThe paper is well written, experiments are convincing, and the value of the auxiliary tasks for the problem are clear. However, the contribution is relatively incremental given previous work on RL for navigation and on auxiliary tasks. The work could become of greater interest provided broader analysis and insights on either optimal combinations of tasks for visual navigation (e.g. the value of other visual / geometry-based tasks), or on auxiliary tasks with RL in general. As it is, it is a useful demonstration of the benefit of geometry-based auxiliary tasks for navigation, but of relatively narrow interest.", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Learning to Navigate in Complex Environments
["Piotr Mirowski", "Razvan Pascanu", "Fabio Viola", "Hubert Soyer", "Andy Ballard", "Andrea Banino", "Misha Denil", "Ross Goroshin", "Laurent Sifre", "Koray Kavukcuoglu", "Dharshan Kumaran", "Raia Hadsell"]
Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks to bootstrap learning. In particular we consider jointly learning the goal-driven reinforcement learning problem with an unsupervised depth prediction task and a self-supervised loop closure classification task. Using this approach we can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, that show that the agent implicitly learns key navigation abilities, with only sparse rewards and without direct supervision.
["Deep learning", "Reinforcement Learning"]
https://openreview.net/forum?id=SJMGPrcle
https://openreview.net/pdf?id=SJMGPrcle
https://openreview.net/forum?id=SJMGPrcle&noteId=rkeTNeMNg
Published as a conference paper at ICLR 2017LEARNING TO NAVIGATEINCOMPLEX ENVIRONMENTSPiotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard,Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu,Dharshan Kumaran, Raia HadsellDeepMindLondon, UK{piotrmirowski, razp, fviola, soyer, aybd, abanino, mdenil, goroshin, sifre,korayk, dkumaran, raia} @google.comABSTRACTLearning to navigate in complex environments with dynamic elements is an impor-tant milestone in developing AI agents. In this work we formulate the navigationquestion as a reinforcement learning problem and show that data efficiency and taskperformance can be dramatically improved by relying on additional auxiliary tasksleveraging multimodal sensory inputs. In particular we consider jointly learningthe goal-driven reinforcement learning problem with auxiliary depth predictionand loop closure classification tasks. This approach can learn to navigate from rawsensory input in complicated 3D mazes, approaching human-level performanceeven under conditions where the goal location changes frequently. We providedetailed analysis of the agent behaviour1, its ability to localise, and its networkactivity dynamics, showing that the agent implicitly learns key navigation abilities.1 I NTRODUCTIONThe ability to navigate efficiently within an environment is fundamental to intelligent behavior.Whilst conventional robotics methods, such as Simultaneous Localisation and Mapping (SLAM),tackle navigation through an explicit focus on position inference and mapping (Dissanayake et al.,2001), here we follow recent work in deep reinforcement learning (Mnih et al., 2015; 2016) andpropose that navigational abilities could emerge as the by-product of an agent learning a policythat maximizes reward. One advantage of an intrinsic, end-to-end approach is that actions are notdivorced from representation, but rather learnt together, thus ensuring that task-relevant features arepresent in the representation. Learning to navigate from reinforcement learning in partially observableenvironments, however, poses several challenges.First, rewards are often sparsely distributed in the environment, where there may be only one goallocation. Second, environments often comprise dynamic elements, requiring the agent to use memoryat different timescales: rapid one-shot memory for the goal location, together with short term memorysubserving temporal integration of velocity signals and visual observations, and longer term memoryfor constant aspects of the environment (e.g. boundaries, cues).To improve statistical efficiency we bootstrap the reinforcement learning procedure by augmentingour loss with auxiliary tasks that provide denser training signals that support navigation-relevantrepresentation learning. We consider two additional losses: the first one involves reconstruction of alow-dimensional depth map at each time step by predicting one input modality (the depth channel)from others (the colour channels). This auxiliary task concerns the 3D geometry of the environment,and is aimed to encourage the learning of representations that aid obstacle avoidance and short-termtrajectory planning. 
The second task directly invokes loop closure from SLAM: the agent is trainedto predict if the current location has been previously visited within a local trajectory.Denotes equal contribution1A video illustrating the navigation agents is available at: https://youtu.be/lNoaTyMZsWI1Published as a conference paper at ICLR 2017Figure 1: Views from a small 510maze, a large 915maze and an I-maze, with corresponding maze layoutsand sample agent trajectories. The mazes, which will be made public, have different textures and visual cues aswell as exploration rewards and goals (shown right).To address the memory requirements of the task we rely on a stacked LSTM architecture (Graveset al., 2013; Pascanu et al., 2013). We evaluate our approach using five 3D maze environments anddemonstrate the accelerated learning and increased performance of the proposed agent architecture.These environments feature complex geometry, random start position and orientation, dynamic goallocations, and long episodes that require thousands of agent steps (see Figure 1). We also providedetailed analysis of the trained agent to show that critical navigation skills are acquired. This isimportant as neither position inference nor mapping are directly part of the loss; therefore, rawperformance on the goal finding task is not necessarily a good indication that these skills are acquired.In particular, we show that the proposed agent resolves ambiguous observations and quickly localizesitself in a complex maze, and that this localization capability is correlated with higher task reward.2 A PPROACHWe rely on a end-to-end learning framework that incorporates multiple objectives. Firstly it tries tomaximize cumulative reward using an actor-critic approach. Secondly it minimizes an auxiliary lossof inferring the depth map from the RGB observation. Finally, the agent is trained to detect loopclosures as an additional auxiliary task that encourages implicit velocity integration.The reinforcement learning problem is addressed with the Asynchronous Advantage Actor-Critic(A3C) algorithm (Mnih et al., 2016) that relies on learning both a policy (atjst;)and value functionV(st;V)given a state observation st. Both the policy and value function share all intermediaterepresentations, both being computed using a separate linear layer from the topmost layer of themodel. The agent setup closely follows the work of (Mnih et al., 2016) and we refer to this work forthe details (e.g. the use of a convolutional encoder followed by either an MLP or an LSTM, the useof action repetition, entropy regularization to prevent the policy saturation, etc.). These details canalso be found in the Appendix B.The baseline that we consider in this work is an A3C agent (Mnih et al., 2016) that receives only RGBinput from the environment, using either a recurrent or a purely feed-forward model (see Figure 2a,b).The encoder for the RGB input (used in all other considered architectures) is a 3 layer convolutionalnetwork. To support the navigation capability of our approach, we also rely on the Nav A3C agent(Figure 2c) which employs a two-layer stacked LSTM after the convolutional encoder. We expand theobservations of the agents to include agent-relative velocity, the action sampled from the stochasticpolicy and the immediate reward, from the previous time step. We opt to feed the velocity andpreviously selected action directly to the second recurrent layer, with the first layer only receiving thereward. 
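To make the wiring just described concrete, the following is a minimal PyTorch-style sketch of the convolutional encoder, the stacked LSTM and the output heads as we read them from this section and from Appendix B (84x84 RGB input, reward fed to the first LSTM, velocity and previous action fed to the second). The class and variable names are ours, the exact routing is our assumption where the text is ambiguous, and this is an illustrative reconstruction rather than the authors' code.

```python
import torch
import torch.nn as nn

class NavA3CSketch(nn.Module):
    """Illustrative reconstruction of the Nav A3C wiring with auxiliary heads (not the authors' code)."""

    def __init__(self, num_actions=8, h1=64, h2=256, depth_pixels=64, depth_bins=8):
        super().__init__()
        # Conv encoder from Appendix B.2: 16 maps of 8x8 stride 4, then 32 maps of 4x4 stride 2,
        # then a 256-unit fully connected layer producing the visual features f_t (84x84 RGB input).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(9 * 9 * 32, 256), nn.ReLU(),
        )
        # First LSTM sees f_t and the previous reward r_{t-1}; the second LSTM additionally sees
        # the 6-d agent-relative velocity v_t and the one-hot previous action a_{t-1}.
        self.lstm1 = nn.LSTMCell(256 + 1, h1)
        self.lstm2 = nn.LSTMCell(h1 + 6 + num_actions, h2)
        self.policy = nn.Linear(h2, num_actions)   # pi
        self.value = nn.Linear(h2, 1)              # V
        # Auxiliary heads: depth from conv features (D1), depth from LSTM hiddens (D2), loop closure (L);
        # each is a single-layer MLP with 128 hidden units; depth heads emit 64 pixels x 8 classes.
        self.depth1 = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, depth_pixels * depth_bins))
        self.depth2 = nn.Sequential(nn.Linear(h2, 128), nn.ReLU(), nn.Linear(128, depth_pixels * depth_bins))
        self.loop = nn.Sequential(nn.Linear(h2, 128), nn.ReLU(), nn.Linear(128, 2))

    def forward(self, x, prev_reward, prev_action_onehot, velocity, state1=None, state2=None):
        # x: (B, 3, 84, 84); prev_reward: (B, 1); prev_action_onehot: (B, num_actions); velocity: (B, 6)
        f = self.encoder(x)
        h1, c1 = self.lstm1(torch.cat([f, prev_reward], dim=1), state1)
        h2, c2 = self.lstm2(torch.cat([h1, velocity, prev_action_onehot], dim=1), state2)
        outputs = {
            "policy_logits": self.policy(h2), "value": self.value(h2),
            "depth_from_conv": self.depth1(f), "depth_from_lstm": self.depth2(h2),
            "loop_logits": self.loop(h2),
        }
        return outputs, (h1, c1), (h2, c2)
```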
We postulate that the first layer might be able to make associations between reward and visualobservations that are provided as context to the second layer from which the policy is computed.Thus, the observation stmay include an image xt2R3WH(whereWandHare the width and2Published as a conference paper at ICLR 2017xt rt-1 { vt, at-1}encρᬭxtencρᬭencρᬭLoop (L)Depth (D1 )a. FF A3C c. Nav A3C d. Nav A3C +D1D2Lxt rt-1 { vt, at-1}encρᬭxtb. LSTM A3C Depth (D2 )Figure 2: Different architectures: (a) is a convolutional encoder followed by a feedforward layer and policy ( )and value function outputs; (b) has an LSTM layer; (c) uses additional inputs (agent-relative velocity, reward,and action), as well as a stacked LSTM; and (d) has additional outputs to predict depth and loop closures.height of the image), the agent-relative lateral and rotational velocity vt2R6, the previous actionat12RNA, and the previous reward rt12R.Figure 2d shows the augmentation of the Nav A3C with the different possible auxiliary losses. Inparticular we consider predicting depth from the convolutional layer (we will refer to this choiceasD1), or from the top LSTM layer ( D2) or predicting loop closure ( L). The auxiliary losses arecomputed on the current frame via a single layer MLP. The agent is trained by applying a weightedsum of the gradients coming from A3C, the gradients from depth prediction (multiplied with d1;d2)and the gradients from the loop closure (scaled by l). More details of the online learning algorithmare given in Appendix B.2.1 D EPTH PREDICTIONThe primary input to the agent is in the form of RGB images. However, depth information, coveringthe central field of view of the agent, might supply valuable information about the 3D structure ofthe environment. While depth could be directly used as an input, we argue that if presented as anadditional loss it is actually more valuable to the learning process. In particular if the predictionloss shares representation with the policy, it could help build useful features for RL much faster,bootstrapping learning. Since we know from (Eigen et al., 2014) that a single frame can be enough topredict depth, we know this auxiliary task can be learnt. A comparison between having depth as inputversus as an additional loss is given in Appendix C, which shows significant gain for depth as a loss.Since the role of the auxiliary loss is just to build up the representation of the model, we do notnecessarily care about the specific performance obtained or nature of the prediction. We do careabout the data efficiency aspect of the problem and also computational complexity. If the loss is to beuseful for the main task, we should converge faster on it compared to solving the RL problem (usingless data samples), and the additional computational cost should be minimal. To achieve this we usea low resolution variant of the depth map, reducing the screen resolution to 4x16 pixels2.We explore two different variants for the loss. The first choice is to phrase it as a regression task, themost natural choice. While this formulation, combined with a higher depth resolution, extracts themost information, mean square error imposes a unimodal distribution (van den Oord et al., 2016).To address this possible issue, we also consider a classification loss, where depth at each positionis discretised into 8 different bands. The bands are non-uniformally distributed such that we paymore attention to far-away objects (details in Appendix B). 
The motivation for the classificationformulation is that while it greatly reduces the resolution of depth, it is more flexible from a learningperspective and can result in faster convergence (hence faster bootstrapping).2The image is cropped before being subsampled to lessen the floor and ceiling which have little relevantdepth information.3Published as a conference paper at ICLR 20172.2 L OOP CLOSURE PREDICTIONLoop closure, like depth, is valuable for a navigating agent, since can be used for efficient explorationand spatial reasoning. To produce the training targets, we detect loop closures based on the similarityof local position information during an episode, which is obtained by integrating 2D velocity overtime. Specifically, in a trajectory noted fp0;p1;:::;p Tg, whereptis the position of the agent at timet, we define a loop closure label ltthat is equal to 1 if the position ptof the agent is close to thepositionpt0at an earlier time t0. In order to avoid trivial loop closures on consecutive points of thetrajectory, we add an extra condition on an intermediary position pt00being far from pt. Thresholds 1and2provide these two limits. Learning to predict the binary loop label is done by minimizing theBernoulli lossLlbetweenltand the output of a single-layer output from the hidden representation htof the last hidden layer of the model, followed by a sigmoid activation.3 R ELATED WORKThere is a rich literature on navigation, primarily in the robotics literature. However, here we focus onrelated work in deep RL. Deep Q-networks (DQN) have had breakthroughs in extremely challengingdomains such as Atari (Mnih et al., 2015). Recent work has developed on-policy RL methods suchas advantage actor-critic that use asynchronous training of multiple agents in parallel (Mnih et al.,2016). Recurrent networks have also been successfully incorporated to enable state disambiguationin partially observable environments (Koutnik et al., 2013; Hausknecht & Stone, 2015; Mnih et al.,2016; Narasimhan et al., 2015).Deep RL has recently been used in the navigation domain. Kulkarni et al. (2016) used a feedforwardarchitecture to learn deep successor representations that enabled behavioral flexibility to rewardchanges in the MazeBase gridworld, and provided a means to detect bottlenecks in 3D VizDoom.Zhu et al. (2016) used a feedforward siamese actor-critic architecture incorporating a pretrainedResNet to support navigation to a target in a discretised 3D environment. Oh et al. (2016) investigatedthe performance of a variety of networks with external memory (Weston et al., 2014) on simplenavigation tasks in the Minecraft 3D block world environment. Tessler et al. (2016) also used theMinecraft domain to show the benefit of combining feedforward deep-Q networks with the learningof resuable skill modules (cf options: (Sutton et al., 1999)) to transfer between navigation tasks. Tai &Liu (2016) trained a convnet DQN-based agent using depth channel inputs for obstacle avoidance in3D environments. Barron et al. (2016) investigated how well a convnet can predict the depth channelfrom RGB in the Minecraft environment, but did not use depth for training the agent.Auxiliary tasks have often been used to facilitate representation learning (Suddarth & Kergosien,1990). 
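Returning briefly to the loop-closure targets defined in Section 2.2 above, they can be illustrated with a short numpy sketch: a position is labelled as a loop closure if some earlier position lies within a small radius of it while an intermediate position has been sufficiently far away. The function name, the grid-distance convention and the default thresholds below are ours, not the authors'.

```python
import numpy as np

def loop_closure_labels(positions, eta1=1.0, eta2=2.0):
    """Binary loop-closure targets l_t for a trajectory of 2D positions.

    l_t = 1 if some earlier position p_{t'} lies within eta1 of p_t AND the agent has,
    since t', been at least eta2 away from p_t (excluding trivial consecutive-step closures).
    """
    positions = np.asarray(positions, dtype=float)
    labels = np.zeros(len(positions), dtype=np.int64)
    for t in range(len(positions)):
        d = np.linalg.norm(positions[:t] - positions[t], axis=1)  # distances to all past points
        for tp in np.where(d <= eta1)[0]:                         # candidate earlier visits t'
            # require an intermediate point t'' in (t', t) that is far from p_t
            if np.any(d[tp + 1:] >= eta2):
                labels[t] = 1
                break
    return labels

# Tiny usage example: a square loop that revisits the starting corner.
traj = [(0, 0), (0, 3), (3, 3), (3, 0), (0.5, 0)]
print(loop_closure_labels(traj))  # last step closes the loop -> [0 0 0 0 1]
```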
Recently, the incorporation of additional objectives, designed to augment representationlearning through auxiliary reconstructive decoding pathways (Zhang et al., 2016; Rasmus et al., 2015;Zhao et al., 2015; Mirowski et al., 2010), has yielded benefits in large scale classification tasks. Indeep RL settings, however, only two previous papers have examined the benefit of auxiliary tasks.Specifically, Li et al. (2016) consider a supervised loss for fitting a recurrent model on the hiddenrepresentations to predict the next observed state, in the context of imitation learning of sequencesprovided by experts, and Lample & Chaplot (2016) show that the performance of a DQN agent in afirst-person shooter game in the VizDoom environment can be substantially enhanced by the additionof a supervised auxiliary task, whereby the convolutional network was trained on an enemy-detectiontask, with information about the presence of enemies, weapons, etc., provided by the game engine.In contrast, our contribution addresses fundamental questions of how to learn an intrinsic repre-sentation of space, geometry, and movement while simultaneously maximising rewards throughreinforcement learning. Our method is validated in challenging maze domains with random start andgoal locations.4 E XPERIMENTSWe consider a set of first-person 3D mazes from the DeepMind Lab environment (Beattie et al., 2016)(see Fig. 1) that are visually rich, with additional observations available to the agent such as inertial4Published as a conference paper at ICLR 2017(a)Static maze (small) (b)Static maze (large) (c)Random Goal I-maze(d)Random Goal maze (small) (e)Random Goal maze (large) (f)Random Goal maze (large): different formu-lation of depth predictionFigure 3: Rewards achieved by the agents on 5 different tasks: two static mazes (small and large) with fixedgoals, two static mazes with comparable layout but with dynamic goals and the I-maze. Results are averagedover the top 5 random hyperparameters for each agent-task configuration. Star in the label indicates the use ofreward clipping. Please see text for more details.information and local depth information.3The action space is discrete, yet allows finegrained control,comprising 8 actions: the agent can rotate in small increments, accelerate forward or backward orsideways, or induce rotational acceleration while moving. Reward is achieved in these environmentsby reaching a goal from a random start location and orientation. If the goal is reached, the agent isrespawned to a new start location and must return to the goal. The episode terminates when a fixedamount of time expires, affording the agent enough time to find the goal several times. There aresparse ‘fruit’ rewards which serve to encourage exploration. Apples are worth 1 point, strawberries 2points and goals are 10 points. Videos of the agent solving the maze are linked in Appendix A.In the static variant of the maze, the goal and fruit locations are fixed and only the agent’s startlocation changes. In the dynamic (Random Goal) variant, the goal and fruits are randomly placed onevery episode. Within an episode, the goal and apple locations stay fixed until the episode ends. Thisencourages an explore-exploit strategy, where the agent should initially explore the maze, then retainthe goal location and quickly refind it after each respawn. For both variants (static and random goal)we consider a small and large map. 
The small mazes are 510and episodes last for 3600 timesteps,and the large mazes are 915with 10800 steps (see Figure 1). The RGB observation is 8484.The I-Maze environment (see Figure 1, right) is inspired by the classic T-maze used to investigatenavigation in rodents (Olton et al., 1979): the layout remains fixed throughout, the agent spawns inthe central corridor where there are apple rewards and has to locate the goal which is placed in thealcove of one of the four arms. Because the goal is hidden in the alcove, the optimal agent behaviourmust rely on memory of the goal location in order to return to the goal using the most direct route.Goal location is constant within an episode but varies randomly across episodes.The different agent architectures described in Section 2 are evaluated by training on the five mazes.Figure 3 shows learning curves (averaged over the 5 top performing agents). The agents are afeedforward model (FF A3C), a recurrent model (LSTM A3C), the stacked LSTM version withvelocity, previous action and reward as input (Nav A3C), and Nav A3C with depth prediction fromthe convolution layer (Nav A3C+ D1), Nav A3C with depth prediction from the last LSTM layer(Nav A3C+D2), Nav A3C with loop closure prediction (Nav A3C+ L) as well as the Nav A3C with3The environments used in this paper are publicly available at https://github.com/deepmind/lab .5Published as a conference paper at ICLR 2017Figure 4: left: Example of depth predictions (pairs of ground truth and predicted depths), sampled every 40 steps.right: Example of loop closure prediction. The agent starts at the gray square and the trajectory is plotted ingray. Blue dots correspond to true positive outputs of the loop closure detector; red cross correspond to falsepositives and green cross to false negatives. Note the false positives that occur when the agent is actually a fewsquares away from actual loop closure.all auxiliary losses considered together (Nav A3C+ D1D2L). In each case we ran 64 experimentswith randomly sampled hyper-parameters (for ranges and details please see the appendix). The meanover the top 5 runs as well as the top 5 curves are plotted. Expert human scores, established by aprofessional game player, are compared to these results. The Nav A3C+ D2agents reach human-levelperformance on Static 1 and 2, and attain about 91% and 59% of human scores on Random Goal 1and 2.In Mnih et al. (2015) reward clipping is used to stabilize learning, technique which we employed inthis work as well. Unfortunately, for these particular tasks, this yields slightly suboptimal policiesbecause the agent does not distinguish apples (1 point) from goals (10 points). Removing the rewardclipping results in unstable behaviour for the base A3C agent (see Appendix C). However it seemsthat the auxiliary signal from depth prediction mediates this problem to some extent, resulting instable learning dynamics (e.g. Figure 3f, Nav A3C+ D1vs Nav A3C*+ D1). We clearly indicatewhether reward clipping is used by adding an asterisk to the agent name.Figure 3f also explores the difference between the two formulations of depth prediction, as a regressiontask or a classification task. We can see that the regression agent (Nav A3C*+ D1[MSE]) performsworse than one that does classification (Nav A3C*+ D1). This result extends to other maps, andwe therefore only use the classification formulation in all our other results4. 
Also we see thatpredicting depth from the last LSTM layer (hence providing structure to the recurrent layer, not justthe convolutional ones) performs better.We note some particular results from these learning curves. In Figure 3 (a and b), consider thefeedforward A3C model (red curve) versus the LSTM version (pink curve). Even though navigationseems to intrinsically require memory, as single observations could often be ambiguous, the feed-forward model achieves competitive performance on static mazes. This suggest that there might begood strategies that do not involve temporal memory and give good results, namely a reactive policyheld by the weights of the encoder, or learning a wall-following strategy. This motivates the dynamicenvironments that encourage the use of memory and more general navigation strategies.Figure 3 also shows the advantage of adding velocity, reward and action as an input, as well as theimpact of using a two layer LSTM (orange curve vs red and pink). Though this agent (Nav A3C)is better than the simple architectures, it is still relatively slow to train on all of the mazes. Webelieve that this is mainly due to the slower, data inefficient learning that is generally seen in pureRL approaches. Supporting this we see that adding the auxiliary prediction targets of depth andloop closure (Nav A3C+ D1D2L, black curve) speeds up learning dramatically on most of the mazes(see Table 1: AUC metric). It has the strongest effect on the static mazes because of the acceleratedlearning, but also gives a substantial and lasting performance increase on the random goal mazes.Although we place more value on the task performance than on the auxiliary losses, we report theresults from the loop closure prediction task. Over 100 test episodes of 2250 steps each, within alarge maze (random goal 2), the Nav A3C*+ D1Lagent demonstrated very successful loop detection,reaching an F-1 score of 0.83. A sample trajectory can be seen in Figure 4 (right).4An exception is the Nav A3C*+ D1Lagent on the I-maze (Figure 3c), which uses depth regression andreward clipping. 
While it does worse, we include it because some analysis is based on this agent.6Published as a conference paper at ICLR 2017Mean over top 5 agents Highest reward agentMaze Agent AUC Score % Human Goals Position Acc Latency 1:>1 ScoreI-Maze FF A3C* 75.5 98 - 94/100 42.2 9.3s:9.0s 102LSTM A3C* 112.4 244 - 100/100 87.8 15.3s:3.2s 203Nav A3C*+ D1L 169.7 266 - 100/100 68.5 10.7s:2.7s 252Nav A3C+ D2 203.5 268 - 100/100 62.3 8.8s:2.5s 269Nav A3C+ D1D2L 199.9 258 - 100/100 61.0 9.9s:2.5s 251Static 1 FF A3C* 41.3 79 83 100/100 64.3 8.8s:8.7s 84LSTM A3C* 44.3 98 103 100/100 88.6 6.1s:5.9s 110Nav A3C+ D2 104.3 119 125 100/100 95.4 5.9s:5.4s 122Nav A3C+ D1D2L 102.3 116 122 100/100 94.5 5.9s:5.4s 123Static 2 FF A3C* 35.8 81 47 100/100 55.6 24.2s:22.9s 111LSTM A3C* 46.0 153 91 100/100 80.4 15.5s:14.9s 155Nav A3C+ D2 157.6 200 116 100/100 94.0 10.9s:11.0s 202Nav A3C+ D1D2L 156.1 192 112 100/100 92.6 11.1s:12.0s 192Random Goal 1 FF A3C* 37.5 61 57.5 88/100 51.8 11.0:9.9s 64LSTM A3C* 46.6 65 61.3 85/100 51.1 11.1s:9.2s 66Nav A3C+ D2 71.1 96 91 100/100 85.5 14.0s:7.1s 91Nav A3C+ D1D2L 64.2 81 76 81/100 83.7 11.5s:7.2s 74.6Random Goal 2 FF A3C* 50.0 69 40.1 93/100 30.0 27.3s:28.2s 77LSTM A3C* 37.5 57 32.6 74/100 33.4 21.5s:29.7s 51.3Nav A3C*+ D1L 62.5 90 52.3 90/100 51.0 17.9s:18.4s 106Nav A3C+ D2 82.1 103 59 79/100 72.4 15.4s:15.0s 109Nav A3C+ D1D2L 78.5 91 53 74/100 81.5 15.9s:16.0s 102Table 1: Comparison of four agent architectures over five maze configurations, including random and staticgoals. AUC (Area under learning curve), Score , and % Human are averaged over the best 5 hyperparameters.Evaluation of a single best performing agent is done through analysis on 100 test episodes. Goals gives thenumber of episodes where the goal was reached one more more times. Position Accuracy is the classificationaccuracy of the position decoder. Latency 1:>1 is the average time to the first goal acquisition vs. the averagetime to all subsequent goal acquisitions. Score is the mean score over the 100 test episodes.5 A NALYSIS5.1 P OSITION DECODINGIn order to evaluate the internal representation of location within the agent (either in the hidden unitshtof the last LSTM, or, in the case of the FF A3C agent, in the features fton the last layer of theconv-net), we train a position decoder that takes that representation as input, consisting of a linearclassifier with multinomial probability distribution over the discretized maze locations. Small mazes(510) have 50 locations, large mazes ( 915) have 135 locations, and the I-maze has 77 locations.Note that we do not backpropagate the gradients from the position decoder through the rest of thenetwork. The position decoder can only see the representation exposed by the model, not change it.An example of position decoding by the Nav A3C+ D2agent is shown in Figure 6, where the initialuncertainty in position is improved to near perfect position prediction as more observations areacquired by the agent. We observe that position entropy spikes after a respawn, then decreases oncethe agent acquires certainty about its location. Additionally, videos of the agent’s position decodingare linked in Appendix A. In these complex mazes, where localization is important for the purpose ofreaching the goal, it seems that position accuracy and final score are correlated, as shown in Table1. 
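As an illustration of this kind of position decoder, the following is a minimal numpy sketch of a multinomial (softmax) linear classifier trained on stored hidden activations with discretized maze cells as targets, without backpropagating into the agent. The synthetic data and all names are ours; it stands in for, rather than reproduces, the analysis pipeline.

```python
import numpy as np

def train_position_decoder(H, cells, num_cells, lr=0.5, epochs=200):
    """Multinomial logistic regression from hidden activations H (N, D) to maze cell ids (N,)."""
    N, D = H.shape
    W = np.zeros((D, num_cells))
    b = np.zeros(num_cells)
    Y = np.eye(num_cells)[cells]                      # one-hot targets
    for _ in range(epochs):
        logits = H @ W + b
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)
        G = (P - Y) / N                               # gradient of the mean cross-entropy
        W -= lr * (H.T @ G)
        b -= lr * G.sum(axis=0)
    return W, b

def decoding_accuracy(W, b, H, cells):
    return float(np.mean(np.argmax(H @ W + b, axis=1) == cells))

# Synthetic sanity check: 256-d "hiddens" with well-separated class means, standing in for
# stored LSTM activations labelled with one of 50 discretized maze cells.
rng = np.random.default_rng(0)
cells = rng.integers(0, 50, size=2000)
means = rng.normal(size=(50, 256))
H = means[cells] + 0.5 * rng.normal(size=(2000, 256))
W, b = train_position_decoder(H, cells, num_cells=50)
print(decoding_accuracy(W, b, H, cells))  # should be close to 1.0 on this easy synthetic data
```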
A pure feed-forward architecture still achieves 64.3% accuracy in a static maze with static goal,suggesting that the encoder memorizes the position in the weights and that this small maze is solvableby all the agents, with sufficient training time. In Random Goal 1, it is Nav A3C+ D2that achievesthe best position decoding performance (85.5% accuracy), whereas the FF A3C and the LSTM A3Carchitectures are at approximately 50%.In the I-maze, the opposite branches of the maze are nearly identical, with the exception of verysparse visual cues. We observe that once the goal is first found, the Nav A3C*+ D1Lagent is capableof directly returning to the correct branch in order to achieve the maximal score. However, the linearposition decoder for this agent is only 68.5% accurate, whereas it is 87.8% in the plain LSTM A3Cagent. We hypothesize that the symmetry of the I-maze will induce a symmetric policy that need notbe sensitive to the exact position of the agent (see analysis below).7Published as a conference paper at ICLR 2017Figure 5: Trajectories of the Nav A3C*+ D1Lagent in the I-maze (left) and of the Nav A3C+ D2random goalmaze 1 (right) over the course of one episode. At the beginning of the episode (gray curve on the map), theagent explores the environment until it finds the goal at some unknown location (red box). During subsequentrespawns (blue path), the agent consistently returns to the goal. The value function, plotted for each episode,rises as the agent approaches the goal. Goals are plotted as vertical red lines.Figure 6: Trajectory of the Nav A3C+ D2agent in the random goal maze 1, overlaid with the position probabilitypredictions predicted by a decoder trained on LSTM hidden activations, taken at 4 steps during an episode.Initial uncertainty gives way to accurate position prediction as the agent navigates.A desired property of navigation agents in our Random Goal tasks is to be able to first find the goal,and reliably return to the goal via an efficient route after subsequent re-spawns. The latency columnin Table 1 shows that the Nav A3C+ D2agents achieve the lowest latency to goal once the goal hasbeen discovered (the first number shows the time in seconds to find the goal the first time, and thesecond number is the average time for subsequent finds). Figure 5 shows clearly how the agent findsthe goal, and directly returns to that goal for the rest of the episode. For Random Goal 2, none of theagents achieve lower latency after initial goal acquisition; this is presumably due to the larger, morechallenging environment.5.2 S TACKED LSTM GOAL ANALYSISFigure 7(a) shows shows the trajectories traversed by an agent for each of the four goal locations.After an initial exploratory phase to find the goal, the agent consistently returns to the goal location.We visualize the agent’s policy by applying tSNE dimension reduction (Maaten & Hinton, 2008)to the cellactivations at each step of the agent for each of the four goal locations. Whilst clusterscorresponding to each of the four goal locations are clearly distinct in the LSTM A3C agent, thereare 2 main clusters in the Nav A3C agent – with trajectories to diagonally opposite arms of the mazerepresented similarly. Given that the action sequence to opposite arms is equivalent (e.g. straight, turnleft twice for top left and bottom right goal locations), this suggests that the Nav A3C policy-dictatingLSTM maintains an efficient representation of 2 sub-policies (i.e. 
rather than 4 independent policies)– with critical information about the currently relevant goal provided by the additional LSTM.5.3 I NVESTIGATING DIFFERENT COMBINATIONS OF AUXILIARY TASKSOur results suggest that depth prediction from the policy LSTM yields optimal results. However,several other auxiliary tasks have been concurrently introduced in (Jaderberg et al., 2017), and thuswe provide a comparison of reward prediction against depth prediction. Following that paper, weimplemented two additional agent architectures, one performing reward prediction from the convnetusing a replay buffer, called Nav A3C*+ R, and one combining reward prediction from the convnetand depth prediction from the LSTM (Nav A3C+ RD 2). Table 2 suggests that reward prediction (NavA3C*+R) improves upon the plain stacked LSTM architecture (Nav A3C*) but not as much as depthprediction from the policy LSTM (Nav A3C+ D2). Combining reward prediction and depth prediction(Nav A3C+RD 2) yields comparable results to depth prediction alone (Nav A3C+ D2); normalisedaverage AUC values are respectively 0.995 vs. 0.981. Future work will explore other auxiliary tasks.8Published as a conference paper at ICLR 2017(a)Agent trajectories for episodes withdifferent goal locations(b)LSTM activations from A3C agent (c) LSTM activations from NavA3C*+ D1LagentFigure 7: LSTM cell activations of LSTM A3C and Nav A3C*+ D1Lagents from the I-Maze collected overmultiple episodes and reduced to 2 dimensions using tSNE, then coloured to represent the goal location.Policy-dictating LSTM of Nav A3C agent shown.Navigation agent architectureMaze Nav A3C* Nav A3C+ D1 Nav A3C+ D2 Nav A3C+ D1D2 Nav A3C*+ R Nav A3C+ RD2I-Maze 143.3 196.7 203.5 197.2 128.2 191.8Static 1 60.1 103.2 104.3 100.3 86.9 105.1Static 2 59.9 153.1 157.6 151.6 100.6 155.5Random Goal 1 45.5 57.6 71.1 63.2 54.4 72.3Random Goal 2 37.0 66.0 82.1 75.1 68.3 80.1Table 2: Comparison of five navigation agent architectures over five maze configurations with random andstatic goals, including agents performing reward prediction Nav A3C*+ Rand Nav A3C+ RD 2, where rewardprediction is implemented following (Jaderberg et al., 2017). We report the AUC (Area under learning curve),averaged over the best 5 hyperparameters.6 C ONCLUSIONWe proposed a deep RL method, augmented with memory and auxiliary learning targets, for trainingagents to navigate within large and visually rich environments that include frequently changingstart and goal locations. Our results and analysis highlight the utility of un/self-supervised auxiliaryobjectives, namely depth prediction and loop closure, in providing richer training signals that bootstraplearning and enhance data efficiency. Further, we examine the behavior of trained agents, their abilityto localise, and their network activity dynamics, in order to analyse their navigational abilities.Our approach of augmenting deep RL with auxiliary objectives allows end-end learning and mayencourage the development of more general navigation strategies. Notably, our work with auxiliarylosses is related to (Jaderberg et al., 2017) which independently looks at data efficiency whenexploiting auxiliary losses. One difference between the two works is that our auxiliary losses areonline (for the current frame) and do not rely on any form of replay. Also the explored losses are verydifferent in nature. Finally our focus is on the navigation domain and understanding if navigationemerges as a bi-product of solving an RL problem, while Jaderberg et al. 
(2017) is concerned withdata efficiency for any RL-task.Whilst our best performing agents are relatively successful at navigation, their abilities would bestretched if larger demands were placed on rapid memory (e.g. in procedurally generated mazes),due to the limited capacity of the stacked LSTM in this regard. It will be important in the future tocombine visually complex environments with architectures that make use of external memory (Graveset al., 2016; Weston et al., 2014; Olton et al., 1979) to enhance the navigational abilities of agents.Further, whilst this work has focused on investigating the benefits of auxiliary tasks for developingthe ability to navigate through end-to-end deep reinforcement learning, it would be interesting forfuture work to compare these techniques with SLAM-based approaches.ACKNOWLEDGEMENTS9Published as a conference paper at ICLR 2017We would like to thank Alexander Pritzel, Thomas Degris and Joseph Modayil for useful discussions,Charles Beattie, Julian Schrittwieser, Marcus Wainwright, and Stig Petersen for environment designand development, and Amir Sadik and Sarah York for expert human game testing.REFERENCESTrevor Barron, Matthew Whitehead, and Alan Yeung. Deep reinforcement learning in a 3-d block-world environment. In Deep Reinforcement Learning: Frontiers and Challenges, IJCAI , 2016.Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich KÃijttler,Andrew Lefrancq, Simon Green, Victor Valdes, Amir Sadik, Julian Schrittwieser, Keith Anderson,Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis,Shane Legg, and Stig Petersen. Deepmind lab. In arXiv , 2016. URL https://arxiv.org/abs/1612.03801 .MWM Gamini Dissanayake, Paul Newman, Steve Clark, Hugh F. Durrant-Whyte, and MichaelCsorba. A solution to the simultaneous localization and map building (slam) problem. IEEETransactions on Robotics and Automation , 17(3):229–241, 2001.David Eigen, Christian Puhrsch, and Rob Fergus. Depth map prediction from a single image using amulti-scale deep network. In Proc. of Neural Information Processing Systems, NIPS , 2014.Alex Graves, Mohamed Abdelrahman, and Geoffrey Hinton. Speech recognition with deep recurrentneural networks. In Proceedings of the International Conference on Acoustics, Speech and SignalProcessing, ICASSP , 2013.Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwi ́nska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al.Hybrid computing using a neural network with dynamic external memory. Nature , 2016.Matthew J. Hausknecht and Peter Stone. Deep recurrent q-learning for partially observable mdps.Proc. of Conf. on Artificial Intelligence, AAAI , 2015.Max Jaderberg, V olodymir Mnih, Wojciech Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, andKoray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. In Submitted toInt’l Conference on Learning Representations, ICLR , 2017.Jan Koutnik, Giuseppe Cuccu, JÃijrgen Schmidhuber, and Faustino Gomez. Evolving large-scaleneural networks for vision-based reinforcement learning. In Proceedings of the 15th annualconference on Genetic and evolutionary computation, GECCO , 2013.Tejas D. Kulkarni, Ardavan Saeedi, Simanta Gautam, and Samuel J. Gershman. Deep successorreinforcement learning. CoRR , abs/1606.02396, 2016. URL http://arxiv.org/abs/1606.02396 .Guillaume Lample and Devendra Singh Chaplot. Playing FPS games with deep reinforcementlearning. 
CoRR , 2016. URL http://arxiv.org/abs/1609.05521 .Xiujun Li, Lihong Li, Jianfeng Gao, Xiaodong He, Jianshu Chen, Li Deng, and Ji He. Recurrentreinforcement learning: A hybrid approach. In Proceedings of the International Conference onLearning Representations, ICLR , 2016. URL https://arxiv.org/abs/1509.03044 .Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of MachineLearning Research , 9(Nov):2579–2605, 2008.Piotr Mirowski, Marc’Aurelio Ranzato, and Yann LeCun. Dynamic auto-encoders for semanticindexing. In NIPS Deep Learning and Unsupervised Learning Workshop , 2010.V olodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, et al. Human-levelcontrol through deep reinforcement learning. Nature , 518:529–533, 2015.V olodymyr Mnih, Adrià ̆a Puigdomà ́lnech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap,Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcementlearning. In Proc. of Int’l Conf. on Machine Learning, ICML , 2016.10Published as a conference paper at ICLR 2017Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, et al. Massivelyparallel methods for deep reinforcement learning. In Proceedings of the International Conferenceon Machine Learning Deep Learning Workshop, ICML , 2015.Karthik Narasimhan, Tejas D. Kulkarni, and Regina Barzilay. Language understanding for text-basedgames using deep reinforcement learning. In Proc. of Empirical Methods in Natural LanguageProcessing, EMNLP , 2015.Junhyuk Oh, Valliappa Chockalingam, Satinder P. Singh, and Honglak Lee. Control of memory,active perception, and action in minecraft. In Proc. of International Conference on MachineLearning, ICML , 2016.David S Olton, James T Becker, and Gail E Handelmann. Hippocampus, space, and memory.Behavioral and Brain Sciences , 2(03):313–322, 1979.Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. How to construct deeprecurrent neural networks. arXiv preprint arXiv:1312.6026 , 2013.Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervisedlearning with ladder networks. In Advances in Neural Information Processing Systems, NIPS ,2015.Steven C Suddarth and YL Kergosien. Rule-injection hints as a means of improving networkperformance and learning time. In Neural Networks , pp. 120–129. Springer, 1990.Richard S Sutton, Doina Precup, and Satinder Singh. Between mdps and semi-mdps: A frameworkfor temporal abstraction in reinforcement learning. Artificial intelligence , 112(1):181–211, 1999.Lei Tai and Ming Liu. Towards cognitive exploration through deep reinforcement learning for mobilerobots. In arXiv , 2016. URL https://arxiv.org/abs/1610.01733 .Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J. Mankowitz, and Shie Mannor. A deephierarchical approach to lifelong learning in minecraft. CoRR , abs/1604.07255, 2016. URLhttp://arxiv.org/abs/1604.07255 .Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5 – rmsprop: Divide the gradient by a runningaverage of its recent magnitude. In Coursera: Neural Networks for Machine Learning , volume 4,2012.A. van den Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. 2016.Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprintarXiv:1410.3916 , 2014.Yuting Zhang, Kibok Lee, and Honglak Lee. Augmenting supervised neural networks with unsu-pervised objectives for large-scale image classification. In Proc. 
of International Conference onMachine Learning, ICML , 2016.Junbo Zhao, Michaël Mathieu, Ross Goroshin, and Yann LeCun. Stacked what-where auto-encoders.Int’l Conf. on Learning Representations (Workshop), ICLR , 2015. URL http://arxiv.org/abs/1506.02351 .Yuke Zhu, Roozbeh Mottaghi, Eric Kolve, Joseph J. Lim, Abhinav Gupta, Li Fei-Fei, and AliFarhadi. Target-driven visual navigation in indoor scenes using deep reinforcement learning.CoRR , abs/1609.05143, 2016. URL http://arxiv.org/abs/1609.05143 .11Published as a conference paper at ICLR 2017Supplementary MaterialA V IDEOS OF TRAINED NAVIGATION AGENTSWe show the behaviour of Nav A3C*+ D1Lagent in 5 videos, corresponding to the 5 navigationenvironments: I-maze5, (small) static maze6, (large) static maze7, (small) random goal maze8and(large) random goal maze9. Each video shows a high-resolution video (the actual inputs to the agentare down-sampled to 84 84 RGB images), the value function over time (with fruit reward and goalacquisitions), the layout of the mazes with consecutive trajectories of the agent marked in differentcolours and the output of the trained position decoder, overlayed on top of the maze layout.B N ETWORK ARCHITECTURE AND TRAININGB.1 T HE ONLINE MULTI -LEARNER ALGORITHM FOR MULTI -TASK LEARNINGWe introduce a class of neural network-based agents that have modular structures and that are trainedon multiple tasks, with inputs coming from different modalities (vision, depth, past rewards and pastactions). Implementing our agent architecture is simplified by its modular nature. Essentially, weconstruct multiple networks, one per task, using shared building blocks, and optimise these networksjointly. Some modules, such as the conv-net used for perceiving visual inputs, or the LSTMs used forlearning the navigation policy, are shared among multiple tasks, while other modules, such as depthpredictorgdor loop closure predictor gl, are task-specific. The navigation network that outputs thepolicy and the value function is trained using reinforcement learning, while the depth prediction andloop closure prediction networks are trained using self-supervised learning.Within each thread of the asynchronous training environment, the agent plays on its own episode ofthe game environment, and therefore sees observation and reward pairs f(st;rt)gand takes actionsthat are different from those experienced by agents from the other, parallel threads. Within a thread,the multiple tasks (navigation, depth and loop closure prediction) can be trained at their own schedule,and they add gradients to the shared parameter vector as they arrive. Within each thread, we use aflag-based system to subordinate gradient updates to the A3C reinforcement learning procedure.B.2 N ETWORK AND TRAINING DETAILSFor all the experiments we use an encoder model with 2 convolutional layers followed by a fullyconnected layer, or recurrent layer(s), from which we predict the policy and value function. Thearchitecture is similar to the one in (Mnih et al., 2016). The convolutional layers are as follows. Thefirst convolutional layer has a kernel of size 8x8 and a stride of 4x4, and 16 feature maps. The secondlayer has a kernel of size 4x4 and a stride of 2x2, and 32 feature maps. The fully connected layer,in the FF A3C architecture in Figure 2a has 256 hidden units (and outputs visual features ft). 
The LSTM in the LSTM A3C architecture has 256 hidden units (and outputs LSTM hidden activations h_t). The LSTMs in Figure 2c and 2d are fed extra inputs (past reward r_{t-1}, previous action a_{t-1} expressed as a one-hot vector of dimension 8, and agent-relative lateral and rotational velocity v_t encoded by a 6-dimensional vector), which are all concatenated to vector f_t. The Nav A3C architectures (Figure 2c,d) have a first LSTM with 64 or 128 hiddens and a second LSTM with 256 hiddens. The depth predictor modules g_d, g'_d and the loop closure detection module g_l are all single-layer MLPs with 128 hidden units. The depth MLPs are followed by 64 independent 8-dimensional softmax outputs (one per depth pixel). The loop closure MLP is followed by a 2-dimensional softmax output. We illustrate on Figure 8 the architecture of the Nav A3C+D+L+Dr agent.

[Figure 8: Details of the architecture of the Nav A3C+D+L+Dr agent, taking in RGB visual inputs x_t, past reward r_{t-1}, previous action a_{t-1} as well as agent-relative velocity v_t, and producing policy pi, value function V, depth predictions g_d(f_t) and g'_d(h_t), as well as loop closure detection g_l(h_t).]

Depth is taken as the Z-buffer from the Labyrinth environment (with values between 0 and 255), divided by 255 and taken to power 10 to spread the values in the interval [0, 1]. We empirically decided to use the following quantization: {0, 0.05, 0.175, 0.3, 0.425, 0.55, 0.675, 0.8, 1} to ensure a uniform binning across 8 classes. The previous version of the agent had a single depth prediction MLP g_d for regressing 8x16 = 128 depth pixels from the convnet outputs f_t.

[Footnotes 5-9: videos of the Nav A3C*+D1L agent. I-maze: https://youtu.be/PS4iJ7Hk_BU ; static maze 1: https://youtu.be/-HsjQoIou_c ; static maze 2: https://youtu.be/kH1AvRAYkbI ; random goal maze 1: https://youtu.be/5IBT2UADJY0 ; random goal maze 2: https://youtu.be/e10mXgBG9yo ]

The parameters of each of the modules point to a subset of a common vector of parameters. We optimise these parameters using an asynchronous version of RMSProp (Tieleman & Hinton, 2012). (Nair et al., 2015) was a recent example of asynchronous and parallel gradient updates in deep reinforcement learning; in our case, we focus on the specific Asynchronous Advantage Actor-Critic (A3C) reinforcement learning procedure in (Mnih et al., 2016).

Learning follows closely the paradigm described in (Mnih et al., 2016). We use 16 workers and the same RMSProp algorithm without momentum or centering of the variance. Gradients are computed over non-overlapping chunks of the episode. The score for each point of a training curve is the average over all the episodes the model gets to finish in 5e4 environment steps.

The whole experiments are run for a maximum of 1e8 environment steps. The agent has an action repeat of 4 as in (Mnih et al., 2016), which means that for 4 consecutive steps the agent will use the same action picked at the beginning of the series. For this reason, throughout the paper we actually report results in terms of agent-perceived steps rather than environment steps. That is, the maximal number of agent-perceived steps for any particular run is 2.5e7.
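As a concrete illustration of the depth preprocessing described above, the following numpy sketch maps a raw Z-buffer to the 8 depth classes using the quantization thresholds listed. The function name is ours, and the cropping and subsampling of the screen to 4x16 pixels mentioned in Section 2.1 is omitted; this is a sketch, not the authors' pipeline.

```python
import numpy as np

# Bin boundaries taken from the quantization {0, 0.05, 0.175, 0.3, 0.425, 0.55, 0.675, 0.8, 1}:
# the 7 interior thresholds define 8 depth classes.
DEPTH_EDGES = np.array([0.05, 0.175, 0.3, 0.425, 0.55, 0.675, 0.8])

def depth_targets(z_buffer):
    """Map a raw Z-buffer (values in [0, 255]) to per-pixel class ids in {0, ..., 7}."""
    d = (np.asarray(z_buffer, dtype=float) / 255.0) ** 10   # normalise, then spread values in [0, 1]
    return np.digitize(d, DEPTH_EDGES)                      # one class per depth pixel

# Example on a subsampled 4x16 depth map (64 pixels, one 8-way softmax each in the agent).
z = np.random.default_rng(0).integers(0, 256, size=(4, 16))
classes = depth_targets(z)
print(classes.shape, classes.min(), classes.max())
```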
In our grid we sample hyper-parameters from categorical distributions:
- Learning rate was sampled from [1e-4, 5e-4].
- Strength of the entropy regularization from [1e-4, 1e-3].
- Rewards were not scaled and not clipped in the new set of experiments. In our previous set of experiments, rewards were scaled by a factor from {0.3, 0.5} and clipped to 1 prior to back-propagation in the Advantage Actor-Critic algorithm.
- Gradients are computed over non-overlapping chunks of 50 or 75 steps of the episode. In our previous set of experiments, we used chunks of 100 steps.

The auxiliary tasks, when used, have hyperparameters sampled from:
- Coefficient beta_d of the depth prediction loss from convnet features L_d, sampled from {3.33, 10, 33}.
- Coefficient beta'_d of the depth prediction loss from LSTM hiddens L_d', sampled from {1, 3.33, 10}.
- Coefficient beta_l of the loop closure prediction loss L_l, sampled from {1, 3.33, 10}.

(A sketch of this sampling scheme appears below.) Loop closure uses the following thresholds: maximum distance for position similarity eta_1 = 1 square and minimum distance for removing trivial loop-closures eta_2 = 2 squares.

[Figure 9: (a) Random Goal maze (small): comparison of reward clipping. (b) Random Goal maze (small): comparison of depth prediction. Results are averaged over the top 5 random hyperparameters for each agent-task configuration. Star in the label indicates the use of reward clipping. Please see text for more details.]

C ADDITIONAL RESULTS

C.1 REWARD CLIPPING

Figure 9 shows additional learning curves. In particular, in the left plot we show that the baselines (A3C FF and A3C LSTM), as well as the Nav A3C agent without auxiliary losses, perform worse without reward clipping than with reward clipping. It seems that removing reward clipping makes learning unstable in the absence of auxiliary tasks. For this particular reason we chose to show the baselines with reward clipping in our main results.

C.2 DEPTH PREDICTION AS REGRESSION OR CLASSIFICATION TASKS

The right subplot of Figure 9 compares having depth as an input versus as a target. Note that using RGBD inputs to the Nav A3C agent performs even worse than predicting depth as a regression task, and in general is worse than predicting depth as a classification task.

C.3 NON-NAVIGATION TASKS IN 3D MAZE ENVIRONMENTS

We have evaluated the behaviour of the agents introduced in this paper, as well as agents with reward prediction, introduced in (Jaderberg et al., 2017) (Nav A3C*+R), and with a combination of reward prediction from the convnet and depth prediction from the policy LSTM (Nav A3C+RD2), on different 3D maze environments with non-navigation specific tasks. In the first environment, Seek-Avoid Arena, there are apples (yielding 1 point) and lemons (yielding -1 point) disposed in an arena, and the agent needs to pick all the apples before respawning; episodes last 20 seconds. The second environment, Stairway to Melon, is a thin square corridor; in one direction, there is a lemon followed by a stairway to a melon (10 points, resets the level) and in the other direction are 7 apples and a dead end, with the melon visible but not reachable. The agent spawns between the lemon and the apples with a random orientation. Both environments have been released in DeepMind Lab (Beattie et al., 2016). These environments do not require navigation skills such as shortest path planning, but a simple reward identification (lemon vs. apple or melon) and persistent exploration.
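The hyperparameter sampling scheme listed earlier in this appendix can be sketched in a few lines of Python. Treating the two continuous intervals as log-uniform is our assumption (the text only gives the ranges), and the dictionary keys are our own names rather than the authors'.

```python
import numpy as np

def sample_hyperparameters(rng):
    """Draw one replica's hyperparameters from the ranges and sets quoted in Appendix B."""
    return {
        # Continuous intervals; log-uniform is an assumption, the paper only gives the ranges.
        "learning_rate": float(np.exp(rng.uniform(np.log(1e-4), np.log(5e-4)))),
        "entropy_cost": float(np.exp(rng.uniform(np.log(1e-4), np.log(1e-3)))),
        "unroll_length": int(rng.choice([50, 75])),        # non-overlapping chunk length
        "beta_d": float(rng.choice([3.33, 10, 33])),       # depth loss from convnet features
        "beta_d_prime": float(rng.choice([1, 3.33, 10])),  # depth loss from LSTM hiddens
        "beta_l": float(rng.choice([1, 3.33, 10])),        # loop-closure loss
    }

rng = np.random.default_rng(0)
print(sample_hyperparameters(rng))
```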
As Figure 10 shows, there is no major difference between auxiliary tasks related to depth prediction or reward prediction. Depth prediction boosts the performance of the agent beyond that of the stacked LSTM architecture, hinting at a more general applicability of depth prediction beyond navigation tasks.

[Figure 10: Comparison of agent architectures over non-navigation maze configurations, Seek-Avoid Arena and Stairway to Melon, described in detail in (Beattie et al., 2016). Panels: (a) Seek-Avoid (learning curves), (b) Stairway to Melon (learning curves), (c) Seek-Avoid (layout), (d) Stairway to Melon (layout). Image credits for (c) and (d): (Jaderberg et al., 2017).]

C.4 SENSITIVITY TOWARDS HYPER-PARAMETER SAMPLING

For each of the experiments in this paper, 64 replicas were run with hyperparameters (learning rate, entropy cost) sampled from the same interval. Figure 11 shows that the Nav architectures with auxiliary tasks achieve higher results for a comparatively larger number of replicas, hinting at the fact that auxiliary tasks make learning more robust to the choice of hyperparameters.

[Figure 11: Plot of the Area Under the Curve (AUC) of the rewards achieved by the agents, across different experiments and on 3 different tasks: large static maze with fixed goals, large static maze with comparable layout but with dynamic goals, and the I-maze. The reward AUC values are computed for each replica; 64 replicas were run per experiment and the reward AUC values are sorted by decreasing value. Panels: (a) Static maze (small), (b) Random Goal maze (large), (c) Random Goal I-maze.]

C.5 ASYMPTOTIC PERFORMANCE OF THE AGENTS

Finally, we compared the asymptotic performance of the agents, both in terms of navigation (final rewards obtained at the end of the episode) and in terms of their representation in the policy LSTM. Rather than visualising the convolutional filters, we quantify the change in representation, with and without auxiliary task, in terms of position decoding, following the approach explained in Section 5.1. Specifically, we compare the baseline agent (LSTM A3C*) to a navigation agent with one auxiliary task (depth prediction), which gets about twice as many gradient updates for the same number of frames seen in the environment: once for the RL task and once for the auxiliary depth prediction task. As Table 3 shows, the performance of the baseline agent as well as the position decoding accuracy do significantly increase after twice the number of training steps (going from 57 points to 90 points, and from 33.4% to 66.5%), but do not reach the performance and position decoding accuracy of the Nav A3C+D2 agent after half the number of training frames. For this reason, we believe that the auxiliary tasks do more than simply accelerate training.

Table 3: Asymptotic performance analysis of two agents in the Random Goal 2 maze, comparing training for 120M Labyrinth frames vs. 240M frames.

Frames | Performance        | LSTM A3C* | Nav A3C+D2
120M   | Score (mean top 5) | 57        | 103
120M   | Position Acc       | 33.4      | 72.4
240M   | Score (mean top 5) | 90        | 114
240M   | Position Acc       | 64.1      | 80.6
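The AUC-based robustness summary of Figure 11 corresponds to a very simple computation, sketched below under the assumption that each replica's learning curve is available as paired (steps, rewards) arrays; the trapezoidal integration, the function names and the toy curves are ours, not the authors' evaluation code.

```python
import numpy as np

def reward_auc(steps, rewards):
    """Area under one replica's learning curve (reward vs. agent-perceived steps)."""
    steps = np.asarray(steps, dtype=float)
    rewards = np.asarray(rewards, dtype=float)
    return float(np.sum((rewards[1:] + rewards[:-1]) * np.diff(steps)) / 2.0)  # trapezoid rule

def sorted_replica_aucs(curves):
    """curves: list of (steps, rewards) pairs, one per replica. Returns AUCs in decreasing order."""
    return sorted((reward_auc(s, r) for s, r in curves), reverse=True)

# Toy example with 3 replicas of differing quality over 2.5e7 agent-perceived steps.
steps = np.linspace(0, 2.5e7, 100)
curves = [(steps, scale * (1 - np.exp(-steps / 5e6))) for scale in (100.0, 60.0, 10.0)]
print(sorted_replica_aucs(curves))
```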
rJXJeHYEg
Sk2iistgg
ICLR.cc/2017/conference/-/paper128/official/review
{"title": "This paper presents an approach to non-linear kernel dimensionality reduction with a trace norm regularizer in the feature space. The authors proposed an iterative minimization approach in order to obtain a local optimum of a relaxed problem. The paper contains errors and the experimental evaluation is not convincing. Only old techniques are compared against on very toy datasets. ", "rating": "4: Ok but not good enough - rejection", "review": "This paper presents an approach to non-linear kernel dimensionality reduction with a trace norm regularizer in the feature space. The authors proposed an iterative minimization approach in order to obtain a local optimum of a relaxed problem. \nThe paper contains errors and the experimental evaluation is not convincing. Only old techniques are compared against on very toy datasets. \n\nThe authors claim state-of-the-art, however, the oil dataset is not a real benchmark, and the comparisons are to very old approaches. \nThe experimental evaluation should demonstrate robustness to more complex noise and outliers, as this was one of the motivations in the introduction.\n\nThe authors do not address the out-of-sample problem. This is a problem of kernel-based methods vs LVMs, and thus should be addressed here.\n\n\nThe paper contains errors:\n\n- The last paragraph of section 1 says that this paper proposes a closed form solution to robust KPCA. This is simply wrong, as the proposed approach consists of iteratively solving a set of closed form updates and Levenberg-Marquardt optimizations. This is no longer closed form!\n\n- In the same paragraph (and later in the text) the authors claim that the proposed approach can be trivially generalized to incorporate other cost functions. This is not true, as in general there will be no more inner loop closed form updates and the authors will need to solve a much more complex optimization problem. \n\n- The third paragraph of section 2 claims that this paper presents a novel energy minimization framework to solve problems of the general form of eq. (2). However, this is not what the authors solve at the end. They solve a different problem that has been subject to at least two relaxations. It is not clear how solving for a local optimum of this doubly relaxed problem is related to the original problem they want to solve. \n\n- The paper says that Geiger et al. defined non-linearities on a latent space of pre-defined dimensionality. This is wrong. This paper discovers the dimensionality of the latent space by means of a regularizer that encourages the singular values to be sparse. Thus, it does not have a fixed dimensionality; the latent space is just bounded to be smaller than or equal to the dimensionality of the original space. \n\n\nIt is not clear to me why the authors say for LVMs such as GPLVM that \"the latent space is learned a priori with clean training data\". One can use different noise models within the GP framework. Furthermore, the proposed approach assumes Gaussian noise (see eq. 6), which is also the trivial case for GP-based LVMs. \n\n\nIt is not clear what the authors mean in the paper by \"pre-training\" or saying that techniques do not have a training phase. KPCA is trained via a closed-form update, but there is still training. \n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Non-linear Dimensionality Regularizer for Solving Inverse Problems
["Ravi Garg", "Anders Eriksson", "Ian Reid"]
Consider an ill-posed inverse problem of estimating causal factors from observations, one of which is known to lie near some (unknown) low-dimensional, non-linear manifold expressed by a predefined Mercer-kernel. Solving this problem requires simultaneous estimation of these factors and learning the low-dimensional representation for them. In this work, we introduce a novel non-linear dimensionality regularization technique for solving such problems without pre-training. We re-formulate Kernel-PCA as an energy minimization problem in which low dimensionality constraints are introduced as regularization terms in the energy. To the best of our knowledge, ours is the first attempt to create a dimensionality regularizer in the KPCA framework. Our approach relies on robustly penalizing the rank of the recovered factors directly in the implicit feature space to create their low-dimensional approximations in closed form. Our approach performs robust KPCA in the presence of missing data and noise. We demonstrate state-of-the-art results on predicting missing entries in the standard oil flow dataset. Additionally, we evaluate our method on the challenging problem of Non-Rigid Structure from Motion and our approach delivers promising results on CMU mocap dataset despite the presence of significant occlusions and noise.
["Computer vision", "Optimization", "Structured prediction"]
https://openreview.net/forum?id=Sk2iistgg
https://openreview.net/pdf?id=Sk2iistgg
https://openreview.net/forum?id=Sk2iistgg&noteId=rJXJeHYEg
Under review as a conference paper at ICLR 2017NON-LINEAR DIMENSIONALITY REGULARIZER FORSOLVING INVERSE PROBLEMSRavi GargUniversity of Adelaideravi.garg@adelaide.edu.auAnders ErikssonQueensland University of Technologyanders.eriksson@qut.edu.auIan ReidUniversity of Adelaideian.reid@adelaide.edu.auABSTRACTConsider an ill-posed inverse problem of estimating causal factors from observa-tions, one of which is known to lie near some (unknown) low-dimensional, non-linear manifold expressed by a predefined Mercer-kernel. Solving this problem re-quires simultaneous estimation of these factors and learning the low-dimensionalrepresentation for them. In this work, we introduce a novel non-linear dimension-ality regularization technique for solving such problems without pre-training.We re-formulate Kernel-PCA as an energy minimization problem in which lowdimensionality constraints are introduced as regularization terms in the energy.To the best of our knowledge, ours is the first attempt to create a dimensionalityregularizer in the KPCA framework. Our approach relies on robustly penalizingthe rank of the recovered factors directly in the implicit feature space to createtheir low-dimensional approximations in closed form.Our approach performs robust KPCA in the presence of missing data and noise.We demonstrate state-of-the-art results on predicting missing entries in the stan-dard oil flow dataset. Additionally, we evaluate our method on the challengingproblem of Non-Rigid Structure from Motion and our approach delivers promis-ing results on CMU mocap dataset despite the presence of significant occlusionsand noise.1 I NTRODUCTIONDimensionality reduction techniques are widely used in data modeling, visualization and unsuper-vised learning. Principal component analysis (PCAJolliffe (2002)), Kernel PCA (KPCASch ̈olkopfet al. (1998)) and Latent Variable Models (LVMsLawrence (2005)) are some of the well knowntechniques used to create low dimensional representations of the given data while preserving itssignificant information.One key deployment of low-dimensional modeling occurs in solving ill-posed inference problems.Assuming the valid solutions to the problem lie near a low-dimensional manifold (i.e. can beparametrized with a reduced set of variables) allows for a tractable inference for otherwise under-constrained problems. After the seminal work of Cand `es & Recht (2009); Recht et al. (2010) onguaranteed rank minimization of the matrix via trace norm heuristics Fazel (2002), many ill-posedcomputer vision problems have been tackled by using the trace norm — a convex surrogate of therank function — as a regularization term in an energy minimization frameworkCand `es & Recht(2009); Zhou et al. (2014). The flexible and easy integration of low-rank priors is one of key factorsfor versatility and success of many algorithms. For example, pre-trained active appearance modelsCootes et al. (2001) or 3D morphable models Blanz & Vetter (1999) are converted to robust featuretracking Poling et al. (2014), dense registration Garg et al. (2013b) and vivid reconstructions of natu-ral videos Garg et al. (2013a) with no a priori knowledge of the scene. Various bilinear factorizationproblems like background modeling, structure from motion or photometric stereo are also addressedwith a variational formulation of the trace norm regularization Cabral et al. 
(2013).1Under review as a conference paper at ICLR 2017On the other hand, although many non-linear dimensionality reduction techniques — in particularKPCA — have been shown to outperform their linear counterparts for many data modeling tasks,they are seldom used to solve inverse problems without using a training phase. A general (discrim-inative) framework for using non-linear dimensionality reduction is: (i) learn a low-dimensionalrepresentation for the data using training examples via the kernel trick (ii) project the test exam-ples on the learned manifold and finally (iii) find a data point (pre-image) corresponding to eachprojection in the input space.This setup has two major disadvantages. Firstly, many problems of interest come with corruptedobservations — noise, missing data and outliers — which violate the low-dimensional modelingassumption.Secondly, computing the pre-image of any point in the low dimensional feature subspaceis non-trivial: the pre-image for many points in the low dimensional space might not even existbecause the non linear feature mapping function used for mapping the data from input space to thefeature space is non-surjective.Previously, extensions to KPCA like Robust KPCA (RKPCANguyen & De la Torre (2009)) andprobabilistic KPCA (PKPCASanguinetti & Lawrence (2006)) with missing data have been proposedto address the first concern, while various additional regularizers have been used to estimate thepre-image robustly Bakir et al. (2004); Mika et al. (1998); Kwok & Tsang (2004); Abrahamsen &Hansen (2009).Generative models like LVMs Lawrence (2005) are often used for inference by searching the low-dimensional latent space for a location which maximizes the likelihood of the observations. Prob-lems like segmentation, tracking and semantic 3D reconstruction Prisacariu & Reid (2011); Dameet al. (2013) greatly benefit from using LVM. However, the latent space is learned a priori with cleantraining data in all these approaches.Almost all non-linear dimensionality reduction techniques are non-trivial to generalize for solvingill-posed problems (See section 4.2) without a pre-training stage. Badly under-constrained problemsrequire the low-dimensional constraints even for finding an initial solution, eliminating applicabilityof the standard “projection + pre-image estimation” paradigm. This hinders the utility of non-linear dimensionality reduction and a suitable regularization technique to penalize the non-lineardimensionality is desirable.S1R1S2R2...Causal Factors3D shapes ( Si) and the projection matrices ( Ri) 1Wi =RiSiFigure 1: Non-linear dimensionality regularizer forNRSfM. The top part of the figure explains the ill-posedinverse problem of recovering the causal factors (1) ;projection matrices Riand 3D structures Si, from 2Dimage observations (2) Wi’s, by minimizing the imagereprojection errorf(W;R;S ) =PikWiRiSik2.Assuming that the recovered 3D structures ( Si’s) liesnear an unknown non-linear manifold (represented bythe blue curve) in the input space, we propose to regu-larize the dimensionality of this manifold (3) — span ofthe non-linearly transformed shape vectors (Si)’s —by minimizingk(S)k. 
The non-linear transformation Φ is defined implicitly with a Mercer kernel and maps the non-linear manifold to a linear low-rank subspace (shown in blue line) of the RKHS.
Sum and Substance: A closer look at most non-linear dimensionality reduction techniques reveals that they rely upon a non-linear mapping function which maps the data from input space to a (usually) higher dimensional feature space. In this feature space the data is assumed to lie on a low-dimensional hyperplane — thus, a linear low-rank prior is apt in the feature space. Armed with this simple observation, our aim is to focus on incorporating the advances made in linear dimensionality reduction techniques into their non-linear counterparts, while addressing the problems described above. Figure 1 explains this central idea and the proposed dimensionality regularizer in a nutshell, with Non-Rigid Structure from Motion (NRSfM) as the example application.
Our Contribution: In this work we propose a unified framework for simultaneous robust KPCA and pre-image estimation while solving an ill-posed inference problem without a pre-training stage. In particular we propose a novel robust energy minimization algorithm which handles the implicitness of the feature space to directly penalize its rank by iteratively: (i) creating a robust low-dimensional representation for the data given the kernel matrix in closed form and (ii) reconstructing the noise-free version of the data (pre-image of the feature space projections) using the estimated low-dimensional representations in a unified framework.
The proposed algorithm: (i) provides a novel closed form solution to robust KPCA; (ii) yields state-of-the-art results on missing data prediction for the well-known oil flow dataset; (iii) outperforms state-of-the-art linear dimensionality (rank) regularizers to solve NRSfM; and (iv) can be trivially generalized to incorporate other cost functions in an energy minimization framework to solve various ill-posed inference problems.
2 PROBLEM FORMULATION
This paper focuses on solving a generic inverse problem of recovering a causal factor S = [s_1, s_2, ..., s_N] ∈ X^N from N observations W = [w_1, w_2, ..., w_N] ∈ Y^N such that f(W, S) = 0. Here the function f(observation, variable) is a generic loss function which aligns the observations W with the variable S (possibly via other causal factors, e.g. R or Z in Sections 4.1 and 4.2).
If f(W, S) = 0 is ill-conditioned (for example when Y << X), we want to recover the matrix S under the assumption that its columns lie near a low-dimensional non-linear manifold. This can be done by solving a constrained optimization problem of the following form:
  min_S rank(Φ(S))   s.t.   f(W, S) <= ε   (1)
where Φ(S) = [φ(s_1), φ(s_2), ..., φ(s_N)] ∈ H^N is the non-linear mapping of the matrix S from the input space X to the feature space H (also commonly referred to as the Reproducing Kernel Hilbert Space), via a non-linear mapping function φ : X → H associated with a Mercer kernel K such that K(S)_{i,j} = φ(s_i)^T φ(s_j).
In this paper we present a novel energy minimization framework to solve problems of the general form (1). As our first contribution, we relax the problem (1) by using the trace norm of Φ(S) — the convex surrogate of the rank function — as a penalization function. The trace norm ||M||_* := Σ_i σ_i(M) of a matrix M is the sum of its eigenvalues σ_i(M) and was proposed as a tight convex relaxation^1 of rank(M), and is used in many vision problems as a rank regularizer Fazel (2002).
Although the rank minimization via trace norm relaxation does not lead to a convex problem in the presence of a non-linear kernel function, we show in 3.2 that it leads to a closed-form solution to denoising a kernel matrix via penalizing the rank of the recovered data (S) directly in the feature space. With these changes we can rewrite (1) as:
  min_S f(W, S) + τ ||Φ(S)||_*   (2)
where τ is a regularization strength.^2
It is important to notice that although the rank of the kernel matrix K(S) is equal to the rank of Φ(S), ||K(S)||_* is merely ||Φ(S)||_F^2. Thus, directly penalizing the sum of the singular values of K(S) will not encourage low rank in the feature space.^3
Although we have relaxed the non-convex rank function, (2) is in general difficult to minimize due to the implicitness of the feature space. Most widely used kernel functions like the RBF do not have an explicit definition of the function φ. Moreover, the feature space for many kernels is high- (possibly infinite-) dimensional, leading to intractability. These issues are identified as the main barriers to robust KPCA and pre-image estimation Nguyen & De la Torre (2009). Thus, we have to reformulate (2) by applying the kernel trick so that the cost function (2) can be expressed in terms of the kernel function alone.
The key insight here is that under the assumption that the kernel matrix K(S) is positive semidefinite, we can factorize it as K(S) = C^T C. Although this factorization is non-unique, it is trivial to show the following:
  sqrt(σ_i(K(S))) = σ_i(C) = σ_i(Φ(S))
Thus:
  ||C||_* = ||Φ(S)||_*   for all C : C^T C = K(S)   (3)
where σ_i(·) is the function mapping the input matrix to its i-th largest eigenvalue. The row space of the matrix C in (3) can be seen to span the eigenvectors associated with the kernel matrix K(S) — hence the principal components of the non-linear manifold we want to estimate. Using (3), problem (2) can finally be written as:
  min_{S,C} f(W, S) + τ ||C||_*   s.t.   K(S) = C^T C   (4)
The above minimization can be solved with a soft relaxation of the manifold constraint by assuming that the columns of S lie near the non-linear manifold:
  min_{S,C} f(W, S) + (ρ/2) ||K(S) − C^T C||_F^2 + τ ||C||_*   (5)
As ρ → ∞, the optimum of (5) approaches the optimum of (4). A local optimum of (4) can be achieved using the penalty method of Nocedal & Wright (2006) by optimizing (5) while iteratively increasing ρ, as explained in Section 3.
Before moving on, we would like to discuss some alternative interpretations of (5) and its relationship to previous work – in particular LVMs. Intuitively, we can also interpret (5) from the probabilistic viewpoint as commonly used in latent variable model based approaches to define the kernel function Lawrence (2005). For example an RBF kernel with additive Gaussian noise and inverse width γ can be defined as K(S)_{i,j} = e^{−γ ||s_i − s_j||^2} + ε, where ε ∼ N(0, σ). In other words, with a finite σ, our model allows the data points to lie near a non-linear low-rank manifold instead of on it.
Footnotes: 1. More precisely, ||M||_* was shown to be the tight convex envelope of rank(M)·||M||_s, where ||M||_s represents the spectral norm of M. 2. 1/τ can also be viewed as a Lagrange multiplier to the constraints in (1). 3. Although it is clear that relaxing the rank of the kernel matrix to ||K(S)||_* is suboptimal, works like Huang et al. (2012); Cabral et al. (2013), with a variational definition of the nuclear norm, allude to the possibility of kernelization. Further investigation is required to compare this counterpart to our tighter relaxation.
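As a quick numerical sanity check of the identity in equation (3) above (any factor C with C^T C = K(S) has the same trace norm as Φ(S)), here is a minimal sketch assuming NumPy; the toy data and the RBF width are arbitrary choices for the example, not values from the paper.

```python
# Illustrative sketch (not part of the paper): a numerical check of equation (3),
# i.e. that a factor C with C^T C = K(S) has trace norm equal to the sum of the
# square roots of the eigenvalues of K(S).
import numpy as np

def rbf_kernel(S, gamma=0.5):
    """K_ij = exp(-gamma * ||s_i - s_j||^2) for columns s_i of S."""
    sq = np.sum(S**2, axis=0)
    d2 = sq[:, None] + sq[None, :] - 2.0 * S.T @ S
    return np.exp(-gamma * np.maximum(d2, 0.0))

rng = np.random.default_rng(0)
S = rng.normal(size=(3, 20))                 # 20 points in a 3-dimensional input space
K = rbf_kernel(S)

eigval, U = np.linalg.eigh(K)                # K is symmetric PSD: K = U diag(eigval) U^T
eigval = np.clip(eigval, 0.0, None)
C = np.diag(np.sqrt(eigval)) @ U.T           # one valid factor with C^T C = K

assert np.allclose(C.T @ C, K, atol=1e-8)
trace_norm_C = np.linalg.norm(C, ord='nuc')  # sum of singular values of C
print(trace_norm_C, np.sum(np.sqrt(eigval))) # the two values agree, as in eq. (3)
```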
It's worth noting here that like LVMs, our energy formulation also attempts to maximize the likelihood of regenerating the training data W (by choosing f(W, S) to be a simple least squares cost) while doing dimensionality reduction.
Note that in the closely related work Geiger et al. (2009), continuous rank penalization (with a logarithmic prior) has also been used for robust probabilistic non-linear dimensionality reduction and model selection in the LVM framework. However, unlike Geiger et al. (2009); Lawrence (2005), where the non-linearities are modeled in a latent space (of predefined dimensionality), our approach directly penalizes the non-linear dimensionality of the data in a KPCA framework and is applicable to solving inverse problems without pre-training.
3 OPTIMIZATION
We approach the optimization of (5) by solving the following two sub-problems in alternation:
  min_S f(W, S) + (ρ/2) ||K(S) − C^T C||_F^2   (6)
  min_C τ ||C||_* + (ρ/2) ||K(S) − C^T C||_F^2   (7)
Algorithm 1 outlines the approach and we give a detailed description and interpretations of both sub-problems (7) and (6) in the next two sections of the paper.
Algorithm 1: Inference with Proposed Regularizer.
Input: Initial estimate S0 of S.
Output: Low-dimensional S and kernel representation C.
Parameters: Initial ρ0 and maximum ρmax penalty, with scale ρs.
- S = S0; ρ = ρ0;
- while ρ <= ρmax do
  - while not converged do
    - Fix S and estimate C via the closed-form solution of (7) using Algorithm 2;
    - Fix C and minimize (6) to update S using the LM algorithm;
  - ρ = ρs · ρ;
3.1 PRE-IMAGE ESTIMATION TO SOLVE INVERSE PROBLEM
Subproblem (6) can be seen as a generalized pre-image estimation problem: we seek the factor s_i, which is the pre-image of the projection of φ(s_i) onto the principal subspace of the RKHS stored in C^T C, which best explains the observation w_i. Here (6) is generally a non-convex problem, unless the Mercer kernel is linear, and must therefore be solved using non-linear optimization techniques. In this work, we use the Levenberg-Marquardt algorithm for optimizing (6).
Notice that (6) only computes the pre-image for the feature space projections of the data points with which the non-linear manifold (matrix C) is learned. An extension to our formulation is desirable if one wants to use the learned non-linear manifold for denoising test data in a classic pre-image estimation framework. Although a valuable direction to pursue, it is out of the scope of the present paper.
3.2 ROBUST DIMENSIONALITY REDUCTION
Algorithm 2: Robust Dimensionality Reduction.
Input: Current estimate of S.
Output: Low-dimensional representation C.
Parameters: Current ρ and regularization strength τ.
- [U, Σ, U^T] = Singular Value Decomposition of K(S);  // Σ is a diagonal matrix storing the N singular values σ_i of K(S).
- for i = 1 to N do
  - Find the three solutions (l_r : r ∈ {1,2,3}) of: l^3 − σ_i l + τ/(2ρ) = 0;
  - set l_4 = 0;
  - l_r = max(l_r, 0) for all r ∈ {1,2,3,4};
  - r* = argmin_r { (ρ/2)(σ_i − l_r^2)^2 + τ l_r };
  - λ_i = l_{r*};
- C = Λ U^T;  // Λ is a diagonal matrix storing λ_i.
One can interpret sub-problem (7) as a robust form of KPCA where the kernel matrix has been corrupted with Gaussian noise and we want to generate its low-rank approximation. Although (7) is non-convex, we can solve it in closed form via singular value decomposition. This closed-form solution is outlined in Algorithm 2 and is based on the following theorem:
Theorem 1. With S^n ∋ A ⪰ 0, let A = U Σ U^T denote its singular value decomposition.
ThenminL2jjALTLjj2F+jjLjj (8)=nXi=12(i2i)2+i:(9)A minimizer Lof(8)is given byL= UT(10)with2Dn+,i2f2R+jpi;=2() = 0gSf0g, wherepa;bdenotes the depressed cubicpa;b(x) =x3ax+b.Dn+is the set of n-by-n diagonal matrices with non-negative entries.Theorem 1 shows that each eigenvalue of the minimizer Cof (7) can be obtained by solving adepressed cubic whose coefficients are determined by the corresponding eigenvalue of the kernelmatrix and the regularization strength . The roots of each cubic, together with zero, comprise aset of candidates for the corresponding eigenvalue of C. The best one from this set is obtained bychoosing the value which minimizes (9) (see Algorithm 2).As elaborated in Section 2, problem (7) can be seen as regularizing sum of square root ( L1=2norm)of the eigenvalues of the matrix K(S). In a closely related work Zongben et al. (2012), authorsadvocateL1=2norm as a better approximation for the cardinality of a vector then the more commonlyusedL1norm. A closed form solution for L1=2regularization similar to our work was outlined inZongben et al. (2012) and was shown to outperform the L1vector norm regularization for sparsecoding. To that end, our Theorem 1 and the proposed closed form solution (Algo 2) for (7) can5Under review as a conference paper at ICLR 2017Table 1: Performance comparison on missing data completion on Oil Flow Dataset: Row 1 shows the amountof missing data and subsequent rows show the mean and standard deviation of the error in recovered datamatrix over 50 runs on 100 samples of oil flow dataset by: (1) The mean method (also the initialization ofother methods) where the missing entries are replaced by the mean of the known values of the correspondingattributes, (2) 1-nearest neighbor method in which missing entries are filled by the values of the nearest point,(3) PPCA Tipping & Bishop (1999), (4) PKPCA of Sanguinetti & Lawrence (2006), (5)RKPCA Nguyen & Dela Torre (2009) and our method.p(del) 0.05 0.10 0.25 0.50mean 134 284 709 13971-NN 53 1459020 NAPPCA 3.7.6 92 5010 14030PKPCA 51 123 326 10020RKPCA 3.21.9 84 278 8315Ours 2.32 63 227 7011be seen as generalization of Zongben et al. (2012) to include the L1=2matrix norms for which asimplified proof is included in the Appendix A. It is important to note however, that the motivationand implication of using L1=2regularization in the context of non-linear dimensionality reductionare significantly different to that of Zongben et al. (2012) and related work Du et al. (2013); Zhaoet al. (2014) which are designed for linear modeling of the causal factors. The core insight of usingL1regularization in the feature space via the parametrization given in 3 facilitates a natural way fornon-linear modeling of causal factors with low dimensionality while solving an inverse problem bymaking feature space tractable.4 E XPERIMENTSIn this section we demonstrate the utility of the proposed algorithm. The aims of our experiments aretwofold: (i) to compare our dimensionality reduction technique favorably with KPCA and its robustvariants; and (ii) to demonstrate that the proposed non-linear dimensionality regularizer consistentlyoutperforms its linear counterpart (a.k.a. nuclear norm) in solving inverse problems.4.1 M ATRIX COMPLETIONThe nuclear norm has been introduced as a low rank prior originally for solving the matrix comple-tion problem. Thus, it is natural to evaluate its non-linear extensions on the same task. 
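Before the matrix-completion setup below, here is a minimal NumPy sketch (not the authors' implementation) of the closed-form C-update described in Algorithm 2 and Theorem 1 above; rho and tau stand for the penalty and regularization strengths of sub-problem (7), and the test matrix is an arbitrary example.

```python
# Illustrative sketch (not the authors' code): the closed-form C-update of
# Algorithm 2 / Theorem 1. For each eigenvalue sigma_i of K(S), the candidate
# eigenvalues of C are the non-negative real roots of the depressed cubic
# l^3 - sigma_i*l + tau/(2*rho) = 0, together with 0; the candidate minimising
# (rho/2)*(sigma_i - l^2)^2 + tau*l is kept.
import numpy as np

def robust_dimensionality_reduction(K, rho, tau):
    """Return C minimising tau*||C||_* + (rho/2)*||K - C^T C||_F^2 (sub-problem (7))."""
    sigma, U = np.linalg.eigh(K)            # K assumed symmetric PSD
    sigma = np.clip(sigma, 0.0, None)
    lam = np.zeros_like(sigma)
    for i, s in enumerate(sigma):
        roots = np.roots([1.0, 0.0, -s, tau / (2.0 * rho)])
        cands = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= 0.0]
        cands.append(0.0)
        cost = lambda l: 0.5 * rho * (s - l**2)**2 + tau * l
        lam[i] = min(cands, key=cost)
    return np.diag(lam) @ U.T               # C = Lambda U^T, as in Theorem 1

# Tiny usage example on a random PSD matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(10, 10))
K = A @ A.T
C = robust_dimensionality_reduction(K, rho=1.0, tau=5.0)
print(np.linalg.matrix_rank(C, tol=1e-6))   # typically lower rank than K
```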
AssumingW2Rmnto be the input matrix and Za binary matrix specifying the availability of the observa-tions inW, Algorithm 1 can be used for recovering a complete matrix Swith the following choiceoff(W;Z;S ):f(W;Z;S ) =kZ(WS)k2F (11)whererepresents Hadamard product.To demonstrate the robustness of our algorithm for matrix completion problem, we choose 100training samples from the oil flow dataset described in section 3.2 and randomly remove the elementsfrom the data with varying range of probabilities to test the performance of the proposed algorithmagainst various baselines. Following the experimental setup as specified in Sanguinetti & Lawrence(2006), we repeat the experiments with 50 different samples of Z. We report the mean and standarddeviation of the root mean square reconstruction error for our method with the choice of = 0:1,alongside five different methods in Table 1. Our method significantly improves the performance ofmissing data completion compared to other robust extensions of KPCA Tipping & Bishop (1999);Sanguinetti & Lawrence (2006); Nguyen & De la Torre (2009), for every probability of missingdata.Although we restrict our experiments to least-squares cost functions, it is vital to restate here thatour framework could trivially incorporate robust functions like the L1norm instead of the Frobeniusnorm — as a robust data term f(W;Z;S )— to generalize algorithms like Robust PCA Wright et al.(2009) to their non-linear counterparts.4.2 K ERNEL NON -RIGID STRUCTURE FROM MOTION6Under review as a conference paper at ICLR 2017Figure 2: Non-linear dimensionality regular-isation improves NRSfM performance com-pared to its linear counterpart. Figure showsthe ground truth 3D structures in red wire-frameoverlaid with the structures estimated using: (a)proposed non-linear dimensionality regularizershown in blue dots and (b) corresponding lin-ear dimensionality regularizer (TNH) shown inblack crosses, for sample frames of CMU mo-cap sequence. Red circles represent the 3D pointsfor which the projections were known whereassquares annotated missing 2D observations. Seetext and Table 2 for details.Non-rigid structure from motion under orthographyis an ill-posed problem where the goal is to esti-mate the camera locations and 3D structure of a de-formable objects from a collection of 2D imageswhich are labeled with landmark correspondencesBregler et al. (2000). Assuming si(xj)2R3tobe the 3D location of point xjon the deformableobject in the ithimage, its orthographic projectionwi(xj)2R2can be written as wi(x) =Risi(xj),whereRi2R23is a orthographic projection ma-trix Bregler et al. (2000). Notice that as the objectdeforms, even with given camera poses, reconstruct-ing the sequence by least-squares reprojection errorminimization is an ill-posed problem. In their semi-nal work, Bregler et al. (2000) proposed to solve thisproblem with an additional assumption that the re-constructed shapes lie on a low-dimensional linearsubspace and can be parameterized as linear combi-nations of a relatively low number of basis shapes.NRSfM was then cast as the low-rank factorizationproblem of estimating these basis shapes and corre-sponding coefficients.Recent work, like Dai et al. (2014); Garg et al.(2013a) have shown that the trace norm regularizercan be used as a convex envelope of the low-rankprior to robustly address ill-posed nature of the prob-lem. 
A good solution to NRSfM can be achieved byoptimizing:minS;RkSk+FXi=1NXj=1Zi(xj)kwi(xj)Risi(xj)k2F (12)whereSis the shape matrix whose columns are 3Ndimensional vectors storing the 3D coordinatesSi(xj)of the shapes and Zi(xj)is a binary variable indicating if projection of point xjis availablein the image i.Assuming the projection matrices to be fixed, this problem is convex and can be exactly solvedwith standard convex optimization methods. Additionally, if the 2D projections wi(xj)are noisefree, optimizing (12) with very small corresponds to selecting the the solution — out of the manysolutions — with (almost) zero projection error, which has minimum trace norm Dai et al. (2014).Thus henceforth, optimization of (12) is referred as the trace norm heuristics (TNH). We solve thisproblem with a first order primal-dual variant of the algorithm given in Garg et al. (2013a), whichcan handle missing data. The algorithm is detailed and compared favorably with the state of the artNRSfM approaches (based on linear dimensionality regularization) Appendix C.A simple kernel extension of the above optimization problem is:minS;Rk(S)k+FXi=1NXj=1Zi(xj)kwi(xj)Risi(xj)k2F| {z }f(W;Z;R;S )(13)where (S)is the non-linear mapping of Sto the feature space using an RBF kernel.With fixed projection matrices R, (13) is of the general form (2), for which the local optima can befound using Algorithm 1.7Under review as a conference paper at ICLR 2017Table 2: 3D reconstruction errors for linear and non-linear dimensionality regularization with ground truthcamera poses. Column 1 and 4 gives gives error for TNH while column (2-3) and (5-6) gives the correspondingerror for proposed method with different width of RBF kernel. Row 5 reports the mean error over 4 sequences.DatasetNo Missing Data 50% Missing DataLinear Non-Linear Linear Non-Lineardmaxdmed dmaxdmedDrink 0.0227 0.0114 0.0083 0.0313 0.0248 0.0229Pickup 0.0487 0.0312 0.0279 0.0936 0.0709 0.0658Yoga 0.0344 0.0257 0.0276 0.0828 0.0611 0.0612Stretch 0.0418 0.0286 0.0271 0.0911 0.0694 0.0705Mean 0.0369 0.0242 0.0227 0.0747 0.0565 0.05514.2.1 R ESULTS ON THE CMU DATASETWe use a sub-sampled version of CMU mocap dataset by selecting every 10thframe of the smoothlydeforming human body consisting 41 mocap points used in Dai et al. (2014).4In our experiments we use ground truth camera projection matrices to compare our algorithm againstTNH. The advantage of this setup is that with ground-truth rotation and no noise, we can avoid themodel selection (finding optimal regularization strength ) by setting it low enough. We run theTNH with= 107and use this reconstruction as initialization for Algorithm 1. For the proposedmethod, we set = 104and use following RBF kernel width selection approach:Maximum distance criterion ( dmax): we set the maximum distance in the feature space tobe3. Thus, the kernel matrix entry corresponding to the shape pairs obtained by TNHwith maximum Euclidean distance becomes e9=2.Median distance criterion ( dmed): the kernel matrix entry corresponding to the medianeuclidean distance is set to 0.5.Following the standard protocol in Dai et al. (2014); Akhter et al. 
(2009), we quantify the recon-struction results with normalized mean 3D errors e3D=1FNPiPjeij, whereeijis the euclideandistance of a reconstructed point jin frameifrom the ground truth, is the mean of standard devi-ation for 3 coordinates for the ground truth 3D structures, and F;N are number of input images andnumber of points reconstructed.Table 2 shows the results of the TNH and non-linear dimensionality regularization based methodsusing the experimental setup explained above, both without missing data and after randomly remov-ing 50% of the image measurements. Our method consistently beats the TNH baseline and improvesthe mean reconstruction error by 40% with full data and by 25% when used with 50% miss-ing data. Figure 2 shows qualitative comparison of the obtained 3D reconstruction using TNH andproposed non-lienar dimensionality regularization technique for some sample frames from varioussequences. We refer readers to Appendix B for results with simultaneous reconstruction pose opti-mization.5 C ONCLUSIONIn this paper we have introduced a novel non-linear dimensionality regularizer which can be incor-porated into an energy minimization framework, while solving an inverse problem. The proposedalgorithm for penalizing the rank of the data in the feature space has been shown to be robust to noiseand missing observations. We have picked NRSfM as an application to substantiate our argumentsand have shown that despite missing data and model noise (such as erroneous camera poses) ouralgorithm significantly outperforms state-of-the-art linear counterparts.Although our algorithm currently uses slow solvers such as the penalty method and is not directlyscalable to very large problems like dense non-rigid reconstruction, we are actively consideringalternatives to overcome these limitations. An extension to estimate pre-images with a problem-4Since our main goal is to validate the usefulness of the proposed non-linear dimensionality regularizer, weopt for a reduced size dataset for more rapid and flexible evaluation.8Under review as a conference paper at ICLR 2017specific loss function is possible, and this will be useful for online inference with pre-learned low-dimensional manifolds.Given the success of non-linear dimensionality reduction in modeling real data and overwhelminguse of the linear dimensionality regularizers in solving real world problems, we expect that pro-posed non-linear dimensionality regularizer will be applicable to a wide variety of unsupervisedinference problems: recommender systems; 3D reconstruction; denoising; shape prior based objectsegmentation; and tracking are all possible applications.REFERENCESTrine Julie Abrahamsen and Lars Kai Hansen. Input space regularization stabilizes pre-images forkernel pca de-noising. In EEE International Workshop on Machine Learning for Signal Process-ing, pp. 1–6, 2009.Ijaz Akhter, Yaser Sheikh, Sohaib Khan, and Takeo Kanade. Nonrigid structure from motion intrajectory space. In Advances in neural information processing systems , pp. 41–48, 2009.Gokhan H Bakir, Jason Weston, and Bernhard Sch ̈olkopf. Learning to find pre-images. Advances inneural information processing systems , 16(7):449–456, 2004.Christopher M Bishop and Gwilym D James. Analysis of multiphase flows using dual-energygamma densitometry and neural networks. Nuclear Instruments and Methods in Physics ResearchSection A: Accelerators, Spectrometers, Detectors and Associated Equipment , 327(2):580–593,1993.V olker Blanz and Thomas Vetter. 
A morphable model for the synthesis of 3d faces. In 26th annualconference on Computer graphics and interactive techniques , pp. 187–194, 1999.Christoph Bregler, Aaron Hertzmann, and Henning Biermann. Recovering non-rigid 3d shape fromimage streams. In IEEE Conference on Computer Vision and Pattern Recognition , pp. 690–696,2000.R. Cabral, F. De la Torre, J. P. Costeira, and A. Bernardino. Unifying nuclear norm and bilinearfactorization approaches for low-rank matrix decomposition. In International Conference onComputer Vision (ICCV) , 2013.Emmanuel J Cand `es and Benjamin Recht. Exact matrix completion via convex optimization. Foun-dations of Computational mathematics , 9(6):717–772, 2009.Antonin Chambolle and Thomas Pock. A first-order primal-dual algorithm for convex problems withapplications to imaging. Journal of Mathematical Imaging and Vision , 40(1):120–145, 2011.Timothy F Cootes, Gareth J Edwards, and Christopher J Taylor. Active appearance models. IEEETransactions on pattern analysis and machine intelligence , 23(6):681–685, 2001.Yuchao Dai, Hongdong Li, and Mingyi He. A simple prior-free method for non-rigid structure-from-motion factorization. International Journal of Computer Vision , 107(2):101–122, 2014.Amaury Dame, Victor Adrian Prisacariu, Carl Yuheng Ren, and Ian Reid. Dense reconstructionusing 3d object shape priors. In Computer Vision and Pattern Recognition , pp. 1288–1295. IEEE,2013.Rong Du, Cailian Chen, Zhiyi Zhou, and Xinping Guan. L 1/2-based iterative matrix completion fordata transmission in lossy environment. In Computer Communications Workshops (INFOCOMWKSHPS), 2013 IEEE Conference on , pp. 65–66. IEEE, 2013.Maryam Fazel. Matrix rank minimization with applications . PhD thesis, Stanford University, 2002.Ravi Garg, Anastasios Roussos, and Lourdes Agapito. Dense variational reconstruction of non-rigidsurfaces from monocular video. In Computer Vision and Pattern Recognition , pp. 1272–1279,2013a.9Under review as a conference paper at ICLR 2017Ravi Garg, Anastasios Roussos, and Lourdes Agapito. A variational approach to video registrationwith subspace constraints. International journal of computer vision , 104(3):286–314, 2013b.Andreas Geiger, Raquel Urtasun, and Trevor Darrell. Rank priors for continuous non-linear dimen-sionality reduction. In Computer Vision and Pattern Recognition , pp. 880–887. IEEE, 2009.Paulo FU Gotardo and Aleix M Martinez. Kernel non-rigid structure from motion. In IEEE Inter-national Conference on Computer Vision , pp. 802–809, 2011a.Paulo FU Gotardo and Aleix M Martinez. Non-rigid structure from motion with complementaryrank-3 spaces. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on ,pp. 3065–3072. IEEE, 2011b.Dong Huang, Ricardo Silveira Cabral, and Fernando De la Torre. Robust regression. In EuropeanConference on Computer Vision (ECCV) , 2012.Ian Jolliffe. Principal component analysis . Wiley Online Library, 2002.JT-Y Kwok and Ivor W Tsang. The pre-image problem in kernel methods. IEEE Transactions onNeural Networks, , 15(6):1517–1525, 2004.Neil D Lawrence. Probabilistic non-linear principal component analysis with gaussian process latentvariable models. The Journal of Machine Learning Research , 6:1783–1816, 2005.Sebastian Mika, Bernhard Sch ̈olkopf, Alex J Smola, Klaus-Robert M ̈uller, Matthias Scholz, andGunnar R ̈atsch. Kernel pca and de-noising in feature spaces. In NIPS , volume 4, pp. 7, 1998.Minh Hoai Nguyen and Fernando De la Torre. Robust kernel principal component analysis. 
InAdvances in Neural Information Processing Systems . 2009.Jorge Nocedal and Stephen J. Wright. Numerical optimization . Springer, New York, 2006.Bryan Poling, Gilad Lerman, and Arthur Szlam. Better feature tracking through subspace con-straints. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on , pp.3454–3461. IEEE, 2014.Victor Adrian Prisacariu and Ian Reid. Nonlinear shape manifolds as shape priors in level set seg-mentation and tracking. In Computer Vision and Pattern Recognition , pp. 2185–2192. IEEE,2011.Benjamin Recht, Maryam Fazel, and Pablo A Parrilo. Guaranteed minimum-rank solutions of linearmatrix equations via nuclear norm minimization. SIAM review , 52(3):471–501, 2010.Ralph Tyrrell Rockafellar. Conjugate duality and optimization , volume 14. SIAM, 1974.Guido Sanguinetti and Neil D Lawrence. Missing data in kernel pca. In Machine Learning: ECML2006 , pp. 751–758. Springer, 2006.Bernhard Sch ̈olkopf, Alexander Smola, and Klaus-Robert M ̈uller. Nonlinear component analysis asa kernel eigenvalue problem. Neural computation , 10(5):1299–1319, 1998.Michael E Tipping and Christopher M Bishop. Probabilistic principal component analysis. Journalof the Royal Statistical Society: Series B (Statistical Methodology) , 61(3):611–622, 1999.John Wright, Arvind Ganesh, Shankar Rao, Yigang Peng, and Yi Ma. Robust principal componentanalysis: Exact recovery of corrupted low-rank matrices via convex optimization. In Advances inNeural Information Processing Systems , pp. 2080–2088. 2009.Qian Zhao, DeYu Meng, and ZongBen Xu. Robust sparse principal component analysis. ScienceChina Information Sciences , 57(9):1–14, 2014.Xiaowei Zhou, Can Yang, Hongyu Zhao, and Weichuan Yu. Low-rank modeling and its applicationsin image analysis. ACM Computing Surveys (CSUR) , 47(2):36, 2014.Xu Zongben, Chang Xiangyu, Xu Fengmin, and Zhang Hai. L1/2 regularization: a thresholdingrepresentation theory and a fast solver. IEEE Transactions on neural networks and learningsystems , 23(7):1013–1027, 2012.10Under review as a conference paper at ICLR 2017A P ROOF OF THEOREM 3.1Proof. We will prove theorem 1 by first establishing a lower bound for (8) and subsequently showing that thislower bound is obtained at Lgiven by (10). The rotational invariance of the entering norms allows us to write(8) as:min2Dn;WTW=I2jjW2WTjj2F+jjjj: (14)Expanding (14) we obtain2min;Wtr22 trW2WT+ tr4+2nXi=1i (15)=2min;WnXi=12i+4i+2i2nXi=1nXj=1w2ij2ji (16)2minnXi=12i22ii+4i+2i(17)=2nXi=1mini02i22ii+4i+2i(18)The inequality in (17) follows directly by applying H ̈older’s inequality to (16) and using the property that thecolumn vectors wiare unitary.Next, withL= UTin (8) we have2jjALTLjj2F+jjLjj=2jj2jj2F+jjjj=nXi=12(i2i)2+i: (19)Finally, since the subproblems in (18) are separable in i, its minimizer must be KKT-points of the individualsubproblems. As the constraints are simple non-negativity constraints, these KKT points are either (positive)stationary points of the objective functions or 0. It is simple to verify that the stationary points are given by theroots of the cubic function pi;=2. Hence it follows that there exists a isuch that22i22ii+4i+2i2(i2i)2+i; (20)8i0, which completes the proof.A.1 V ALIDATING THE CLOSED FORM SOLUTIONGiven the relaxations proposed in Section 2, our assertion that the novel trace regularization basednon-linear dimensionality reduction is robust need to be substantiated. 
To that end, we evaluate ourclosed-form solution of Algorithm 2 on the standard oil flow dataset introduced in Bishop & James(1993).This dataset comprises 1000 training and 1000 testing data samples, each of which is of 12 dimen-sions and categorized into one of three different classes. We add zero mean Gaussian noise withvarianceto the training data5and recover the low-dimensional manifold for this noisy trainingdataSwith KPCA and contrast this with the results from Algorithm 2. An inverse width of theGaussian kernel = 0:075is used for all the experiments on the oil flow dataset.It is important to note that in this experiment, we only estimate the principal components (and theirvariances) that explain the estimated non-linear manifold, i.e. matrix Cby Algorithm 2, withoutreconstructing the denoised version of the corrupted data samples.Both KPCA and our solution require model selection (choice of rank and respectively) whichis beyond the scope of this paper. Here we resort to evaluate the performance of both methodsunder different parameters settings. To quantify the accuracy of the recovered manifold ( C) we usefollowing criteria:5Note that our formulation assumes Gaussian noise in K(S)where as for this evaluation we add noise to Sdirectly.11Under review as a conference paper at ICLR 2017Table 3: Robust dimensionality reduction accuracy by KPCA versus our closed-form solution on the full oilflow dataset. Columns from left to right represent: (1) standard deviation of the noise in training samples (2-3)Error in the estimated low-dimensional kernel matrix by (2) KPCA and (3) our closed-form solution, (4-5)Nearest neighbor classification error of test data using (4) KPCA and (5) our closed-form solution respectively.Manifold Error Classification ErrorSTD KPCA Our CFS KPCA Our CFS.2 0.1099 0.1068 9.60% 9.60%.3 0.2298 0.2184 19.90% 15.70 %.4 0.3522 0.3339 40.10% 22.20 %0 2 4 6 810 12 14 160.10.150.20.250.30.350.4Rank of kernel matrixManifold error KPCA,σ=.2Ours,σ=.2KPCA,σ=.3Ours,σ=.3KPCA,σ=.4Ours,σ=.4Figure 3: Performance comparison between KPCA and our Robust closed-form solution with dimensionalityregularization on oil flow dataset with additive Gaussian noise of standard deviation . Plots show the normal-ized kernel matrix errors with different rank of the model. Kernel PCA results are shown in dotted line withdiamond while ours are with solid line with a star. Bar-plot show the worst and the best errors obtained by ourmethod for a single rank of recovered kernel matrix.Manifold Error : A good manifold should preserve maximum variance of the data — i.e.it should be able to generate a denoised version K(Sest) =CTCof the noisy kernelmatrixK(S). We define the manifold estimation error as kK(Sest)K(SGT)k2F, whereK(SGT)is the kernel matrix derived using noise free data. Figure 3 shows the manifoldestimation error for KPCA and our method for different rank and parameter respectively.6Classification error: The accuracy of a non-linear manifold is often also tested by the near-est neighbor classification accuracy. 
We select the estimated manifold which gives mini-mum Manifold Error for both the methods and report 1NN classification error (percentageof misclassified example) of the 1000 test points by projecting them onto estimated mani-folds.B K ERNEL NRS FMWITH CAMERA POSE ESTIMATIONExtended from section 4.2Table 4 shows the reconstruction performance on a more realistic experimental setup, with the mod-ification that the camera projection matrices are initialized with rigid factorization and were refinedwith the shapes by optimizing (2). To solve NRSfM problem with unknown projection matrices,we parameterize each Riwith quaternions and alternate between refining the 3D shapes Sand pro-jection matrices Rusing LM. The regularization strength was selected for the TNH method bygolden section search and parabolic interpolation for every test case independently. This ensures thebest possible performance for the baseline. For our proposed approach was kept to 104for allsequences for both missing data and full data NRSfM. This experimental protocol somewhat disad-vantages the non-linear method, since its performance can be further improved by a judicious choiceof the regularization strength.6Errors from non-noisy kernel matrix can be replaced by cross validating the entries of the kernel matrix formodel selection for more realistic experiment.12Under review as a conference paper at ICLR 2017Table 4: 3D reconstruction errors for linear and non-linear dimensionality regularization with noisy camerapose initialization from rigid factorization and refined in alternation with shape. The format is same as Table 2.DatasetNo Missing Data 50% Missing DataLinear Non-Linear Linear Non-Linear== 104== 104dmaxdmed dmaxdmedDrink 0.0947 0.0926 0.0906 0.0957 0.0942 0.0937Pickup 0.1282 0.1071 0.1059 0.1598 0.1354 0.1339Yoga 0.2912 0.2683 0.2639 0.2821 0.2455 0.2457Stretch 0.1094 0.1043 0.1031 0.1398 0.1459 0.1484Mean 0.1559 0.1430 0.1409 0.1694 0.1552 0.1554However our purpose is primarily to show that the non-linear method adds value even without time-consuming per-sequence tuning. To that end, note that despite large errors in the camera pose esti-mations by TNH and 50% missing measurements, the proposed method shows significant ( 10%)improvements in terms of reconstruction errors, proving our broader claims that non-linear repre-sentations are better suited for modeling real data, and that our robust dimensionality regularizer canimprove inference for ill-posed problems.As suggested by Dai et al. (2014), robust camera pose initialization is beneficial for the structure es-timation. We have used rigid factorization for initializing camera poses here but this can be triviallychanged. We hope that further improvements can be made by choosing better kernel functions, withcross validation based model selection (value of ) and with a more appropriate tuning of kernelwidth. Selecting a suitable kernel and its parameters is crucial for success of kernelized algorithms.It becomes more challenging when no training data is available. We hope to explore other kernelfunctions and parameter selection criteria in our future work.We would also like to contrast our work with Gotardo & Martinez (2011a), which is the only workwe are aware of where non-linear dimensionality reduction is attempted for NRSfM. 
While esti-mating the shapes lying on a two dimensional non-linear manifold, Gotardo & Martinez (2011a)additionally assumes smooth 3D trajectories (parametrized with a low frequency DCT basis) and apre-defined hard linear rank constraint on 3D shapes. The method relies on sparse approximation ofthe kernel matrix as a proxy for dimensionality reduction. The reported results were hard to replicateunder our experimental setup for a fair comparison due to non-smooth deformations. However, incontrast to Gotardo & Martinez (2011a), our algorithm is applicable in a more general setup, canbe modified to incorporate smoothness priors and robust data terms but more importantly, is flexibleto integrate with a wide range of energy minimization formulations leading to a larger applicabilitybeyond NRSfM.C TNH ALGORITHM FOR NRS FMIn section 4.2, we have compared the proposed non-linear dimensionality reduction prior against avariant of Garg et al. (2013a) which handles missing data by optimizing:minS;RkSk+FXi=1NXj=1Zi(xj)kwi(xj)Risi(xj)k2(21)This problem is convex in Sgiven noise free projection matrix Ri’s but non-differentiable. Tooptimize (21), we first rewrite it in its primal-dual form by dualizing the trace norm7:maxQminS;R <S;Q> +FXi=1NXj=1Zi(xj)kwi(xj)Risi(xj)k2s:t:kQks1 (22)whereQ2RXNstores the dual variables to Sandk:ksrepresent spectral norm (highest eigen-value) of a matrix.7For more details on primal dual formulation and dual norm of the trace norm see Rockafellar (1974); Rechtet al. (2010); Chambolle & Pock (2011).13Under review as a conference paper at ICLR 2017Algorithm 3: Trace norm Heuristics.Input : Initial estimates S0;R0ofSandR.Output : Low-dimensional Sand camera poses R.Parameters : Regularization strength , measurements Wand binary mask Z.-S=S0; R=R0;// set iteration count step size and duals Q-= 0;-= 1=;-Q= 0;while not converged do// projection matrix estimation- FixS;Q and refineRifor every image iwith LM;// steepest descend update for Sijfor each point xjand each frame ifori= 1toFdoforj= 1toNdo-S+1ij=I22+(ZijRTiRi)1(SijQij+RTi(Zijwij));// accelerated steepest ascend update for Q-Q=Q+(2Sn+1Sn);-UDVT= singular value decomposition of Q;-D= min(D;1);-Q+1=UDVT;// Go to next iteration-=+ 1Table 5: 3D reconstruction errors for different NRSfM approaches and our TNH Algorithm given ground truthcamera projection matrices. Results for all the methods (except TNH) are taken from Dai et al. (2014).Dataset PTAAkhter et al. (2009) CSF2Gotardo & Martinez (2011b) BMMDai et al. (2014) TNHDrink 0.0229 0.0215 0.0238 0.0237Pick-up 0.0992 0.0814 0.0497 0.0482Yoga 0.0580 0.0371 0.0334 0.0333Stretch 0.0822 0.0442 0.0456 0.0431We choose quaternions to perametrize the 23camera matrices Rito satisfy orthonormality con-straints as done in Garg et al. (2013a) and optimize the saddle point problem (22) using alternation.In particular, for a single iteration: (i) we optimize the camera poses Ri’s using LM, (ii) take asteepest descend step for updating Sand (ii) a steepest ascend step for updating Qwhich is fol-lowed by projecting its spectral norm to unit ball. Given ground truth camera matrices ( withoutstep (i)), alternation (ii-iii) can be shown to reach global minima of (22). Algorithm 3 outlines TNHalgorithm.As the main manuscript uses NRSfM only as a practical application of our non-linear dimension-ality reduction prior, we have restricted our NRSfM experiments to only compare the proposedmethod against its linear counterpart. 
For timely evaluation, the reported experiments were conducted on the sub-sampled CMU mocap dataset. Here, we supplement the arguments presented in the main manuscript by favorably comparing the linear dimensionality reduction based NRSfM algorithm (TNH) to other NRSfM methods on full-length CMU mocap sequences.
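As an illustration of the dual update in Algorithm 3 above, here is a minimal sketch (not the authors' code) of the spectral-norm projection applied to the dual variable Q after each ascent step, assuming NumPy; the example matrix is arbitrary.

```python
# Illustrative sketch (not the authors' code): the dual-variable projection step of
# the TNH solver (Algorithm 3). After the ascent step, the dual Q of the trace norm
# is projected onto the unit spectral-norm ball by clamping its singular values at 1.
import numpy as np

def project_spectral_unit_ball(Q):
    """Return the projection of Q onto {Q : ||Q||_s <= 1} (singular values <= 1)."""
    U, D, Vt = np.linalg.svd(Q, full_matrices=False)
    return (U * np.minimum(D, 1.0)) @ Vt

# Tiny usage example.
rng = np.random.default_rng(0)
Q = rng.normal(size=(6, 8)) * 3.0
Qp = project_spectral_unit_ball(Q)
print(np.linalg.norm(Qp, ord=2) <= 1.0 + 1e-9)   # spectral norm is now at most 1
```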
ryNp9bzre
Sk2iistgg
ICLR.cc/2017/conference/-/paper128/official/review
{"title": "Not clear", "rating": "3: Clear rejection", "review": "This paper considers an alternate formulation of Kernel PCA with rank constraints incorporated as a regularization term in the objective. The writing is not clear. The focus keeps shifting from estimating \u201ccausal factors\u201d, to nonlinear dimensionality reduction to Kernel PCA to ill-posed inverse problems. The problem reformulation of Kernel PCA uses somewhat standard tricks and it is not clear what are the advantages of the proposed approach over the existing methods as there is no theoretical analysis of the overall approach or empirical comparison with existing state-of-the-art. \n\n- Not sure what the authors mean by \u201ccausal factors\u201d. There is a reference to it in Abstract and in Problem formulation on page 3 without any definition/discussion.\n\n- In KPCA, I am not sure why one is interested in step (iii) outlined on page 2 of finding a pre-image for each\n\n- Authors outline two key disadvantages of the existing KPCA approach. The first one, that of low-dimensional manifold assumption not holding exactly, has received lots of attention in the machine learning literature. It is common to assume that the data lies near a low-dimensional manifold rather than on a low-dimensional manifold. Second disadvantage is somewhat unclear as finding \u201ca data point (pre-image) corresponding to each projection in the input space\u201d is not a standard step in KPCA. \n\n- On page 3, you never define $\\mathcal{X} \\times N$, $\\mathcal{Y} \\times N$, $\\mathcal{H} \\times N$. Clearly, they cannot be cartesian products. I have to assume that notation somehow implies N-tuples. \n\n- On page 3, Section 2, $\\mathcal{X}$ and $\\mathcal{Y}$ are sets. What do you mean by $\\mathcal{Y} \\ll \\mathcal{X}$\n\n- On page 5, $\\mathcal{S}^n$ is never defined. \n\n- Experiments: None of the standard algorithms for matrix completion such as OptSpace or SVT were considered \n\n- Experiments: There is no comparison with alternate existing approaches for Non-rigid structure from motion. \n\n- Proof of the main result Theorem 3.1: To get from (16) to (17) using the Holder inequality (as stated) one would end up with a term that involves sum of fourth powers of weights w_{ij}. Why would they equal to one using the orthonormal constraints? It would be useful to give more details here, as I don\u2019t see how the argument goes through at this point. ", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Non-linear Dimensionality Regularizer for Solving Inverse Problems
["Ravi Garg", "Anders Eriksson", "Ian Reid"]
Consider an ill-posed inverse problem of estimating causal factors from observations, one of which is known to lie near some (unknown) low-dimensional, non-linear manifold expressed by a predefined Mercer-kernel. Solving this problem requires simultaneous estimation of these factors and learning the low-dimensional representation for them. In this work, we introduce a novel non-linear dimensionality regularization technique for solving such problems without pre-training. We re-formulate Kernel-PCA as an energy minimization problem in which low dimensionality constraints are introduced as regularization terms in the energy. To the best of our knowledge, ours is the first attempt to create a dimensionality regularizer in the KPCA framework. Our approach relies on robustly penalizing the rank of the recovered factors directly in the implicit feature space to create their low-dimensional approximations in closed form. Our approach performs robust KPCA in the presence of missing data and noise. We demonstrate state-of-the-art results on predicting missing entries in the standard oil flow dataset. Additionally, we evaluate our method on the challenging problem of Non-Rigid Structure from Motion and our approach delivers promising results on CMU mocap dataset despite the presence of significant occlusions and noise.
["Computer vision", "Optimization", "Structured prediction"]
https://openreview.net/forum?id=Sk2iistgg
https://openreview.net/pdf?id=Sk2iistgg
https://openreview.net/forum?id=Sk2iistgg&noteId=ryNp9bzre
Under review as a conference paper at ICLR 2017NON-LINEAR DIMENSIONALITY REGULARIZER FORSOLVING INVERSE PROBLEMSRavi GargUniversity of Adelaideravi.garg@adelaide.edu.auAnders ErikssonQueensland University of Technologyanders.eriksson@qut.edu.auIan ReidUniversity of Adelaideian.reid@adelaide.edu.auABSTRACTConsider an ill-posed inverse problem of estimating causal factors from observa-tions, one of which is known to lie near some (unknown) low-dimensional, non-linear manifold expressed by a predefined Mercer-kernel. Solving this problem re-quires simultaneous estimation of these factors and learning the low-dimensionalrepresentation for them. In this work, we introduce a novel non-linear dimension-ality regularization technique for solving such problems without pre-training.We re-formulate Kernel-PCA as an energy minimization problem in which lowdimensionality constraints are introduced as regularization terms in the energy.To the best of our knowledge, ours is the first attempt to create a dimensionalityregularizer in the KPCA framework. Our approach relies on robustly penalizingthe rank of the recovered factors directly in the implicit feature space to createtheir low-dimensional approximations in closed form.Our approach performs robust KPCA in the presence of missing data and noise.We demonstrate state-of-the-art results on predicting missing entries in the stan-dard oil flow dataset. Additionally, we evaluate our method on the challengingproblem of Non-Rigid Structure from Motion and our approach delivers promis-ing results on CMU mocap dataset despite the presence of significant occlusionsand noise.1 I NTRODUCTIONDimensionality reduction techniques are widely used in data modeling, visualization and unsuper-vised learning. Principal component analysis (PCAJolliffe (2002)), Kernel PCA (KPCASch ̈olkopfet al. (1998)) and Latent Variable Models (LVMsLawrence (2005)) are some of the well knowntechniques used to create low dimensional representations of the given data while preserving itssignificant information.One key deployment of low-dimensional modeling occurs in solving ill-posed inference problems.Assuming the valid solutions to the problem lie near a low-dimensional manifold (i.e. can beparametrized with a reduced set of variables) allows for a tractable inference for otherwise under-constrained problems. After the seminal work of Cand `es & Recht (2009); Recht et al. (2010) onguaranteed rank minimization of the matrix via trace norm heuristics Fazel (2002), many ill-posedcomputer vision problems have been tackled by using the trace norm — a convex surrogate of therank function — as a regularization term in an energy minimization frameworkCand `es & Recht(2009); Zhou et al. (2014). The flexible and easy integration of low-rank priors is one of key factorsfor versatility and success of many algorithms. For example, pre-trained active appearance modelsCootes et al. (2001) or 3D morphable models Blanz & Vetter (1999) are converted to robust featuretracking Poling et al. (2014), dense registration Garg et al. (2013b) and vivid reconstructions of natu-ral videos Garg et al. (2013a) with no a priori knowledge of the scene. Various bilinear factorizationproblems like background modeling, structure from motion or photometric stereo are also addressedwith a variational formulation of the trace norm regularization Cabral et al. 
(2013).1Under review as a conference paper at ICLR 2017On the other hand, although many non-linear dimensionality reduction techniques — in particularKPCA — have been shown to outperform their linear counterparts for many data modeling tasks,they are seldom used to solve inverse problems without using a training phase. A general (discrim-inative) framework for using non-linear dimensionality reduction is: (i) learn a low-dimensionalrepresentation for the data using training examples via the kernel trick (ii) project the test exam-ples on the learned manifold and finally (iii) find a data point (pre-image) corresponding to eachprojection in the input space.This setup has two major disadvantages. Firstly, many problems of interest come with corruptedobservations — noise, missing data and outliers — which violate the low-dimensional modelingassumption.Secondly, computing the pre-image of any point in the low dimensional feature subspaceis non-trivial: the pre-image for many points in the low dimensional space might not even existbecause the non linear feature mapping function used for mapping the data from input space to thefeature space is non-surjective.Previously, extensions to KPCA like Robust KPCA (RKPCANguyen & De la Torre (2009)) andprobabilistic KPCA (PKPCASanguinetti & Lawrence (2006)) with missing data have been proposedto address the first concern, while various additional regularizers have been used to estimate thepre-image robustly Bakir et al. (2004); Mika et al. (1998); Kwok & Tsang (2004); Abrahamsen &Hansen (2009).Generative models like LVMs Lawrence (2005) are often used for inference by searching the low-dimensional latent space for a location which maximizes the likelihood of the observations. Prob-lems like segmentation, tracking and semantic 3D reconstruction Prisacariu & Reid (2011); Dameet al. (2013) greatly benefit from using LVM. However, the latent space is learned a priori with cleantraining data in all these approaches.Almost all non-linear dimensionality reduction techniques are non-trivial to generalize for solvingill-posed problems (See section 4.2) without a pre-training stage. Badly under-constrained problemsrequire the low-dimensional constraints even for finding an initial solution, eliminating applicabilityof the standard “projection + pre-image estimation” paradigm. This hinders the utility of non-linear dimensionality reduction and a suitable regularization technique to penalize the non-lineardimensionality is desirable.S1R1S2R2...Causal Factors3D shapes ( Si) and the projection matrices ( Ri) 1Wi =RiSiFigure 1: Non-linear dimensionality regularizer forNRSfM. The top part of the figure explains the ill-posedinverse problem of recovering the causal factors (1) ;projection matrices Riand 3D structures Si, from 2Dimage observations (2) Wi’s, by minimizing the imagereprojection errorf(W;R;S ) =PikWiRiSik2.Assuming that the recovered 3D structures ( Si’s) liesnear an unknown non-linear manifold (represented bythe blue curve) in the input space, we propose to regu-larize the dimensionality of this manifold (3) — span ofthe non-linearly transformed shape vectors (Si)’s —by minimizingk(S)k. 
The non-linear transformationis defined implicitly with a Mercer kernel and mapsthe non-linear manifold to a linear low rank subspace(shown in blue line) of RKHS.Sum and Substance: A closer look at mostnon-linear dimensionality reduction techniquesreveals that they rely upon a non-linear map-ping function which maps the data from in-put space to a (usually) higher dimensional fea-ture space. In this feature space the data is as-sumed to lie on a low-dimensional hyperplane— thus, linear low-rank prior is apt in the fea-ture space . Armed with this simple observa-tion, our aim is to focus on incorporating theadvances made in linear dimensionality reduc-tion techniques to their non-linear counterparts,while addressing the problems described above.Figure 1 explains this central idea and proposeddimensionality regularizer in a nutshell withNon Rigid Structure from Motion (NRSfM) asthe example application.Our Contribution: In this work we propose aunified for simultaneous robust KPCA and pre-image estimation while solving an ill-posed in-ference problem without a pre-training stage.In particular we propose a novel robust en-ergy minimization algorithm which handles theimplicitness of the feature space to directlypenalize its rank by iteratively: (i) creatingrobust low-dimensional representation for the2Under review as a conference paper at ICLR 2017data given the kernel matrix in closed form and (ii) reconstructing the noise-free version of the data(pre-image of the features space projections) using the estimated low-dimensional representationsin a unified framework.The proposed algorithm: (i) provides a novel closed form solution to robust KPCA; (ii) yields state-of-the-art results on missing data prediction for the well-known oil flow dataset; (iii) outperformsstate-of-the-art linear dimensionality (rank) regularizers to solve NRSfM; and (iv) can be triviallygeneralized to incorporate other cost functions in an energy minimization framework to solve variousill-posed inference problems.2 P ROBLEM FORMULATIONThis paper focuses on solving a generic inverse problem of recovering causal factor S=[s1; s2;sN]2XNfromNobservations W= [w1; w 2;wN]2YNsuch thatf(W;S) = 0 . Here function f(observation,variable ), is a generic loss function which aligns theobservations Wwith the variable S(possibly via other causal factors. e.g. RorZin Section 4.1and 4.2).If,f(W;S) = 0 is ill-conditioned (for example when YX ), we want to recover matrix Sunderthe assumption that the columns of it lie near a low-dimensional non-linear manifold. This can bedone by solving a constrained optimization problem of the following form:minSrank ((S))s:t: f (W;S) (1)where (S) = [(s1); (s2);; (sN)]2HNis the non-linear mapping of matrix Sfromthe input spaceXto the feature space H(also commonly referred as Reproducing Kernel HilbertSpace), via a non-linear mapping function :X!H associated with a Mercer kernel Ksuch thatK(S)i;j=(si)T(sj).In this paper we present a novel energy minimization framework to solve problems of the generalform (1).As our first contribution, we relax the problem (1) by using the trace norm of (S)— the convexsurrogate of rank function — as a penalization function. The trace norm kMk=:Pii(M)ofa matrixMis the sum of its eigenvalues i(M)and was proposed as a tight convex relaxation1oftherank (M)and is used in many vision problems as a rank regularizer Fazel (2002). 
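As a small illustration of the trace-norm surrogate just defined, the numpy sketch below (not part of the original paper; the rank-3 test matrix is made up) compares rank(M) with the nuclear norm ||M||_*, i.e. the sum of singular values, and with the spectral norm mentioned in the footnote.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a synthetic rank-3 matrix: the trace (nuclear) norm is the sum of its
# singular values and acts as a convex surrogate for its rank.
A = rng.standard_normal((50, 3))
B = rng.standard_normal((3, 40))
M = A @ B

singular_values = np.linalg.svd(M, compute_uv=False)
print("rank(M) =", np.linalg.matrix_rank(M))            # 3
print("||M||_* =", singular_values.sum())                # sum of singular values
print("||M||_s =", singular_values[0])                   # spectral norm (largest singular value)
```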
Althoughthe rank minimization via trace norm relaxation does not lead to a convex problem in presence ofa non-linear kernel function, we show in 3.2 that it leads to a closed-form solution to denoising akernel matrix via penalizing the rank of recovered data ( S) directly in the feature space.With these changes we can rewrite (1) as:minSf(W;S) +k(S)k (2)whereis a regularization strength.2It is important to notice that although the rank of the kernel matrix K(S)is equal to the rank of(S),kK(S)kis merelyk(S)k2F. Thus, directly penalizing the sum of the singular values ofK(S)will not encourage low-rank in the feature space.3Although we have relaxed the non-convex rank function, (2) is in general difficult to minimizedue to the implicitness of the feature space. Most widely used kernel functions like RBF do nothave a explicit definition of the function . Moreover, the feature space for many kernels is high-(possibly infinite-) dimensional, leading to intractability. These issues are identified as the main1More precisely,kMkwas shown to be the tight convex envelope of rank (M)=kMks, wherekMksrepresent spectral norm of M.21=can also be viewed as Lagrange multiplier to the constraints in (1).3Although it is clear that relaxing the rank of kernel matrix to kK(S)kis suboptimal, works like Huanget al. (2012); Cabral et al. (2013) with a variational definition of nuclear norm, allude to the possibility ofkernelization. Further investigation is required to compare this counterpart to our tighter relaxation.3Under review as a conference paper at ICLR 2017barriers to robust KPCA and pre-image estimation Nguyen & De la Torre (2009). Thus, we have toreformulate (2) by applying kernel trick where the cost function (2) can be expressed in terms of thekernel function alone.The key insight here is that under the assumption that kernel matrix K(S)is positive semidefinite,we can factorize it as: K(S) =CTC. Although, this factorization is non-unique, it is trivial to showthe following:pi(K(S)) =i(C) =i((S))Thus:kCk=k(S)k8C:CTC=K(S) (3)wherei(:)is the function mapping the input matrix to its ithlargest eigenvalue.The row space of matrix Cin (3) can be seen to span the eigenvectors associated with the kernelmatrixK(S)— hence the principal components of the non-linear manifold we want to estimate.Using (3), problem (2) can finally be written as:minS;Cf(W;S) +kCks:t: K (S) =CTC (4)The above minimization can be solved with a soft relaxation of the manifold constraint by assumingthat the columns of Slie near the non-linear manifold.minS;Cf(W;S) +2kK(S)CTCk2F+kCk (5)As!1 , the optimum of (5) approaches the optimum of (4) . A local optimum of (4) can beachieved using the penalty method of Nocedal & Wright (2006) by optimizing (5) while iterativelyincreasingas explained in Section 3.Before moving on, we would like to discuss some alternative interpretations of (5) and its rela-tionship to previous work – in particular LVMs. Intuitively, we can also interpret (5) from theprobabilistic viewpoint as commonly used in latent variable model based approaches to define ker-nel function Lawrence (2005). For example a RBF kernel with additive Gaussian noise and inversewidthcan be defined as: K(S)i;j=eksisjk2+, whereN (0;). In other words, witha finite, our model allows the data points to lie near a non-linear low-rank manifold instead ofon it. 
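The factorization behind Eq. (3) is easy to check numerically. The sketch below is illustrative only (the data S, the inverse kernel width gamma, and all sizes are made up): it builds an RBF kernel matrix K(S), forms C = Λ^{1/2}U^T from the eigendecomposition K = UΛU^T, and verifies that C^T C = K(S) and that ||C||_* equals the sum of the square roots of the eigenvalues of K(S), which is the quantity penalized in Eq. (5).

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((5, 100))       # 100 column vectors s_i of dimension 5 (made up)
gamma = 0.5                             # inverse RBF kernel width (made up)

# K(S)_ij = exp(-gamma * ||s_i - s_j||^2)
sq_dists = ((S.T[:, None, :] - S.T[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq_dists)

# K is PSD, so K = U diag(lam) U^T with lam >= 0; choosing C = diag(sqrt(lam)) U^T
# gives C^T C = K and singular values sqrt(lam_i), i.e. ||C||_* = sum_i sqrt(lam_i).
lam, U = np.linalg.eigh(K)
lam = np.clip(lam, 0.0, None)           # guard against tiny negative eigenvalues
C = np.diag(np.sqrt(lam)) @ U.T

print(np.allclose(C.T @ C, K))                       # True
print(np.linalg.svd(C, compute_uv=False).sum())      # ||C||_* = ||Phi(S)||_*
print(np.sqrt(lam).sum())                            # same value, from the eigenvalues of K
```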
Its worth noting here that like LVMs, our energy formulation also attempts to maximize thelikelihood of regenerating the training data W, (by choosing f(W;S)to be a simple least squarescost) while doing dimensionality reduction.Note that in closely related work Geiger et al. (2009), continuous rank penalization (with a loga-rithmic prior) has also been used for robust probabilistic non-linear dimensionality reduction andmodel selection in LVM framework. However, unlike Geiger et al. (2009); Lawrence (2005) wherethe non-linearities are modeled in latent space (of predefined dimensionality), our approach directlypenalizes the non-linear dimensionality of data in a KPCA framework and is applicable to solveinverse problems without pre-training.3 O PTIMIZATIONWe approach the optimization of (5) by solving the following two sub-problems in alternation:minSf(W;S) +2kK(S)CTCk2F (6)minCkCk+2kK(S)CTCk2F (7)Algorithm 1 outlines the approach and we give a detailed description and interpretations of bothsub-problems (7) and (6) in next two sections of the paper.3.1 P RE-IMAGE ESTIMATION TO SOLVE INVERSE PROBLEM .Subproblem (6) can be seen as a generalized pre-image estimation problem: we seek the factor si,which is the pre-image of the projection of (si)onto the principle subspace of the RKHS stored in4Under review as a conference paper at ICLR 2017Algorithm 1: Inference with Proposed Regularizer.Input : Initial estimate S0ofS.Output : Low-dimensional Sand kernel representation C.Parameters : Initial0and maximum max penalty, with scale s.-S=S0;=0;whilemaxdowhile not converged do- FixSand estimate Cvia closed-form solution of (7) using Algorithm 2;- FixCand minimize (6) to update Susing LM algorithm;-=s;CTC, which best explains the observation wi. Here (6) is generally a non-convex problem, unlessthe Mercer-kernel is linear, and must therefore be solved using non-linear optimization techniques.In this work, we use the Levenberg-Marquardt algorithm for optimizing (6).Notice that (6) only computes the pre-image for the feature space projections of the data points withwhich the non-linear manifold (matrix C) is learned. An extension to our formulation is desirableif one wants to use the learned non-linear manifold for denoising test data in a classic pre-imageestimation framework. Although a valuable direction to pursue, it is out of scope of the presentpaper.3.2 R OBUST DIMENSIONALITY REDUCTIONAlgorithm 2: Robust Dimensionality Reduction.Input : Current estimate of S.Output : Low-dimensional representation C.Parameters : Currentand regularization strength .-[UUT]= Singular Value Decomposition of K(S);//is a diagonal matrix, storing Nsingular values iofK(S).fori= 1toNdo- Find three solutions ( lr:r2f1;2;3g) of:l3li+2= 0;- setl4= 0;-lr= max(lr;0)8r2f1;2;3;4g;-r=argminrf2kil2rk2+lrg;-i=lr;-C=UT;//is diagonal matrix storing i.One can interpret sub-problem (7) as a robustform of KPCA where the kernel matrix hasbeen corrupted with Gaussian noise and wewant to generate its low-rank approximation.Although (7) is non-convex we can solve it inclosed-form via singular value decomposition.This closed-form solution is outlined in Algo-rithm 2 and is based on the following theorem:Theorem 1. WithSn3A0letA=UUTdenote its singular value decomposition. 
ThenminL2jjALTLjj2F+jjLjj (8)=nXi=12(i2i)2+i:(9)A minimizer Lof(8)is given byL= UT(10)with2Dn+,i2f2R+jpi;=2() = 0gSf0g, wherepa;bdenotes the depressed cubicpa;b(x) =x3ax+b.Dn+is the set of n-by-n diagonal matrices with non-negative entries.Theorem 1 shows that each eigenvalue of the minimizer Cof (7) can be obtained by solving adepressed cubic whose coefficients are determined by the corresponding eigenvalue of the kernelmatrix and the regularization strength . The roots of each cubic, together with zero, comprise aset of candidates for the corresponding eigenvalue of C. The best one from this set is obtained bychoosing the value which minimizes (9) (see Algorithm 2).As elaborated in Section 2, problem (7) can be seen as regularizing sum of square root ( L1=2norm)of the eigenvalues of the matrix K(S). In a closely related work Zongben et al. (2012), authorsadvocateL1=2norm as a better approximation for the cardinality of a vector then the more commonlyusedL1norm. A closed form solution for L1=2regularization similar to our work was outlined inZongben et al. (2012) and was shown to outperform the L1vector norm regularization for sparsecoding. To that end, our Theorem 1 and the proposed closed form solution (Algo 2) for (7) can5Under review as a conference paper at ICLR 2017Table 1: Performance comparison on missing data completion on Oil Flow Dataset: Row 1 shows the amountof missing data and subsequent rows show the mean and standard deviation of the error in recovered datamatrix over 50 runs on 100 samples of oil flow dataset by: (1) The mean method (also the initialization ofother methods) where the missing entries are replaced by the mean of the known values of the correspondingattributes, (2) 1-nearest neighbor method in which missing entries are filled by the values of the nearest point,(3) PPCA Tipping & Bishop (1999), (4) PKPCA of Sanguinetti & Lawrence (2006), (5)RKPCA Nguyen & Dela Torre (2009) and our method.p(del) 0.05 0.10 0.25 0.50mean 134 284 709 13971-NN 53 1459020 NAPPCA 3.7.6 92 5010 14030PKPCA 51 123 326 10020RKPCA 3.21.9 84 278 8315Ours 2.32 63 227 7011be seen as generalization of Zongben et al. (2012) to include the L1=2matrix norms for which asimplified proof is included in the Appendix A. It is important to note however, that the motivationand implication of using L1=2regularization in the context of non-linear dimensionality reductionare significantly different to that of Zongben et al. (2012) and related work Du et al. (2013); Zhaoet al. (2014) which are designed for linear modeling of the causal factors. The core insight of usingL1regularization in the feature space via the parametrization given in 3 facilitates a natural way fornon-linear modeling of causal factors with low dimensionality while solving an inverse problem bymaking feature space tractable.4 E XPERIMENTSIn this section we demonstrate the utility of the proposed algorithm. The aims of our experiments aretwofold: (i) to compare our dimensionality reduction technique favorably with KPCA and its robustvariants; and (ii) to demonstrate that the proposed non-linear dimensionality regularizer consistentlyoutperforms its linear counterpart (a.k.a. nuclear norm) in solving inverse problems.4.1 M ATRIX COMPLETIONThe nuclear norm has been introduced as a low rank prior originally for solving the matrix comple-tion problem. Thus, it is natural to evaluate its non-linear extensions on the same task. 
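The per-eigenvalue closed form of Algorithm 2 / Theorem 1 above amounts to picking the best root of a depressed cubic for every eigenvalue of K(S). A minimal numpy sketch is given below; the cubic l^3 - sigma_i*l + tau/(2*rho) = 0 follows from differentiating rho/2*(sigma_i - l^2)^2 + tau*l, the function name and the random test kernel are made up, and tau, rho play the same roles as in Eqs. (5)-(7).

```python
import numpy as np

def robust_kpca_closed_form(K, tau, rho):
    """Closed-form minimizer of  rho/2 * ||K - C^T C||_F^2 + tau * ||C||_*  (Algorithm 2):
    one depressed cubic per eigenvalue of K."""
    sigma, U = np.linalg.eigh(K)              # K = U diag(sigma) U^T
    lam = np.zeros_like(sigma)
    for i, s in enumerate(sigma):
        # Stationary points of rho/2*(s - l^2)^2 + tau*l solve l^3 - s*l + tau/(2*rho) = 0.
        roots = np.roots([1.0, 0.0, -s, tau / (2.0 * rho)])
        cands = np.concatenate([roots[np.isreal(roots)].real, [0.0]])
        cands = np.clip(cands, 0.0, None)     # keep only non-negative candidates, plus zero
        obj = 0.5 * rho * (s - cands ** 2) ** 2 + tau * cands
        lam[i] = cands[np.argmin(obj)]
    return np.diag(lam) @ U.T                 # C such that C^T C approximates K

# Made-up noisy PSD "kernel" matrix, purely to exercise the routine.
rng = np.random.default_rng(2)
X = rng.standard_normal((30, 30))
K = X @ X.T / 30.0
C = robust_kpca_closed_form(K, tau=0.1, rho=10.0)

print(np.linalg.svd(C, compute_uv=False).sum())                  # ||C||_* after shrinkage
print(np.sqrt(np.clip(np.linalg.eigvalsh(K), 0.0, None)).sum())  # sum_i sqrt(sigma_i(K)), larger
```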
AssumingW2Rmnto be the input matrix and Za binary matrix specifying the availability of the observa-tions inW, Algorithm 1 can be used for recovering a complete matrix Swith the following choiceoff(W;Z;S ):f(W;Z;S ) =kZ(WS)k2F (11)whererepresents Hadamard product.To demonstrate the robustness of our algorithm for matrix completion problem, we choose 100training samples from the oil flow dataset described in section 3.2 and randomly remove the elementsfrom the data with varying range of probabilities to test the performance of the proposed algorithmagainst various baselines. Following the experimental setup as specified in Sanguinetti & Lawrence(2006), we repeat the experiments with 50 different samples of Z. We report the mean and standarddeviation of the root mean square reconstruction error for our method with the choice of = 0:1,alongside five different methods in Table 1. Our method significantly improves the performance ofmissing data completion compared to other robust extensions of KPCA Tipping & Bishop (1999);Sanguinetti & Lawrence (2006); Nguyen & De la Torre (2009), for every probability of missingdata.Although we restrict our experiments to least-squares cost functions, it is vital to restate here thatour framework could trivially incorporate robust functions like the L1norm instead of the Frobeniusnorm — as a robust data term f(W;Z;S )— to generalize algorithms like Robust PCA Wright et al.(2009) to their non-linear counterparts.4.2 K ERNEL NON -RIGID STRUCTURE FROM MOTION6Under review as a conference paper at ICLR 2017Figure 2: Non-linear dimensionality regular-isation improves NRSfM performance com-pared to its linear counterpart. Figure showsthe ground truth 3D structures in red wire-frameoverlaid with the structures estimated using: (a)proposed non-linear dimensionality regularizershown in blue dots and (b) corresponding lin-ear dimensionality regularizer (TNH) shown inblack crosses, for sample frames of CMU mo-cap sequence. Red circles represent the 3D pointsfor which the projections were known whereassquares annotated missing 2D observations. Seetext and Table 2 for details.Non-rigid structure from motion under orthographyis an ill-posed problem where the goal is to esti-mate the camera locations and 3D structure of a de-formable objects from a collection of 2D imageswhich are labeled with landmark correspondencesBregler et al. (2000). Assuming si(xj)2R3tobe the 3D location of point xjon the deformableobject in the ithimage, its orthographic projectionwi(xj)2R2can be written as wi(x) =Risi(xj),whereRi2R23is a orthographic projection ma-trix Bregler et al. (2000). Notice that as the objectdeforms, even with given camera poses, reconstruct-ing the sequence by least-squares reprojection errorminimization is an ill-posed problem. In their semi-nal work, Bregler et al. (2000) proposed to solve thisproblem with an additional assumption that the re-constructed shapes lie on a low-dimensional linearsubspace and can be parameterized as linear combi-nations of a relatively low number of basis shapes.NRSfM was then cast as the low-rank factorizationproblem of estimating these basis shapes and corre-sponding coefficients.Recent work, like Dai et al. (2014); Garg et al.(2013a) have shown that the trace norm regularizercan be used as a convex envelope of the low-rankprior to robustly address ill-posed nature of the prob-lem. 
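The orthographic measurement model just described, together with the visibility mask Z used throughout, can be sketched in a few lines of numpy. The snippet below is illustrative only (cameras, shapes, sizes, and the mask are random placeholders): it evaluates the Z-masked reprojection data term sum_ij Z_i(x_j) ||w_i(x_j) - R_i s_i(x_j)||^2. The same masked least-squares pattern, with R_i dropped, is the matrix-completion data term of Eq. (11).

```python
import numpy as np

rng = np.random.default_rng(3)
F, N = 10, 41                                 # frames and tracked points (sizes made up)

# Random orthographic cameras: each R_i is 2x3 with orthonormal rows.
R = np.array([np.linalg.qr(rng.standard_normal((3, 3)))[0][:2] for _ in range(F)])
S = rng.standard_normal((F, 3, N))            # per-frame 3D shapes S_i
Z = (rng.random((F, N)) > 0.5).astype(float)  # visibility mask of the 2D tracks

W = np.einsum("fij,fjn->fin", R, S)           # noise-free projections w_i = R_i S_i

def reprojection_error(W, R, S, Z):
    """Masked data term: sum_i sum_j Z_i(x_j) * ||w_i(x_j) - R_i s_i(x_j)||^2."""
    residual = W - np.einsum("fij,fjn->fin", R, S)
    return np.sum(Z[:, None, :] * residual ** 2)

print(reprojection_error(W, R, S, Z))         # 0.0 for this noise-free synthetic data
```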
A good solution to NRSfM can be achieved byoptimizing:minS;RkSk+FXi=1NXj=1Zi(xj)kwi(xj)Risi(xj)k2F (12)whereSis the shape matrix whose columns are 3Ndimensional vectors storing the 3D coordinatesSi(xj)of the shapes and Zi(xj)is a binary variable indicating if projection of point xjis availablein the image i.Assuming the projection matrices to be fixed, this problem is convex and can be exactly solvedwith standard convex optimization methods. Additionally, if the 2D projections wi(xj)are noisefree, optimizing (12) with very small corresponds to selecting the the solution — out of the manysolutions — with (almost) zero projection error, which has minimum trace norm Dai et al. (2014).Thus henceforth, optimization of (12) is referred as the trace norm heuristics (TNH). We solve thisproblem with a first order primal-dual variant of the algorithm given in Garg et al. (2013a), whichcan handle missing data. The algorithm is detailed and compared favorably with the state of the artNRSfM approaches (based on linear dimensionality regularization) Appendix C.A simple kernel extension of the above optimization problem is:minS;Rk(S)k+FXi=1NXj=1Zi(xj)kwi(xj)Risi(xj)k2F| {z }f(W;Z;R;S )(13)where (S)is the non-linear mapping of Sto the feature space using an RBF kernel.With fixed projection matrices R, (13) is of the general form (2), for which the local optima can befound using Algorithm 1.7Under review as a conference paper at ICLR 2017Table 2: 3D reconstruction errors for linear and non-linear dimensionality regularization with ground truthcamera poses. Column 1 and 4 gives gives error for TNH while column (2-3) and (5-6) gives the correspondingerror for proposed method with different width of RBF kernel. Row 5 reports the mean error over 4 sequences.DatasetNo Missing Data 50% Missing DataLinear Non-Linear Linear Non-Lineardmaxdmed dmaxdmedDrink 0.0227 0.0114 0.0083 0.0313 0.0248 0.0229Pickup 0.0487 0.0312 0.0279 0.0936 0.0709 0.0658Yoga 0.0344 0.0257 0.0276 0.0828 0.0611 0.0612Stretch 0.0418 0.0286 0.0271 0.0911 0.0694 0.0705Mean 0.0369 0.0242 0.0227 0.0747 0.0565 0.05514.2.1 R ESULTS ON THE CMU DATASETWe use a sub-sampled version of CMU mocap dataset by selecting every 10thframe of the smoothlydeforming human body consisting 41 mocap points used in Dai et al. (2014).4In our experiments we use ground truth camera projection matrices to compare our algorithm againstTNH. The advantage of this setup is that with ground-truth rotation and no noise, we can avoid themodel selection (finding optimal regularization strength ) by setting it low enough. We run theTNH with= 107and use this reconstruction as initialization for Algorithm 1. For the proposedmethod, we set = 104and use following RBF kernel width selection approach:Maximum distance criterion ( dmax): we set the maximum distance in the feature space tobe3. Thus, the kernel matrix entry corresponding to the shape pairs obtained by TNHwith maximum Euclidean distance becomes e9=2.Median distance criterion ( dmed): the kernel matrix entry corresponding to the medianeuclidean distance is set to 0.5.Following the standard protocol in Dai et al. (2014); Akhter et al. 
(2009), we quantify the recon-struction results with normalized mean 3D errors e3D=1FNPiPjeij, whereeijis the euclideandistance of a reconstructed point jin frameifrom the ground truth, is the mean of standard devi-ation for 3 coordinates for the ground truth 3D structures, and F;N are number of input images andnumber of points reconstructed.Table 2 shows the results of the TNH and non-linear dimensionality regularization based methodsusing the experimental setup explained above, both without missing data and after randomly remov-ing 50% of the image measurements. Our method consistently beats the TNH baseline and improvesthe mean reconstruction error by 40% with full data and by 25% when used with 50% miss-ing data. Figure 2 shows qualitative comparison of the obtained 3D reconstruction using TNH andproposed non-lienar dimensionality regularization technique for some sample frames from varioussequences. We refer readers to Appendix B for results with simultaneous reconstruction pose opti-mization.5 C ONCLUSIONIn this paper we have introduced a novel non-linear dimensionality regularizer which can be incor-porated into an energy minimization framework, while solving an inverse problem. The proposedalgorithm for penalizing the rank of the data in the feature space has been shown to be robust to noiseand missing observations. We have picked NRSfM as an application to substantiate our argumentsand have shown that despite missing data and model noise (such as erroneous camera poses) ouralgorithm significantly outperforms state-of-the-art linear counterparts.Although our algorithm currently uses slow solvers such as the penalty method and is not directlyscalable to very large problems like dense non-rigid reconstruction, we are actively consideringalternatives to overcome these limitations. An extension to estimate pre-images with a problem-4Since our main goal is to validate the usefulness of the proposed non-linear dimensionality regularizer, weopt for a reduced size dataset for more rapid and flexible evaluation.8Under review as a conference paper at ICLR 2017specific loss function is possible, and this will be useful for online inference with pre-learned low-dimensional manifolds.Given the success of non-linear dimensionality reduction in modeling real data and overwhelminguse of the linear dimensionality regularizers in solving real world problems, we expect that pro-posed non-linear dimensionality regularizer will be applicable to a wide variety of unsupervisedinference problems: recommender systems; 3D reconstruction; denoising; shape prior based objectsegmentation; and tracking are all possible applications.REFERENCESTrine Julie Abrahamsen and Lars Kai Hansen. Input space regularization stabilizes pre-images forkernel pca de-noising. In EEE International Workshop on Machine Learning for Signal Process-ing, pp. 1–6, 2009.Ijaz Akhter, Yaser Sheikh, Sohaib Khan, and Takeo Kanade. Nonrigid structure from motion intrajectory space. In Advances in neural information processing systems , pp. 41–48, 2009.Gokhan H Bakir, Jason Weston, and Bernhard Sch ̈olkopf. Learning to find pre-images. Advances inneural information processing systems , 16(7):449–456, 2004.Christopher M Bishop and Gwilym D James. Analysis of multiphase flows using dual-energygamma densitometry and neural networks. Nuclear Instruments and Methods in Physics ResearchSection A: Accelerators, Spectrometers, Detectors and Associated Equipment , 327(2):580–593,1993.V olker Blanz and Thomas Vetter. 
A morphable model for the synthesis of 3d faces. In 26th annualconference on Computer graphics and interactive techniques , pp. 187–194, 1999.Christoph Bregler, Aaron Hertzmann, and Henning Biermann. Recovering non-rigid 3d shape fromimage streams. In IEEE Conference on Computer Vision and Pattern Recognition , pp. 690–696,2000.R. Cabral, F. De la Torre, J. P. Costeira, and A. Bernardino. Unifying nuclear norm and bilinearfactorization approaches for low-rank matrix decomposition. In International Conference onComputer Vision (ICCV) , 2013.Emmanuel J Cand `es and Benjamin Recht. Exact matrix completion via convex optimization. Foun-dations of Computational mathematics , 9(6):717–772, 2009.Antonin Chambolle and Thomas Pock. A first-order primal-dual algorithm for convex problems withapplications to imaging. Journal of Mathematical Imaging and Vision , 40(1):120–145, 2011.Timothy F Cootes, Gareth J Edwards, and Christopher J Taylor. Active appearance models. IEEETransactions on pattern analysis and machine intelligence , 23(6):681–685, 2001.Yuchao Dai, Hongdong Li, and Mingyi He. A simple prior-free method for non-rigid structure-from-motion factorization. International Journal of Computer Vision , 107(2):101–122, 2014.Amaury Dame, Victor Adrian Prisacariu, Carl Yuheng Ren, and Ian Reid. Dense reconstructionusing 3d object shape priors. In Computer Vision and Pattern Recognition , pp. 1288–1295. IEEE,2013.Rong Du, Cailian Chen, Zhiyi Zhou, and Xinping Guan. L 1/2-based iterative matrix completion fordata transmission in lossy environment. In Computer Communications Workshops (INFOCOMWKSHPS), 2013 IEEE Conference on , pp. 65–66. IEEE, 2013.Maryam Fazel. Matrix rank minimization with applications . PhD thesis, Stanford University, 2002.Ravi Garg, Anastasios Roussos, and Lourdes Agapito. Dense variational reconstruction of non-rigidsurfaces from monocular video. In Computer Vision and Pattern Recognition , pp. 1272–1279,2013a.9Under review as a conference paper at ICLR 2017Ravi Garg, Anastasios Roussos, and Lourdes Agapito. A variational approach to video registrationwith subspace constraints. International journal of computer vision , 104(3):286–314, 2013b.Andreas Geiger, Raquel Urtasun, and Trevor Darrell. Rank priors for continuous non-linear dimen-sionality reduction. In Computer Vision and Pattern Recognition , pp. 880–887. IEEE, 2009.Paulo FU Gotardo and Aleix M Martinez. Kernel non-rigid structure from motion. In IEEE Inter-national Conference on Computer Vision , pp. 802–809, 2011a.Paulo FU Gotardo and Aleix M Martinez. Non-rigid structure from motion with complementaryrank-3 spaces. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on ,pp. 3065–3072. IEEE, 2011b.Dong Huang, Ricardo Silveira Cabral, and Fernando De la Torre. Robust regression. In EuropeanConference on Computer Vision (ECCV) , 2012.Ian Jolliffe. Principal component analysis . Wiley Online Library, 2002.JT-Y Kwok and Ivor W Tsang. The pre-image problem in kernel methods. IEEE Transactions onNeural Networks, , 15(6):1517–1525, 2004.Neil D Lawrence. Probabilistic non-linear principal component analysis with gaussian process latentvariable models. The Journal of Machine Learning Research , 6:1783–1816, 2005.Sebastian Mika, Bernhard Sch ̈olkopf, Alex J Smola, Klaus-Robert M ̈uller, Matthias Scholz, andGunnar R ̈atsch. Kernel pca and de-noising in feature spaces. In NIPS , volume 4, pp. 7, 1998.Minh Hoai Nguyen and Fernando De la Torre. Robust kernel principal component analysis. 
InAdvances in Neural Information Processing Systems . 2009.Jorge Nocedal and Stephen J. Wright. Numerical optimization . Springer, New York, 2006.Bryan Poling, Gilad Lerman, and Arthur Szlam. Better feature tracking through subspace con-straints. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on , pp.3454–3461. IEEE, 2014.Victor Adrian Prisacariu and Ian Reid. Nonlinear shape manifolds as shape priors in level set seg-mentation and tracking. In Computer Vision and Pattern Recognition , pp. 2185–2192. IEEE,2011.Benjamin Recht, Maryam Fazel, and Pablo A Parrilo. Guaranteed minimum-rank solutions of linearmatrix equations via nuclear norm minimization. SIAM review , 52(3):471–501, 2010.Ralph Tyrrell Rockafellar. Conjugate duality and optimization , volume 14. SIAM, 1974.Guido Sanguinetti and Neil D Lawrence. Missing data in kernel pca. In Machine Learning: ECML2006 , pp. 751–758. Springer, 2006.Bernhard Sch ̈olkopf, Alexander Smola, and Klaus-Robert M ̈uller. Nonlinear component analysis asa kernel eigenvalue problem. Neural computation , 10(5):1299–1319, 1998.Michael E Tipping and Christopher M Bishop. Probabilistic principal component analysis. Journalof the Royal Statistical Society: Series B (Statistical Methodology) , 61(3):611–622, 1999.John Wright, Arvind Ganesh, Shankar Rao, Yigang Peng, and Yi Ma. Robust principal componentanalysis: Exact recovery of corrupted low-rank matrices via convex optimization. In Advances inNeural Information Processing Systems , pp. 2080–2088. 2009.Qian Zhao, DeYu Meng, and ZongBen Xu. Robust sparse principal component analysis. ScienceChina Information Sciences , 57(9):1–14, 2014.Xiaowei Zhou, Can Yang, Hongyu Zhao, and Weichuan Yu. Low-rank modeling and its applicationsin image analysis. ACM Computing Surveys (CSUR) , 47(2):36, 2014.Xu Zongben, Chang Xiangyu, Xu Fengmin, and Zhang Hai. L1/2 regularization: a thresholdingrepresentation theory and a fast solver. IEEE Transactions on neural networks and learningsystems , 23(7):1013–1027, 2012.10Under review as a conference paper at ICLR 2017A P ROOF OF THEOREM 3.1Proof. We will prove theorem 1 by first establishing a lower bound for (8) and subsequently showing that thislower bound is obtained at Lgiven by (10). The rotational invariance of the entering norms allows us to write(8) as:min2Dn;WTW=I2jjW2WTjj2F+jjjj: (14)Expanding (14) we obtain2min;Wtr22 trW2WT+ tr4+2nXi=1i (15)=2min;WnXi=12i+4i+2i2nXi=1nXj=1w2ij2ji (16)2minnXi=12i22ii+4i+2i(17)=2nXi=1mini02i22ii+4i+2i(18)The inequality in (17) follows directly by applying H ̈older’s inequality to (16) and using the property that thecolumn vectors wiare unitary.Next, withL= UTin (8) we have2jjALTLjj2F+jjLjj=2jj2jj2F+jjjj=nXi=12(i2i)2+i: (19)Finally, since the subproblems in (18) are separable in i, its minimizer must be KKT-points of the individualsubproblems. As the constraints are simple non-negativity constraints, these KKT points are either (positive)stationary points of the objective functions or 0. It is simple to verify that the stationary points are given by theroots of the cubic function pi;=2. Hence it follows that there exists a isuch that22i22ii+4i+2i2(i2i)2+i; (20)8i0, which completes the proof.A.1 V ALIDATING THE CLOSED FORM SOLUTIONGiven the relaxations proposed in Section 2, our assertion that the novel trace regularization basednon-linear dimensionality reduction is robust need to be substantiated. 
To that end, we evaluate ourclosed-form solution of Algorithm 2 on the standard oil flow dataset introduced in Bishop & James(1993).This dataset comprises 1000 training and 1000 testing data samples, each of which is of 12 dimen-sions and categorized into one of three different classes. We add zero mean Gaussian noise withvarianceto the training data5and recover the low-dimensional manifold for this noisy trainingdataSwith KPCA and contrast this with the results from Algorithm 2. An inverse width of theGaussian kernel = 0:075is used for all the experiments on the oil flow dataset.It is important to note that in this experiment, we only estimate the principal components (and theirvariances) that explain the estimated non-linear manifold, i.e. matrix Cby Algorithm 2, withoutreconstructing the denoised version of the corrupted data samples.Both KPCA and our solution require model selection (choice of rank and respectively) whichis beyond the scope of this paper. Here we resort to evaluate the performance of both methodsunder different parameters settings. To quantify the accuracy of the recovered manifold ( C) we usefollowing criteria:5Note that our formulation assumes Gaussian noise in K(S)where as for this evaluation we add noise to Sdirectly.11Under review as a conference paper at ICLR 2017Table 3: Robust dimensionality reduction accuracy by KPCA versus our closed-form solution on the full oilflow dataset. Columns from left to right represent: (1) standard deviation of the noise in training samples (2-3)Error in the estimated low-dimensional kernel matrix by (2) KPCA and (3) our closed-form solution, (4-5)Nearest neighbor classification error of test data using (4) KPCA and (5) our closed-form solution respectively.Manifold Error Classification ErrorSTD KPCA Our CFS KPCA Our CFS.2 0.1099 0.1068 9.60% 9.60%.3 0.2298 0.2184 19.90% 15.70 %.4 0.3522 0.3339 40.10% 22.20 %0 2 4 6 810 12 14 160.10.150.20.250.30.350.4Rank of kernel matrixManifold error KPCA,σ=.2Ours,σ=.2KPCA,σ=.3Ours,σ=.3KPCA,σ=.4Ours,σ=.4Figure 3: Performance comparison between KPCA and our Robust closed-form solution with dimensionalityregularization on oil flow dataset with additive Gaussian noise of standard deviation . Plots show the normal-ized kernel matrix errors with different rank of the model. Kernel PCA results are shown in dotted line withdiamond while ours are with solid line with a star. Bar-plot show the worst and the best errors obtained by ourmethod for a single rank of recovered kernel matrix.Manifold Error : A good manifold should preserve maximum variance of the data — i.e.it should be able to generate a denoised version K(Sest) =CTCof the noisy kernelmatrixK(S). We define the manifold estimation error as kK(Sest)K(SGT)k2F, whereK(SGT)is the kernel matrix derived using noise free data. Figure 3 shows the manifoldestimation error for KPCA and our method for different rank and parameter respectively.6Classification error: The accuracy of a non-linear manifold is often also tested by the near-est neighbor classification accuracy. 
We select the estimated manifold which gives mini-mum Manifold Error for both the methods and report 1NN classification error (percentageof misclassified example) of the 1000 test points by projecting them onto estimated mani-folds.B K ERNEL NRS FMWITH CAMERA POSE ESTIMATIONExtended from section 4.2Table 4 shows the reconstruction performance on a more realistic experimental setup, with the mod-ification that the camera projection matrices are initialized with rigid factorization and were refinedwith the shapes by optimizing (2). To solve NRSfM problem with unknown projection matrices,we parameterize each Riwith quaternions and alternate between refining the 3D shapes Sand pro-jection matrices Rusing LM. The regularization strength was selected for the TNH method bygolden section search and parabolic interpolation for every test case independently. This ensures thebest possible performance for the baseline. For our proposed approach was kept to 104for allsequences for both missing data and full data NRSfM. This experimental protocol somewhat disad-vantages the non-linear method, since its performance can be further improved by a judicious choiceof the regularization strength.6Errors from non-noisy kernel matrix can be replaced by cross validating the entries of the kernel matrix formodel selection for more realistic experiment.12Under review as a conference paper at ICLR 2017Table 4: 3D reconstruction errors for linear and non-linear dimensionality regularization with noisy camerapose initialization from rigid factorization and refined in alternation with shape. The format is same as Table 2.DatasetNo Missing Data 50% Missing DataLinear Non-Linear Linear Non-Linear== 104== 104dmaxdmed dmaxdmedDrink 0.0947 0.0926 0.0906 0.0957 0.0942 0.0937Pickup 0.1282 0.1071 0.1059 0.1598 0.1354 0.1339Yoga 0.2912 0.2683 0.2639 0.2821 0.2455 0.2457Stretch 0.1094 0.1043 0.1031 0.1398 0.1459 0.1484Mean 0.1559 0.1430 0.1409 0.1694 0.1552 0.1554However our purpose is primarily to show that the non-linear method adds value even without time-consuming per-sequence tuning. To that end, note that despite large errors in the camera pose esti-mations by TNH and 50% missing measurements, the proposed method shows significant ( 10%)improvements in terms of reconstruction errors, proving our broader claims that non-linear repre-sentations are better suited for modeling real data, and that our robust dimensionality regularizer canimprove inference for ill-posed problems.As suggested by Dai et al. (2014), robust camera pose initialization is beneficial for the structure es-timation. We have used rigid factorization for initializing camera poses here but this can be triviallychanged. We hope that further improvements can be made by choosing better kernel functions, withcross validation based model selection (value of ) and with a more appropriate tuning of kernelwidth. Selecting a suitable kernel and its parameters is crucial for success of kernelized algorithms.It becomes more challenging when no training data is available. We hope to explore other kernelfunctions and parameter selection criteria in our future work.We would also like to contrast our work with Gotardo & Martinez (2011a), which is the only workwe are aware of where non-linear dimensionality reduction is attempted for NRSfM. 
While esti-mating the shapes lying on a two dimensional non-linear manifold, Gotardo & Martinez (2011a)additionally assumes smooth 3D trajectories (parametrized with a low frequency DCT basis) and apre-defined hard linear rank constraint on 3D shapes. The method relies on sparse approximation ofthe kernel matrix as a proxy for dimensionality reduction. The reported results were hard to replicateunder our experimental setup for a fair comparison due to non-smooth deformations. However, incontrast to Gotardo & Martinez (2011a), our algorithm is applicable in a more general setup, canbe modified to incorporate smoothness priors and robust data terms but more importantly, is flexibleto integrate with a wide range of energy minimization formulations leading to a larger applicabilitybeyond NRSfM.C TNH ALGORITHM FOR NRS FMIn section 4.2, we have compared the proposed non-linear dimensionality reduction prior against avariant of Garg et al. (2013a) which handles missing data by optimizing:minS;RkSk+FXi=1NXj=1Zi(xj)kwi(xj)Risi(xj)k2(21)This problem is convex in Sgiven noise free projection matrix Ri’s but non-differentiable. Tooptimize (21), we first rewrite it in its primal-dual form by dualizing the trace norm7:maxQminS;R <S;Q> +FXi=1NXj=1Zi(xj)kwi(xj)Risi(xj)k2s:t:kQks1 (22)whereQ2RXNstores the dual variables to Sandk:ksrepresent spectral norm (highest eigen-value) of a matrix.7For more details on primal dual formulation and dual norm of the trace norm see Rockafellar (1974); Rechtet al. (2010); Chambolle & Pock (2011).13Under review as a conference paper at ICLR 2017Algorithm 3: Trace norm Heuristics.Input : Initial estimates S0;R0ofSandR.Output : Low-dimensional Sand camera poses R.Parameters : Regularization strength , measurements Wand binary mask Z.-S=S0; R=R0;// set iteration count step size and duals Q-= 0;-= 1=;-Q= 0;while not converged do// projection matrix estimation- FixS;Q and refineRifor every image iwith LM;// steepest descend update for Sijfor each point xjand each frame ifori= 1toFdoforj= 1toNdo-S+1ij=I22+(ZijRTiRi)1(SijQij+RTi(Zijwij));// accelerated steepest ascend update for Q-Q=Q+(2Sn+1Sn);-UDVT= singular value decomposition of Q;-D= min(D;1);-Q+1=UDVT;// Go to next iteration-=+ 1Table 5: 3D reconstruction errors for different NRSfM approaches and our TNH Algorithm given ground truthcamera projection matrices. Results for all the methods (except TNH) are taken from Dai et al. (2014).Dataset PTAAkhter et al. (2009) CSF2Gotardo & Martinez (2011b) BMMDai et al. (2014) TNHDrink 0.0229 0.0215 0.0238 0.0237Pick-up 0.0992 0.0814 0.0497 0.0482Yoga 0.0580 0.0371 0.0334 0.0333Stretch 0.0822 0.0442 0.0456 0.0431We choose quaternions to perametrize the 23camera matrices Rito satisfy orthonormality con-straints as done in Garg et al. (2013a) and optimize the saddle point problem (22) using alternation.In particular, for a single iteration: (i) we optimize the camera poses Ri’s using LM, (ii) take asteepest descend step for updating Sand (ii) a steepest ascend step for updating Qwhich is fol-lowed by projecting its spectral norm to unit ball. Given ground truth camera matrices ( withoutstep (i)), alternation (ii-iii) can be shown to reach global minima of (22). Algorithm 3 outlines TNHalgorithm.As the main manuscript uses NRSfM only as a practical application of our non-linear dimension-ality reduction prior, we have restricted our NRSfM experiments to only compare the proposedmethod against its linear counterpart. 
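The dual update in Algorithm 3 above ends by projecting Q back onto the unit spectral-norm ball. A minimal numpy sketch of that projection step is shown below (illustrative only; the function name and the random stand-in for Q are made up): the singular values of Q are simply clamped at 1.

```python
import numpy as np

def project_spectral_ball(Q):
    """Project Q onto {Q : ||Q||_s <= 1} by clamping its singular values at 1,
    as in the dual ascent step of Algorithm 3."""
    U, d, Vt = np.linalg.svd(Q, full_matrices=False)
    return U @ np.diag(np.minimum(d, 1.0)) @ Vt

rng = np.random.default_rng(4)
Q = rng.standard_normal((9, 40))        # stand-in for the dual variable of S
Q_proj = project_spectral_ball(Q)
print(np.linalg.norm(Q, 2), np.linalg.norm(Q_proj, 2))   # second value is <= 1
```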
For timely evaluation, the reported experiments were conducted on the sub-sampled CMU mocap dataset. Here, we supplement the arguments presented in the main manuscript by favorably comparing the linear dimensionality reduction based NRSfM algorithm (TNH) to other NRSfM methods on full-length CMU mocap sequences.
B1yevitBg
Sk2iistgg
ICLR.cc/2017/conference/-/paper128/official/review
{"title": "Lacking in several aspects; limited novelty", "rating": "4: Ok but not good enough - rejection", "review": "The paper proposes a nonlinear regularizer for solving ill-posed inverse problems. The latent variables (or causal factors) corresponding to the observed data are assumed to lie near a low dimensional subspace in an RKHS induced by a predetermined kernel. The proposed regularizer can be seen as an extension of the linear low-rank assumption on the latent factors. A nuclear norm penalty on the Cholesky factor of the kernel matrix is used as a relaxation for the dimensionality of the subspace. Empirical results are reported on two tasks involving linear inverse problems -- missing feature imputation, and estimating non-rigid 3D structures from a sequence of 2D orthographic projections -- and the proposed method is shown to outperform linear low-rank regularizer. \n\nThe clarity of the paper has scope for improvement (particularly, Introduction) - the back and forth b/w dimensionality reduction techniques and inverse problems is confusing at times. Clearly defining the ill-posed inverse problem first and then motivating the need for a regularizer (which brings dimensionality reduction techniques into the picture) may be a more clear flow in my opinion. \n\nThe motivation behind relaxation of rank() in Eq 1 to nuclear-norm in Eq 2 is not clear to me in this setting. The relaxation does not yield a convex problem over S,C (Eq 5) and also increases the computations (Algo 2 needs to do full SVD of K(S) every time). The authors should discuss pros/cons over the alternate approach that fixes the rank of C (which can be selected using cross-validation, in the same way as $\\tau$ is selected), leaving just the first two terms in Eq 5. For this simpler objective, an interesting question to ask would be -- are there kernel functions for which it can solved in a scalable manner? \n\nThe proposed alternating optimization approach in the current form is computationally intensive and seems hard to scale to even moderate sized data -- in every iteration one needs to compute the kernel matrix over S and perform full SVD over the kernel matrix (Algo 2). Empirical evaluations are also not extensive -- (i) the dataset used for feature imputation is old and non-standard, (ii) for structure estimation from motion on CMU dataset, the paper only compares with linear low-rank regularization, (iii) there is no comment/study on the convergence of the alternating procedure (Algo 1). \n\n\n\n\n\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Non-linear Dimensionality Regularizer for Solving Inverse Problems
["Ravi Garg", "Anders Eriksson", "Ian Reid"]
Consider an ill-posed inverse problem of estimating causal factors from observations, one of which is known to lie near some (unknown) low-dimensional, non-linear manifold expressed by a predefined Mercer-kernel. Solving this problem requires simultaneous estimation of these factors and learning the low-dimensional representation for them. In this work, we introduce a novel non-linear dimensionality regularization technique for solving such problems without pre-training. We re-formulate Kernel-PCA as an energy minimization problem in which low dimensionality constraints are introduced as regularization terms in the energy. To the best of our knowledge, ours is the first attempt to create a dimensionality regularizer in the KPCA framework. Our approach relies on robustly penalizing the rank of the recovered factors directly in the implicit feature space to create their low-dimensional approximations in closed form. Our approach performs robust KPCA in the presence of missing data and noise. We demonstrate state-of-the-art results on predicting missing entries in the standard oil flow dataset. Additionally, we evaluate our method on the challenging problem of Non-Rigid Structure from Motion and our approach delivers promising results on CMU mocap dataset despite the presence of significant occlusions and noise.
["Computer vision", "Optimization", "Structured prediction"]
https://openreview.net/forum?id=Sk2iistgg
https://openreview.net/pdf?id=Sk2iistgg
https://openreview.net/forum?id=Sk2iistgg&noteId=B1yevitBg
Under review as a conference paper at ICLR 2017NON-LINEAR DIMENSIONALITY REGULARIZER FORSOLVING INVERSE PROBLEMSRavi GargUniversity of Adelaideravi.garg@adelaide.edu.auAnders ErikssonQueensland University of Technologyanders.eriksson@qut.edu.auIan ReidUniversity of Adelaideian.reid@adelaide.edu.auABSTRACTConsider an ill-posed inverse problem of estimating causal factors from observa-tions, one of which is known to lie near some (unknown) low-dimensional, non-linear manifold expressed by a predefined Mercer-kernel. Solving this problem re-quires simultaneous estimation of these factors and learning the low-dimensionalrepresentation for them. In this work, we introduce a novel non-linear dimension-ality regularization technique for solving such problems without pre-training.We re-formulate Kernel-PCA as an energy minimization problem in which lowdimensionality constraints are introduced as regularization terms in the energy.To the best of our knowledge, ours is the first attempt to create a dimensionalityregularizer in the KPCA framework. Our approach relies on robustly penalizingthe rank of the recovered factors directly in the implicit feature space to createtheir low-dimensional approximations in closed form.Our approach performs robust KPCA in the presence of missing data and noise.We demonstrate state-of-the-art results on predicting missing entries in the stan-dard oil flow dataset. Additionally, we evaluate our method on the challengingproblem of Non-Rigid Structure from Motion and our approach delivers promis-ing results on CMU mocap dataset despite the presence of significant occlusionsand noise.1 I NTRODUCTIONDimensionality reduction techniques are widely used in data modeling, visualization and unsuper-vised learning. Principal component analysis (PCAJolliffe (2002)), Kernel PCA (KPCASch ̈olkopfet al. (1998)) and Latent Variable Models (LVMsLawrence (2005)) are some of the well knowntechniques used to create low dimensional representations of the given data while preserving itssignificant information.One key deployment of low-dimensional modeling occurs in solving ill-posed inference problems.Assuming the valid solutions to the problem lie near a low-dimensional manifold (i.e. can beparametrized with a reduced set of variables) allows for a tractable inference for otherwise under-constrained problems. After the seminal work of Cand `es & Recht (2009); Recht et al. (2010) onguaranteed rank minimization of the matrix via trace norm heuristics Fazel (2002), many ill-posedcomputer vision problems have been tackled by using the trace norm — a convex surrogate of therank function — as a regularization term in an energy minimization frameworkCand `es & Recht(2009); Zhou et al. (2014). The flexible and easy integration of low-rank priors is one of key factorsfor versatility and success of many algorithms. For example, pre-trained active appearance modelsCootes et al. (2001) or 3D morphable models Blanz & Vetter (1999) are converted to robust featuretracking Poling et al. (2014), dense registration Garg et al. (2013b) and vivid reconstructions of natu-ral videos Garg et al. (2013a) with no a priori knowledge of the scene. Various bilinear factorizationproblems like background modeling, structure from motion or photometric stereo are also addressedwith a variational formulation of the trace norm regularization Cabral et al. 
(2013).1Under review as a conference paper at ICLR 2017On the other hand, although many non-linear dimensionality reduction techniques — in particularKPCA — have been shown to outperform their linear counterparts for many data modeling tasks,they are seldom used to solve inverse problems without using a training phase. A general (discrim-inative) framework for using non-linear dimensionality reduction is: (i) learn a low-dimensionalrepresentation for the data using training examples via the kernel trick (ii) project the test exam-ples on the learned manifold and finally (iii) find a data point (pre-image) corresponding to eachprojection in the input space.This setup has two major disadvantages. Firstly, many problems of interest come with corruptedobservations — noise, missing data and outliers — which violate the low-dimensional modelingassumption.Secondly, computing the pre-image of any point in the low dimensional feature subspaceis non-trivial: the pre-image for many points in the low dimensional space might not even existbecause the non linear feature mapping function used for mapping the data from input space to thefeature space is non-surjective.Previously, extensions to KPCA like Robust KPCA (RKPCANguyen & De la Torre (2009)) andprobabilistic KPCA (PKPCASanguinetti & Lawrence (2006)) with missing data have been proposedto address the first concern, while various additional regularizers have been used to estimate thepre-image robustly Bakir et al. (2004); Mika et al. (1998); Kwok & Tsang (2004); Abrahamsen &Hansen (2009).Generative models like LVMs Lawrence (2005) are often used for inference by searching the low-dimensional latent space for a location which maximizes the likelihood of the observations. Prob-lems like segmentation, tracking and semantic 3D reconstruction Prisacariu & Reid (2011); Dameet al. (2013) greatly benefit from using LVM. However, the latent space is learned a priori with cleantraining data in all these approaches.Almost all non-linear dimensionality reduction techniques are non-trivial to generalize for solvingill-posed problems (See section 4.2) without a pre-training stage. Badly under-constrained problemsrequire the low-dimensional constraints even for finding an initial solution, eliminating applicabilityof the standard “projection + pre-image estimation” paradigm. This hinders the utility of non-linear dimensionality reduction and a suitable regularization technique to penalize the non-lineardimensionality is desirable.S1R1S2R2...Causal Factors3D shapes ( Si) and the projection matrices ( Ri) 1Wi =RiSiFigure 1: Non-linear dimensionality regularizer forNRSfM. The top part of the figure explains the ill-posedinverse problem of recovering the causal factors (1) ;projection matrices Riand 3D structures Si, from 2Dimage observations (2) Wi’s, by minimizing the imagereprojection errorf(W;R;S ) =PikWiRiSik2.Assuming that the recovered 3D structures ( Si’s) liesnear an unknown non-linear manifold (represented bythe blue curve) in the input space, we propose to regu-larize the dimensionality of this manifold (3) — span ofthe non-linearly transformed shape vectors (Si)’s —by minimizingk(S)k. 
The non-linear transformationis defined implicitly with a Mercer kernel and mapsthe non-linear manifold to a linear low rank subspace(shown in blue line) of RKHS.Sum and Substance: A closer look at mostnon-linear dimensionality reduction techniquesreveals that they rely upon a non-linear map-ping function which maps the data from in-put space to a (usually) higher dimensional fea-ture space. In this feature space the data is as-sumed to lie on a low-dimensional hyperplane— thus, linear low-rank prior is apt in the fea-ture space . Armed with this simple observa-tion, our aim is to focus on incorporating theadvances made in linear dimensionality reduc-tion techniques to their non-linear counterparts,while addressing the problems described above.Figure 1 explains this central idea and proposeddimensionality regularizer in a nutshell withNon Rigid Structure from Motion (NRSfM) asthe example application.Our Contribution: In this work we propose aunified for simultaneous robust KPCA and pre-image estimation while solving an ill-posed in-ference problem without a pre-training stage.In particular we propose a novel robust en-ergy minimization algorithm which handles theimplicitness of the feature space to directlypenalize its rank by iteratively: (i) creatingrobust low-dimensional representation for the2Under review as a conference paper at ICLR 2017data given the kernel matrix in closed form and (ii) reconstructing the noise-free version of the data(pre-image of the features space projections) using the estimated low-dimensional representationsin a unified framework.The proposed algorithm: (i) provides a novel closed form solution to robust KPCA; (ii) yields state-of-the-art results on missing data prediction for the well-known oil flow dataset; (iii) outperformsstate-of-the-art linear dimensionality (rank) regularizers to solve NRSfM; and (iv) can be triviallygeneralized to incorporate other cost functions in an energy minimization framework to solve variousill-posed inference problems.2 P ROBLEM FORMULATIONThis paper focuses on solving a generic inverse problem of recovering causal factor S=[s1; s2;sN]2XNfromNobservations W= [w1; w 2;wN]2YNsuch thatf(W;S) = 0 . Here function f(observation,variable ), is a generic loss function which aligns theobservations Wwith the variable S(possibly via other causal factors. e.g. RorZin Section 4.1and 4.2).If,f(W;S) = 0 is ill-conditioned (for example when YX ), we want to recover matrix Sunderthe assumption that the columns of it lie near a low-dimensional non-linear manifold. This can bedone by solving a constrained optimization problem of the following form:minSrank ((S))s:t: f (W;S) (1)where (S) = [(s1); (s2);; (sN)]2HNis the non-linear mapping of matrix Sfromthe input spaceXto the feature space H(also commonly referred as Reproducing Kernel HilbertSpace), via a non-linear mapping function :X!H associated with a Mercer kernel Ksuch thatK(S)i;j=(si)T(sj).In this paper we present a novel energy minimization framework to solve problems of the generalform (1).As our first contribution, we relax the problem (1) by using the trace norm of (S)— the convexsurrogate of rank function — as a penalization function. The trace norm kMk=:Pii(M)ofa matrixMis the sum of its eigenvalues i(M)and was proposed as a tight convex relaxation1oftherank (M)and is used in many vision problems as a rank regularizer Fazel (2002). 
Althoughthe rank minimization via trace norm relaxation does not lead to a convex problem in presence ofa non-linear kernel function, we show in 3.2 that it leads to a closed-form solution to denoising akernel matrix via penalizing the rank of recovered data ( S) directly in the feature space.With these changes we can rewrite (1) as:minSf(W;S) +k(S)k (2)whereis a regularization strength.2It is important to notice that although the rank of the kernel matrix K(S)is equal to the rank of(S),kK(S)kis merelyk(S)k2F. Thus, directly penalizing the sum of the singular values ofK(S)will not encourage low-rank in the feature space.3Although we have relaxed the non-convex rank function, (2) is in general difficult to minimizedue to the implicitness of the feature space. Most widely used kernel functions like RBF do nothave a explicit definition of the function . Moreover, the feature space for many kernels is high-(possibly infinite-) dimensional, leading to intractability. These issues are identified as the main1More precisely,kMkwas shown to be the tight convex envelope of rank (M)=kMks, wherekMksrepresent spectral norm of M.21=can also be viewed as Lagrange multiplier to the constraints in (1).3Although it is clear that relaxing the rank of kernel matrix to kK(S)kis suboptimal, works like Huanget al. (2012); Cabral et al. (2013) with a variational definition of nuclear norm, allude to the possibility ofkernelization. Further investigation is required to compare this counterpart to our tighter relaxation.3Under review as a conference paper at ICLR 2017barriers to robust KPCA and pre-image estimation Nguyen & De la Torre (2009). Thus, we have toreformulate (2) by applying kernel trick where the cost function (2) can be expressed in terms of thekernel function alone.The key insight here is that under the assumption that kernel matrix K(S)is positive semidefinite,we can factorize it as: K(S) =CTC. Although, this factorization is non-unique, it is trivial to showthe following:pi(K(S)) =i(C) =i((S))Thus:kCk=k(S)k8C:CTC=K(S) (3)wherei(:)is the function mapping the input matrix to its ithlargest eigenvalue.The row space of matrix Cin (3) can be seen to span the eigenvectors associated with the kernelmatrixK(S)— hence the principal components of the non-linear manifold we want to estimate.Using (3), problem (2) can finally be written as:minS;Cf(W;S) +kCks:t: K (S) =CTC (4)The above minimization can be solved with a soft relaxation of the manifold constraint by assumingthat the columns of Slie near the non-linear manifold.minS;Cf(W;S) +2kK(S)CTCk2F+kCk (5)As!1 , the optimum of (5) approaches the optimum of (4) . A local optimum of (4) can beachieved using the penalty method of Nocedal & Wright (2006) by optimizing (5) while iterativelyincreasingas explained in Section 3.Before moving on, we would like to discuss some alternative interpretations of (5) and its rela-tionship to previous work – in particular LVMs. Intuitively, we can also interpret (5) from theprobabilistic viewpoint as commonly used in latent variable model based approaches to define ker-nel function Lawrence (2005). For example a RBF kernel with additive Gaussian noise and inversewidthcan be defined as: K(S)i;j=eksisjk2+, whereN (0;). In other words, witha finite, our model allows the data points to lie near a non-linear low-rank manifold instead ofon it. 
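To make the factorization step above concrete, the following small numerical sketch (our own illustration; the RBF width, data sizes and the weights rho, tau are arbitrary assumptions, not values from the paper) checks that any factor C with C^T C = K(S) satisfies ||C||_* = sum_i sqrt(lambda_i(K(S))) = ||Phi(S)||_*, and shows how the soft-penalty objective in (5) is assembled:

import numpy as np

def rbf_kernel(S, beta):
    # K(S)_ij = exp(-beta * ||s_i - s_j||^2) for the columns s_i of S.
    sq = np.sum(S ** 2, axis=0)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * S.T @ S, 0.0)
    return np.exp(-beta * d2)

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 20))            # toy stand-in for the causal factors
K = rbf_kernel(S, beta=0.5)

# Factorize the PSD kernel matrix as K = C^T C via its eigendecomposition.
lam, U = np.linalg.eigh(K)
lam = np.clip(lam, 0.0, None)
C = np.diag(np.sqrt(lam)) @ U.T

# Identity behind Eq. (3): ||C||_* = sum_i sqrt(lambda_i(K)) = ||Phi(S)||_*.
nuc_C = np.linalg.norm(C, ord='nuc')
print(np.allclose(nuc_C, np.sqrt(lam).sum()))            # True

# Soft-penalty part of the objective in Eq. (5), with the data term f omitted.
rho, tau = 10.0, 0.1
penalty = 0.5 * rho * np.linalg.norm(K - C.T @ C, 'fro') ** 2 + tau * nuc_C

In Algorithm 1 this penalty is minimized in alternation over S and C while rho is gradually increased.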
Its worth noting here that like LVMs, our energy formulation also attempts to maximize thelikelihood of regenerating the training data W, (by choosing f(W;S)to be a simple least squarescost) while doing dimensionality reduction.Note that in closely related work Geiger et al. (2009), continuous rank penalization (with a loga-rithmic prior) has also been used for robust probabilistic non-linear dimensionality reduction andmodel selection in LVM framework. However, unlike Geiger et al. (2009); Lawrence (2005) wherethe non-linearities are modeled in latent space (of predefined dimensionality), our approach directlypenalizes the non-linear dimensionality of data in a KPCA framework and is applicable to solveinverse problems without pre-training.3 O PTIMIZATIONWe approach the optimization of (5) by solving the following two sub-problems in alternation:minSf(W;S) +2kK(S)CTCk2F (6)minCkCk+2kK(S)CTCk2F (7)Algorithm 1 outlines the approach and we give a detailed description and interpretations of bothsub-problems (7) and (6) in next two sections of the paper.3.1 P RE-IMAGE ESTIMATION TO SOLVE INVERSE PROBLEM .Subproblem (6) can be seen as a generalized pre-image estimation problem: we seek the factor si,which is the pre-image of the projection of (si)onto the principle subspace of the RKHS stored in4Under review as a conference paper at ICLR 2017Algorithm 1: Inference with Proposed Regularizer.Input : Initial estimate S0ofS.Output : Low-dimensional Sand kernel representation C.Parameters : Initial0and maximum max penalty, with scale s.-S=S0;=0;whilemaxdowhile not converged do- FixSand estimate Cvia closed-form solution of (7) using Algorithm 2;- FixCand minimize (6) to update Susing LM algorithm;-=s;CTC, which best explains the observation wi. Here (6) is generally a non-convex problem, unlessthe Mercer-kernel is linear, and must therefore be solved using non-linear optimization techniques.In this work, we use the Levenberg-Marquardt algorithm for optimizing (6).Notice that (6) only computes the pre-image for the feature space projections of the data points withwhich the non-linear manifold (matrix C) is learned. An extension to our formulation is desirableif one wants to use the learned non-linear manifold for denoising test data in a classic pre-imageestimation framework. Although a valuable direction to pursue, it is out of scope of the presentpaper.3.2 R OBUST DIMENSIONALITY REDUCTIONAlgorithm 2: Robust Dimensionality Reduction.Input : Current estimate of S.Output : Low-dimensional representation C.Parameters : Currentand regularization strength .-[UUT]= Singular Value Decomposition of K(S);//is a diagonal matrix, storing Nsingular values iofK(S).fori= 1toNdo- Find three solutions ( lr:r2f1;2;3g) of:l3li+2= 0;- setl4= 0;-lr= max(lr;0)8r2f1;2;3;4g;-r=argminrf2kil2rk2+lrg;-i=lr;-C=UT;//is diagonal matrix storing i.One can interpret sub-problem (7) as a robustform of KPCA where the kernel matrix hasbeen corrupted with Gaussian noise and wewant to generate its low-rank approximation.Although (7) is non-convex we can solve it inclosed-form via singular value decomposition.This closed-form solution is outlined in Algo-rithm 2 and is based on the following theorem:Theorem 1. WithSn3A0letA=UUTdenote its singular value decomposition. 
ThenminL2jjALTLjj2F+jjLjj (8)=nXi=12(i2i)2+i:(9)A minimizer Lof(8)is given byL= UT(10)with2Dn+,i2f2R+jpi;=2() = 0gSf0g, wherepa;bdenotes the depressed cubicpa;b(x) =x3ax+b.Dn+is the set of n-by-n diagonal matrices with non-negative entries.Theorem 1 shows that each eigenvalue of the minimizer Cof (7) can be obtained by solving adepressed cubic whose coefficients are determined by the corresponding eigenvalue of the kernelmatrix and the regularization strength . The roots of each cubic, together with zero, comprise aset of candidates for the corresponding eigenvalue of C. The best one from this set is obtained bychoosing the value which minimizes (9) (see Algorithm 2).As elaborated in Section 2, problem (7) can be seen as regularizing sum of square root ( L1=2norm)of the eigenvalues of the matrix K(S). In a closely related work Zongben et al. (2012), authorsadvocateL1=2norm as a better approximation for the cardinality of a vector then the more commonlyusedL1norm. A closed form solution for L1=2regularization similar to our work was outlined inZongben et al. (2012) and was shown to outperform the L1vector norm regularization for sparsecoding. To that end, our Theorem 1 and the proposed closed form solution (Algo 2) for (7) can5Under review as a conference paper at ICLR 2017Table 1: Performance comparison on missing data completion on Oil Flow Dataset: Row 1 shows the amountof missing data and subsequent rows show the mean and standard deviation of the error in recovered datamatrix over 50 runs on 100 samples of oil flow dataset by: (1) The mean method (also the initialization ofother methods) where the missing entries are replaced by the mean of the known values of the correspondingattributes, (2) 1-nearest neighbor method in which missing entries are filled by the values of the nearest point,(3) PPCA Tipping & Bishop (1999), (4) PKPCA of Sanguinetti & Lawrence (2006), (5)RKPCA Nguyen & Dela Torre (2009) and our method.p(del) 0.05 0.10 0.25 0.50mean 134 284 709 13971-NN 53 1459020 NAPPCA 3.7.6 92 5010 14030PKPCA 51 123 326 10020RKPCA 3.21.9 84 278 8315Ours 2.32 63 227 7011be seen as generalization of Zongben et al. (2012) to include the L1=2matrix norms for which asimplified proof is included in the Appendix A. It is important to note however, that the motivationand implication of using L1=2regularization in the context of non-linear dimensionality reductionare significantly different to that of Zongben et al. (2012) and related work Du et al. (2013); Zhaoet al. (2014) which are designed for linear modeling of the causal factors. The core insight of usingL1regularization in the feature space via the parametrization given in 3 facilitates a natural way fornon-linear modeling of causal factors with low dimensionality while solving an inverse problem bymaking feature space tractable.4 E XPERIMENTSIn this section we demonstrate the utility of the proposed algorithm. The aims of our experiments aretwofold: (i) to compare our dimensionality reduction technique favorably with KPCA and its robustvariants; and (ii) to demonstrate that the proposed non-linear dimensionality regularizer consistentlyoutperforms its linear counterpart (a.k.a. nuclear norm) in solving inverse problems.4.1 M ATRIX COMPLETIONThe nuclear norm has been introduced as a low rank prior originally for solving the matrix comple-tion problem. Thus, it is natural to evaluate its non-linear extensions on the same task. 
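Since the experiments that follow rely on this closed form, here is a minimal sketch of the per-eigenvalue update of Algorithm 2 as we read it from Theorem 1 (the function names, the root-filtering tolerance and the brute-force candidate selection are our own; this is an illustration, not the authors' code):

import numpy as np

def shrink_eigenvalue(lam, rho, tau):
    # Per-eigenvalue step of Algorithm 2: minimize (rho/2)*(lam - g**2)**2 + tau*g over g >= 0.
    # Positive stationary points are roots of the depressed cubic g^3 - lam*g + tau/(2*rho) = 0.
    roots = np.roots([1.0, 0.0, -lam, tau / (2.0 * rho)])
    cands = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0.0] + [0.0]
    obj = lambda g: 0.5 * rho * (lam - g ** 2) ** 2 + tau * g
    return min(cands, key=obj)

def robust_kpca_closed_form(K, rho, tau):
    # Closed-form minimizer of subproblem (7): C = Gamma U^T with K = U Lambda U^T, as in Eq. (10).
    lam, U = np.linalg.eigh(K)
    gamma = np.array([shrink_eigenvalue(l, rho, tau) for l in lam])
    return np.diag(gamma) @ U.T

Each call returns a shrunk eigenvalue gamma_i, and stacking them as C = Gamma U^T gives the minimizer of subproblem (7) that is used inside the alternation of Algorithm 1.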
AssumingW2Rmnto be the input matrix and Za binary matrix specifying the availability of the observa-tions inW, Algorithm 1 can be used for recovering a complete matrix Swith the following choiceoff(W;Z;S ):f(W;Z;S ) =kZ(WS)k2F (11)whererepresents Hadamard product.To demonstrate the robustness of our algorithm for matrix completion problem, we choose 100training samples from the oil flow dataset described in section 3.2 and randomly remove the elementsfrom the data with varying range of probabilities to test the performance of the proposed algorithmagainst various baselines. Following the experimental setup as specified in Sanguinetti & Lawrence(2006), we repeat the experiments with 50 different samples of Z. We report the mean and standarddeviation of the root mean square reconstruction error for our method with the choice of = 0:1,alongside five different methods in Table 1. Our method significantly improves the performance ofmissing data completion compared to other robust extensions of KPCA Tipping & Bishop (1999);Sanguinetti & Lawrence (2006); Nguyen & De la Torre (2009), for every probability of missingdata.Although we restrict our experiments to least-squares cost functions, it is vital to restate here thatour framework could trivially incorporate robust functions like the L1norm instead of the Frobeniusnorm — as a robust data term f(W;Z;S )— to generalize algorithms like Robust PCA Wright et al.(2009) to their non-linear counterparts.4.2 K ERNEL NON -RIGID STRUCTURE FROM MOTION6Under review as a conference paper at ICLR 2017Figure 2: Non-linear dimensionality regular-isation improves NRSfM performance com-pared to its linear counterpart. Figure showsthe ground truth 3D structures in red wire-frameoverlaid with the structures estimated using: (a)proposed non-linear dimensionality regularizershown in blue dots and (b) corresponding lin-ear dimensionality regularizer (TNH) shown inblack crosses, for sample frames of CMU mo-cap sequence. Red circles represent the 3D pointsfor which the projections were known whereassquares annotated missing 2D observations. Seetext and Table 2 for details.Non-rigid structure from motion under orthographyis an ill-posed problem where the goal is to esti-mate the camera locations and 3D structure of a de-formable objects from a collection of 2D imageswhich are labeled with landmark correspondencesBregler et al. (2000). Assuming si(xj)2R3tobe the 3D location of point xjon the deformableobject in the ithimage, its orthographic projectionwi(xj)2R2can be written as wi(x) =Risi(xj),whereRi2R23is a orthographic projection ma-trix Bregler et al. (2000). Notice that as the objectdeforms, even with given camera poses, reconstruct-ing the sequence by least-squares reprojection errorminimization is an ill-posed problem. In their semi-nal work, Bregler et al. (2000) proposed to solve thisproblem with an additional assumption that the re-constructed shapes lie on a low-dimensional linearsubspace and can be parameterized as linear combi-nations of a relatively low number of basis shapes.NRSfM was then cast as the low-rank factorizationproblem of estimating these basis shapes and corre-sponding coefficients.Recent work, like Dai et al. (2014); Garg et al.(2013a) have shown that the trace norm regularizercan be used as a convex envelope of the low-rankprior to robustly address ill-posed nature of the prob-lem. 
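Returning to the masked data term of Eq. (11) above, a small sketch (the matrix sizes, the deletion rate and the zero-filled initial guess are arbitrary assumptions):

import numpy as np

def completion_data_term(W, Z, S):
    # Masked least-squares data term of Eq. (11): ||Z o (W - S)||_F^2,
    # where Z is a 0/1 mask of observed entries and 'o' is the Hadamard product.
    return np.sum((Z * (W - S)) ** 2)

rng = np.random.default_rng(0)
W = rng.standard_normal((12, 100))
Z = (rng.random(W.shape) > 0.25).astype(float)   # roughly 25% of the entries removed
S = Z * W                                        # zero-filled initial guess
print(completion_data_term(W, Z, S))             # 0.0: unobserved entries do not contribute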
A good solution to NRSfM can be achieved byoptimizing:minS;RkSk+FXi=1NXj=1Zi(xj)kwi(xj)Risi(xj)k2F (12)whereSis the shape matrix whose columns are 3Ndimensional vectors storing the 3D coordinatesSi(xj)of the shapes and Zi(xj)is a binary variable indicating if projection of point xjis availablein the image i.Assuming the projection matrices to be fixed, this problem is convex and can be exactly solvedwith standard convex optimization methods. Additionally, if the 2D projections wi(xj)are noisefree, optimizing (12) with very small corresponds to selecting the the solution — out of the manysolutions — with (almost) zero projection error, which has minimum trace norm Dai et al. (2014).Thus henceforth, optimization of (12) is referred as the trace norm heuristics (TNH). We solve thisproblem with a first order primal-dual variant of the algorithm given in Garg et al. (2013a), whichcan handle missing data. The algorithm is detailed and compared favorably with the state of the artNRSfM approaches (based on linear dimensionality regularization) Appendix C.A simple kernel extension of the above optimization problem is:minS;Rk(S)k+FXi=1NXj=1Zi(xj)kwi(xj)Risi(xj)k2F| {z }f(W;Z;R;S )(13)where (S)is the non-linear mapping of Sto the feature space using an RBF kernel.With fixed projection matrices R, (13) is of the general form (2), for which the local optima can befound using Algorithm 1.7Under review as a conference paper at ICLR 2017Table 2: 3D reconstruction errors for linear and non-linear dimensionality regularization with ground truthcamera poses. Column 1 and 4 gives gives error for TNH while column (2-3) and (5-6) gives the correspondingerror for proposed method with different width of RBF kernel. Row 5 reports the mean error over 4 sequences.DatasetNo Missing Data 50% Missing DataLinear Non-Linear Linear Non-Lineardmaxdmed dmaxdmedDrink 0.0227 0.0114 0.0083 0.0313 0.0248 0.0229Pickup 0.0487 0.0312 0.0279 0.0936 0.0709 0.0658Yoga 0.0344 0.0257 0.0276 0.0828 0.0611 0.0612Stretch 0.0418 0.0286 0.0271 0.0911 0.0694 0.0705Mean 0.0369 0.0242 0.0227 0.0747 0.0565 0.05514.2.1 R ESULTS ON THE CMU DATASETWe use a sub-sampled version of CMU mocap dataset by selecting every 10thframe of the smoothlydeforming human body consisting 41 mocap points used in Dai et al. (2014).4In our experiments we use ground truth camera projection matrices to compare our algorithm againstTNH. The advantage of this setup is that with ground-truth rotation and no noise, we can avoid themodel selection (finding optimal regularization strength ) by setting it low enough. We run theTNH with= 107and use this reconstruction as initialization for Algorithm 1. For the proposedmethod, we set = 104and use following RBF kernel width selection approach:Maximum distance criterion ( dmax): we set the maximum distance in the feature space tobe3. Thus, the kernel matrix entry corresponding to the shape pairs obtained by TNHwith maximum Euclidean distance becomes e9=2.Median distance criterion ( dmed): the kernel matrix entry corresponding to the medianeuclidean distance is set to 0.5.Following the standard protocol in Dai et al. (2014); Akhter et al. 
(2009), we quantify the recon-struction results with normalized mean 3D errors e3D=1FNPiPjeij, whereeijis the euclideandistance of a reconstructed point jin frameifrom the ground truth, is the mean of standard devi-ation for 3 coordinates for the ground truth 3D structures, and F;N are number of input images andnumber of points reconstructed.Table 2 shows the results of the TNH and non-linear dimensionality regularization based methodsusing the experimental setup explained above, both without missing data and after randomly remov-ing 50% of the image measurements. Our method consistently beats the TNH baseline and improvesthe mean reconstruction error by 40% with full data and by 25% when used with 50% miss-ing data. Figure 2 shows qualitative comparison of the obtained 3D reconstruction using TNH andproposed non-lienar dimensionality regularization technique for some sample frames from varioussequences. We refer readers to Appendix B for results with simultaneous reconstruction pose opti-mization.5 C ONCLUSIONIn this paper we have introduced a novel non-linear dimensionality regularizer which can be incor-porated into an energy minimization framework, while solving an inverse problem. The proposedalgorithm for penalizing the rank of the data in the feature space has been shown to be robust to noiseand missing observations. We have picked NRSfM as an application to substantiate our argumentsand have shown that despite missing data and model noise (such as erroneous camera poses) ouralgorithm significantly outperforms state-of-the-art linear counterparts.Although our algorithm currently uses slow solvers such as the penalty method and is not directlyscalable to very large problems like dense non-rigid reconstruction, we are actively consideringalternatives to overcome these limitations. An extension to estimate pre-images with a problem-4Since our main goal is to validate the usefulness of the proposed non-linear dimensionality regularizer, weopt for a reduced size dataset for more rapid and flexible evaluation.8Under review as a conference paper at ICLR 2017specific loss function is possible, and this will be useful for online inference with pre-learned low-dimensional manifolds.Given the success of non-linear dimensionality reduction in modeling real data and overwhelminguse of the linear dimensionality regularizers in solving real world problems, we expect that pro-posed non-linear dimensionality regularizer will be applicable to a wide variety of unsupervisedinference problems: recommender systems; 3D reconstruction; denoising; shape prior based objectsegmentation; and tracking are all possible applications.REFERENCESTrine Julie Abrahamsen and Lars Kai Hansen. Input space regularization stabilizes pre-images forkernel pca de-noising. In EEE International Workshop on Machine Learning for Signal Process-ing, pp. 1–6, 2009.Ijaz Akhter, Yaser Sheikh, Sohaib Khan, and Takeo Kanade. Nonrigid structure from motion intrajectory space. In Advances in neural information processing systems , pp. 41–48, 2009.Gokhan H Bakir, Jason Weston, and Bernhard Sch ̈olkopf. Learning to find pre-images. Advances inneural information processing systems , 16(7):449–456, 2004.Christopher M Bishop and Gwilym D James. Analysis of multiphase flows using dual-energygamma densitometry and neural networks. Nuclear Instruments and Methods in Physics ResearchSection A: Accelerators, Spectrometers, Detectors and Associated Equipment , 327(2):580–593,1993.V olker Blanz and Thomas Vetter. 
A morphable model for the synthesis of 3d faces. In 26th annualconference on Computer graphics and interactive techniques , pp. 187–194, 1999.Christoph Bregler, Aaron Hertzmann, and Henning Biermann. Recovering non-rigid 3d shape fromimage streams. In IEEE Conference on Computer Vision and Pattern Recognition , pp. 690–696,2000.R. Cabral, F. De la Torre, J. P. Costeira, and A. Bernardino. Unifying nuclear norm and bilinearfactorization approaches for low-rank matrix decomposition. In International Conference onComputer Vision (ICCV) , 2013.Emmanuel J Cand `es and Benjamin Recht. Exact matrix completion via convex optimization. Foun-dations of Computational mathematics , 9(6):717–772, 2009.Antonin Chambolle and Thomas Pock. A first-order primal-dual algorithm for convex problems withapplications to imaging. Journal of Mathematical Imaging and Vision , 40(1):120–145, 2011.Timothy F Cootes, Gareth J Edwards, and Christopher J Taylor. Active appearance models. IEEETransactions on pattern analysis and machine intelligence , 23(6):681–685, 2001.Yuchao Dai, Hongdong Li, and Mingyi He. A simple prior-free method for non-rigid structure-from-motion factorization. International Journal of Computer Vision , 107(2):101–122, 2014.Amaury Dame, Victor Adrian Prisacariu, Carl Yuheng Ren, and Ian Reid. Dense reconstructionusing 3d object shape priors. In Computer Vision and Pattern Recognition , pp. 1288–1295. IEEE,2013.Rong Du, Cailian Chen, Zhiyi Zhou, and Xinping Guan. L 1/2-based iterative matrix completion fordata transmission in lossy environment. In Computer Communications Workshops (INFOCOMWKSHPS), 2013 IEEE Conference on , pp. 65–66. IEEE, 2013.Maryam Fazel. Matrix rank minimization with applications . PhD thesis, Stanford University, 2002.Ravi Garg, Anastasios Roussos, and Lourdes Agapito. Dense variational reconstruction of non-rigidsurfaces from monocular video. In Computer Vision and Pattern Recognition , pp. 1272–1279,2013a.9Under review as a conference paper at ICLR 2017Ravi Garg, Anastasios Roussos, and Lourdes Agapito. A variational approach to video registrationwith subspace constraints. International journal of computer vision , 104(3):286–314, 2013b.Andreas Geiger, Raquel Urtasun, and Trevor Darrell. Rank priors for continuous non-linear dimen-sionality reduction. In Computer Vision and Pattern Recognition , pp. 880–887. IEEE, 2009.Paulo FU Gotardo and Aleix M Martinez. Kernel non-rigid structure from motion. In IEEE Inter-national Conference on Computer Vision , pp. 802–809, 2011a.Paulo FU Gotardo and Aleix M Martinez. Non-rigid structure from motion with complementaryrank-3 spaces. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on ,pp. 3065–3072. IEEE, 2011b.Dong Huang, Ricardo Silveira Cabral, and Fernando De la Torre. Robust regression. In EuropeanConference on Computer Vision (ECCV) , 2012.Ian Jolliffe. Principal component analysis . Wiley Online Library, 2002.JT-Y Kwok and Ivor W Tsang. The pre-image problem in kernel methods. IEEE Transactions onNeural Networks, , 15(6):1517–1525, 2004.Neil D Lawrence. Probabilistic non-linear principal component analysis with gaussian process latentvariable models. The Journal of Machine Learning Research , 6:1783–1816, 2005.Sebastian Mika, Bernhard Sch ̈olkopf, Alex J Smola, Klaus-Robert M ̈uller, Matthias Scholz, andGunnar R ̈atsch. Kernel pca and de-noising in feature spaces. In NIPS , volume 4, pp. 7, 1998.Minh Hoai Nguyen and Fernando De la Torre. Robust kernel principal component analysis. 
InAdvances in Neural Information Processing Systems . 2009.Jorge Nocedal and Stephen J. Wright. Numerical optimization . Springer, New York, 2006.Bryan Poling, Gilad Lerman, and Arthur Szlam. Better feature tracking through subspace con-straints. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on , pp.3454–3461. IEEE, 2014.Victor Adrian Prisacariu and Ian Reid. Nonlinear shape manifolds as shape priors in level set seg-mentation and tracking. In Computer Vision and Pattern Recognition , pp. 2185–2192. IEEE,2011.Benjamin Recht, Maryam Fazel, and Pablo A Parrilo. Guaranteed minimum-rank solutions of linearmatrix equations via nuclear norm minimization. SIAM review , 52(3):471–501, 2010.Ralph Tyrrell Rockafellar. Conjugate duality and optimization , volume 14. SIAM, 1974.Guido Sanguinetti and Neil D Lawrence. Missing data in kernel pca. In Machine Learning: ECML2006 , pp. 751–758. Springer, 2006.Bernhard Sch ̈olkopf, Alexander Smola, and Klaus-Robert M ̈uller. Nonlinear component analysis asa kernel eigenvalue problem. Neural computation , 10(5):1299–1319, 1998.Michael E Tipping and Christopher M Bishop. Probabilistic principal component analysis. Journalof the Royal Statistical Society: Series B (Statistical Methodology) , 61(3):611–622, 1999.John Wright, Arvind Ganesh, Shankar Rao, Yigang Peng, and Yi Ma. Robust principal componentanalysis: Exact recovery of corrupted low-rank matrices via convex optimization. In Advances inNeural Information Processing Systems , pp. 2080–2088. 2009.Qian Zhao, DeYu Meng, and ZongBen Xu. Robust sparse principal component analysis. ScienceChina Information Sciences , 57(9):1–14, 2014.Xiaowei Zhou, Can Yang, Hongyu Zhao, and Weichuan Yu. Low-rank modeling and its applicationsin image analysis. ACM Computing Surveys (CSUR) , 47(2):36, 2014.Xu Zongben, Chang Xiangyu, Xu Fengmin, and Zhang Hai. L1/2 regularization: a thresholdingrepresentation theory and a fast solver. IEEE Transactions on neural networks and learningsystems , 23(7):1013–1027, 2012.10Under review as a conference paper at ICLR 2017A P ROOF OF THEOREM 3.1Proof. We will prove theorem 1 by first establishing a lower bound for (8) and subsequently showing that thislower bound is obtained at Lgiven by (10). The rotational invariance of the entering norms allows us to write(8) as:min2Dn;WTW=I2jjW2WTjj2F+jjjj: (14)Expanding (14) we obtain2min;Wtr22 trW2WT+ tr4+2nXi=1i (15)=2min;WnXi=12i+4i+2i2nXi=1nXj=1w2ij2ji (16)2minnXi=12i22ii+4i+2i(17)=2nXi=1mini02i22ii+4i+2i(18)The inequality in (17) follows directly by applying H ̈older’s inequality to (16) and using the property that thecolumn vectors wiare unitary.Next, withL= UTin (8) we have2jjALTLjj2F+jjLjj=2jj2jj2F+jjjj=nXi=12(i2i)2+i: (19)Finally, since the subproblems in (18) are separable in i, its minimizer must be KKT-points of the individualsubproblems. As the constraints are simple non-negativity constraints, these KKT points are either (positive)stationary points of the objective functions or 0. It is simple to verify that the stationary points are given by theroots of the cubic function pi;=2. Hence it follows that there exists a isuch that22i22ii+4i+2i2(i2i)2+i; (20)8i0, which completes the proof.A.1 V ALIDATING THE CLOSED FORM SOLUTIONGiven the relaxations proposed in Section 2, our assertion that the novel trace regularization basednon-linear dimensionality reduction is robust need to be substantiated. 
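Because the scalar subproblem in the proof above is the part most damaged by text extraction, its stationarity condition is restated here as we reconstruct it from the surrounding definitions (a sketch, not a verbatim quote of the paper):

\min_{\gamma \ge 0}\; \tfrac{\rho}{2}\left(\lambda_i - \gamma^{2}\right)^{2} + \tau\,\gamma ,
\qquad
\frac{d}{d\gamma}\left[ \tfrac{\rho}{2}\left(\lambda_i - \gamma^{2}\right)^{2} + \tau\,\gamma \right]
  = 2\rho\,\gamma^{3} - 2\rho\,\lambda_i\,\gamma + \tau = 0
\;\Longleftrightarrow\;
\gamma^{3} - \lambda_i\,\gamma + \frac{\tau}{2\rho} = 0 ,

so the candidate eigenvalues in Algorithm 2 are the non-negative real roots of the depressed cubic p_{\lambda_i,\,\tau/(2\rho)} together with zero, consistent with the statement of Theorem 1.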
To that end, we evaluate ourclosed-form solution of Algorithm 2 on the standard oil flow dataset introduced in Bishop & James(1993).This dataset comprises 1000 training and 1000 testing data samples, each of which is of 12 dimen-sions and categorized into one of three different classes. We add zero mean Gaussian noise withvarianceto the training data5and recover the low-dimensional manifold for this noisy trainingdataSwith KPCA and contrast this with the results from Algorithm 2. An inverse width of theGaussian kernel = 0:075is used for all the experiments on the oil flow dataset.It is important to note that in this experiment, we only estimate the principal components (and theirvariances) that explain the estimated non-linear manifold, i.e. matrix Cby Algorithm 2, withoutreconstructing the denoised version of the corrupted data samples.Both KPCA and our solution require model selection (choice of rank and respectively) whichis beyond the scope of this paper. Here we resort to evaluate the performance of both methodsunder different parameters settings. To quantify the accuracy of the recovered manifold ( C) we usefollowing criteria:5Note that our formulation assumes Gaussian noise in K(S)where as for this evaluation we add noise to Sdirectly.11Under review as a conference paper at ICLR 2017Table 3: Robust dimensionality reduction accuracy by KPCA versus our closed-form solution on the full oilflow dataset. Columns from left to right represent: (1) standard deviation of the noise in training samples (2-3)Error in the estimated low-dimensional kernel matrix by (2) KPCA and (3) our closed-form solution, (4-5)Nearest neighbor classification error of test data using (4) KPCA and (5) our closed-form solution respectively.Manifold Error Classification ErrorSTD KPCA Our CFS KPCA Our CFS.2 0.1099 0.1068 9.60% 9.60%.3 0.2298 0.2184 19.90% 15.70 %.4 0.3522 0.3339 40.10% 22.20 %0 2 4 6 810 12 14 160.10.150.20.250.30.350.4Rank of kernel matrixManifold error KPCA,σ=.2Ours,σ=.2KPCA,σ=.3Ours,σ=.3KPCA,σ=.4Ours,σ=.4Figure 3: Performance comparison between KPCA and our Robust closed-form solution with dimensionalityregularization on oil flow dataset with additive Gaussian noise of standard deviation . Plots show the normal-ized kernel matrix errors with different rank of the model. Kernel PCA results are shown in dotted line withdiamond while ours are with solid line with a star. Bar-plot show the worst and the best errors obtained by ourmethod for a single rank of recovered kernel matrix.Manifold Error : A good manifold should preserve maximum variance of the data — i.e.it should be able to generate a denoised version K(Sest) =CTCof the noisy kernelmatrixK(S). We define the manifold estimation error as kK(Sest)K(SGT)k2F, whereK(SGT)is the kernel matrix derived using noise free data. Figure 3 shows the manifoldestimation error for KPCA and our method for different rank and parameter respectively.6Classification error: The accuracy of a non-linear manifold is often also tested by the near-est neighbor classification accuracy. 
We select the estimated manifold which gives mini-mum Manifold Error for both the methods and report 1NN classification error (percentageof misclassified example) of the 1000 test points by projecting them onto estimated mani-folds.B K ERNEL NRS FMWITH CAMERA POSE ESTIMATIONExtended from section 4.2Table 4 shows the reconstruction performance on a more realistic experimental setup, with the mod-ification that the camera projection matrices are initialized with rigid factorization and were refinedwith the shapes by optimizing (2). To solve NRSfM problem with unknown projection matrices,we parameterize each Riwith quaternions and alternate between refining the 3D shapes Sand pro-jection matrices Rusing LM. The regularization strength was selected for the TNH method bygolden section search and parabolic interpolation for every test case independently. This ensures thebest possible performance for the baseline. For our proposed approach was kept to 104for allsequences for both missing data and full data NRSfM. This experimental protocol somewhat disad-vantages the non-linear method, since its performance can be further improved by a judicious choiceof the regularization strength.6Errors from non-noisy kernel matrix can be replaced by cross validating the entries of the kernel matrix formodel selection for more realistic experiment.12Under review as a conference paper at ICLR 2017Table 4: 3D reconstruction errors for linear and non-linear dimensionality regularization with noisy camerapose initialization from rigid factorization and refined in alternation with shape. The format is same as Table 2.DatasetNo Missing Data 50% Missing DataLinear Non-Linear Linear Non-Linear== 104== 104dmaxdmed dmaxdmedDrink 0.0947 0.0926 0.0906 0.0957 0.0942 0.0937Pickup 0.1282 0.1071 0.1059 0.1598 0.1354 0.1339Yoga 0.2912 0.2683 0.2639 0.2821 0.2455 0.2457Stretch 0.1094 0.1043 0.1031 0.1398 0.1459 0.1484Mean 0.1559 0.1430 0.1409 0.1694 0.1552 0.1554However our purpose is primarily to show that the non-linear method adds value even without time-consuming per-sequence tuning. To that end, note that despite large errors in the camera pose esti-mations by TNH and 50% missing measurements, the proposed method shows significant ( 10%)improvements in terms of reconstruction errors, proving our broader claims that non-linear repre-sentations are better suited for modeling real data, and that our robust dimensionality regularizer canimprove inference for ill-posed problems.As suggested by Dai et al. (2014), robust camera pose initialization is beneficial for the structure es-timation. We have used rigid factorization for initializing camera poses here but this can be triviallychanged. We hope that further improvements can be made by choosing better kernel functions, withcross validation based model selection (value of ) and with a more appropriate tuning of kernelwidth. Selecting a suitable kernel and its parameters is crucial for success of kernelized algorithms.It becomes more challenging when no training data is available. We hope to explore other kernelfunctions and parameter selection criteria in our future work.We would also like to contrast our work with Gotardo & Martinez (2011a), which is the only workwe are aware of where non-linear dimensionality reduction is attempted for NRSfM. 
While esti-mating the shapes lying on a two dimensional non-linear manifold, Gotardo & Martinez (2011a)additionally assumes smooth 3D trajectories (parametrized with a low frequency DCT basis) and apre-defined hard linear rank constraint on 3D shapes. The method relies on sparse approximation ofthe kernel matrix as a proxy for dimensionality reduction. The reported results were hard to replicateunder our experimental setup for a fair comparison due to non-smooth deformations. However, incontrast to Gotardo & Martinez (2011a), our algorithm is applicable in a more general setup, canbe modified to incorporate smoothness priors and robust data terms but more importantly, is flexibleto integrate with a wide range of energy minimization formulations leading to a larger applicabilitybeyond NRSfM.C TNH ALGORITHM FOR NRS FMIn section 4.2, we have compared the proposed non-linear dimensionality reduction prior against avariant of Garg et al. (2013a) which handles missing data by optimizing:minS;RkSk+FXi=1NXj=1Zi(xj)kwi(xj)Risi(xj)k2(21)This problem is convex in Sgiven noise free projection matrix Ri’s but non-differentiable. Tooptimize (21), we first rewrite it in its primal-dual form by dualizing the trace norm7:maxQminS;R <S;Q> +FXi=1NXj=1Zi(xj)kwi(xj)Risi(xj)k2s:t:kQks1 (22)whereQ2RXNstores the dual variables to Sandk:ksrepresent spectral norm (highest eigen-value) of a matrix.7For more details on primal dual formulation and dual norm of the trace norm see Rockafellar (1974); Rechtet al. (2010); Chambolle & Pock (2011).13Under review as a conference paper at ICLR 2017Algorithm 3: Trace norm Heuristics.Input : Initial estimates S0;R0ofSandR.Output : Low-dimensional Sand camera poses R.Parameters : Regularization strength , measurements Wand binary mask Z.-S=S0; R=R0;// set iteration count step size and duals Q-= 0;-= 1=;-Q= 0;while not converged do// projection matrix estimation- FixS;Q and refineRifor every image iwith LM;// steepest descend update for Sijfor each point xjand each frame ifori= 1toFdoforj= 1toNdo-S+1ij=I22+(ZijRTiRi)1(SijQij+RTi(Zijwij));// accelerated steepest ascend update for Q-Q=Q+(2Sn+1Sn);-UDVT= singular value decomposition of Q;-D= min(D;1);-Q+1=UDVT;// Go to next iteration-=+ 1Table 5: 3D reconstruction errors for different NRSfM approaches and our TNH Algorithm given ground truthcamera projection matrices. Results for all the methods (except TNH) are taken from Dai et al. (2014).Dataset PTAAkhter et al. (2009) CSF2Gotardo & Martinez (2011b) BMMDai et al. (2014) TNHDrink 0.0229 0.0215 0.0238 0.0237Pick-up 0.0992 0.0814 0.0497 0.0482Yoga 0.0580 0.0371 0.0334 0.0333Stretch 0.0822 0.0442 0.0456 0.0431We choose quaternions to perametrize the 23camera matrices Rito satisfy orthonormality con-straints as done in Garg et al. (2013a) and optimize the saddle point problem (22) using alternation.In particular, for a single iteration: (i) we optimize the camera poses Ri’s using LM, (ii) take asteepest descend step for updating Sand (ii) a steepest ascend step for updating Qwhich is fol-lowed by projecting its spectral norm to unit ball. Given ground truth camera matrices ( withoutstep (i)), alternation (ii-iii) can be shown to reach global minima of (22). Algorithm 3 outlines TNHalgorithm.As the main manuscript uses NRSfM only as a practical application of our non-linear dimension-ality reduction prior, we have restricted our NRSfM experiments to only compare the proposedmethod against its linear counterpart. 
For timely evaluation, the reported experiments were conducted on the sub-sampled CMU mocap dataset. Here, we supplement the arguments presented in the main manuscript by favorably comparing the linear dimensionality reduction based NRSfM algorithm (TNH) to other NRSfM methods on full-length CMU mocap sequences.
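As a small illustration of the dual step in Algorithm 3 above, the spectral-norm projection can be sketched as follows (the function name is ours, and this is a reading of the pseudo-code rather than the authors' implementation):

import numpy as np

def project_spectral_unit_ball(Q):
    # Dual update of Algorithm 3: project Q onto {Q : ||Q||_s <= 1}
    # by clamping its singular values at 1.
    U, d, Vt = np.linalg.svd(Q, full_matrices=False)
    return U @ np.diag(np.minimum(d, 1.0)) @ Vt

In the alternation, this projection follows the steepest-ascent update of Q, while the shapes S and the quaternion-parameterized camera poses are refined in the primal steps.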
rkpg7VPNl
rJ0JwFcex
ICLR.cc/2017/conference/-/paper498/official/review
{"title": "Strong ideas for an important problem", "rating": "7: Good paper, accept", "review": "This paper sets out to tackle the program synthesis problem: given a set of input/output pairs discover the program that generated them. The authors propose a bipartite model, with one component that is a generative model of tree-structured programs and the other component an input/output pair encoder for conditioning. They consider applying many variants of this basic model to a FlashFill DSL. The experiments explore a practical dataset and achieve fine numbers. The range of models considered, carefulness of the exposition, and basic experimental setup make this a valuable paper for an important area of research. I have a few questions, which I think would strengthen the paper, but I think it's worth accepting as is.\n\nQuestions/Comments:\n\n- The dataset is a good choice, because it is simple and easy to understand. What is the effect of the \"rule based strategy\" for computing well formed input strings?\n\n- Clarify what \"backtracking search\" is? I assume it is the same as trying to generate the latent function? \n\n- In general, describing the accuracy as you increase the sample size could be summarized simply by reporting the log-probability of the latent function. Perhaps it's worth reporting that? Not sure if I missed something.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Neuro-Symbolic Program Synthesis
["Emilio Parisotto", "Abdel-rahman Mohamed", "Rishabh Singh", "Lihong Li", "Dengyong Zhou", "Pushmeet Kohli"]
Recent years have seen the proposal of a number of neural architectures for the problem of Program Induction. Given a set of input-output examples, these architectures are able to learn mappings that generalize to new test inputs. While achieving impressive results, these approaches have a number of important limitations: (a) they are computationally expensive and hard to train, (b) a model has to be trained for each task (program) separately, and (c) it is hard to interpret or verify the correctness of the learnt mapping (as it is defined by a neural network). In this paper, we propose a novel technique, Neuro-Symbolic Program Synthesis, to overcome the above-mentioned problems. Once trained, our approach can automatically construct computer programs in a domain-specific language that are consistent with a set of input-output examples provided at test time. Our method is based on two novel neural modules. The first module, called the cross correlation I/O network, given a set of input-output examples, produces a continuous representation of the set of I/O examples. The second module, the Recursive-Reverse-Recursive Neural Network (R3NN), given the continuous representation of the examples, synthesizes a program by incrementally expanding partial programs. We demonstrate the effectiveness of our approach by applying it to the rich and complex domain of regular expression based string transformations. Experiments show that the R3NN model is not only able to construct programs from new input-output examples, but it is also able to construct new programs for tasks that it had never observed before during training.
["Deep learning", "Structured prediction"]
https://openreview.net/forum?id=rJ0JwFcex
https://openreview.net/pdf?id=rJ0JwFcex
https://openreview.net/forum?id=rJ0JwFcex&noteId=rkpg7VPNl
Published as a conference paper at ICLR 2017NEURO -SYMBOLIC PROGRAM SYNTHESISEmilio Parisotto1;2, Abdel-rahman Mohamed1, Rishabh Singh1,Lihong Li1, Dengyong Zhou1, Pushmeet Kohli11Microsoft Research, USA2Carnegie Mellon University, USAeparisot@andrew.cmu.edu , fasamir,risin,lihongli,denzho,pkohli g@microsoft.comABSTRACTRecent years have seen the proposal of a number of neural architectures for theproblem of Program Induction. Given a set of input-output examples, these ar-chitectures are able to learn mappings that generalize to new test inputs. Whileachieving impressive results, these approaches have a number of important limi-tations: (a) they are computationally expensive and hard to train, (b) a model hasto be trained for each task (program) separately, and (c) it is hard to interpret orverify the correctness of the learnt mapping (as it is defined by a neural network).In this paper, we propose a novel technique, Neuro-Symbolic Program Synthesis ,to overcome the above-mentioned problems. Once trained, our approach can au-tomatically construct computer programs in a domain-specific language that areconsistent with a set of input-output examples provided at test time. Our methodis based on two novel neural modules. The first module, called the cross corre-lation I/O network, given a set of input-output examples, produces a continuousrepresentation of the set of I/O examples. The second module, the Recursive-Reverse-Recursive Neural Network (R3NN), given the continuous representationof the examples, synthesizes a program by incrementally expanding partial pro-grams. We demonstrate the effectiveness of our approach by applying it to therich and complex domain of regular expression based string transformations. Ex-periments show that the R3NN model is not only able to construct programs fromnew input-output examples, but it is also able to construct new programs for tasksthat it had never observed before during training.1 I NTRODUCTIONThe act of programming, i.e., developing a procedure to accomplish a task, is a remarkable demon-stration of the reasoning abilities of the human mind. Expectedly, Program Induction is consideredas one of the fundamental problems in Machine Learning and Artificial Intelligence. Recent progresson deep learning has led to the proposal of a number of promising neural architectures for this prob-lem. Many of these models are inspired from computation modules (CPU, RAM, GPU) (Graveset al., 2014; Kurach et al., 2015; Reed & de Freitas, 2015; Neelakantan et al., 2015) or commondata structures used in many algorithms (stack) (Joulin & Mikolov, 2015). A common thread in thisline of work is to specify the atomic operations of the network in some differentiable form, allowingefficient end-to-end training of a neural controller, or to use reinforcement learning to make hardchoices about which operation to perform. While these results are impressive, these approacheshave a number of important limitations: (a) they are computationally expensive and hard to train, (b)a model has to be trained for each task (program) separately, and (c) it is hard to interpret or verifythe correctness of the learnt mapping (as it is defined by a neural network). 
While some recentlyproposed methods (Kurach et al., 2015; Gaunt et al., 2016; Riedel et al., 2016; Bunel et al., 2016)do learn interpretable programs, they still need to learn a separate neural network model for eachindividual task.Motivated by the need for model interpretability and scalability to multiple tasks, we address theproblem of Program Synthesis . Program Synthesis, the problem of automatically constructing pro-grams that are consistent with a given specification, has long been a subject of research in ComputerScience (Biermann, 1978; Summers, 1977). This interest has been reinvigorated in recent years on1Published as a conference paper at ICLR 2017the back of the development of methods for learning programs in various domains, ranging fromlow-level bit manipulation code (Solar-Lezama et al., 2005) to data structure manipulations (Singh& Solar-Lezama, 2011) and regular expression based string transformations (Gulwani, 2011).Most of the recently proposed methods for program synthesis operate by searching the space ofprograms in a Domain-Specific Language (DSL) instead of arbitrary Turing-complete languages.This hypothesis space of possible programs is huge (potentially infinite) and searching over it is achallenging problem. Several search techniques including enumerative (Udupa et al., 2013), stochas-tic (Schkufza et al., 2013), constraint-based (Solar-Lezama, 2008), and version-space algebra basedalgorithms (Gulwani et al., 2012) have been developed to search over the space of programs in theDSL, which support different kinds of specifications (examples, partial programs, natural languageetc.) and domains. These techniques not only require significant engineering and research effort todevelop carefully-designed heuristics for efficient search, but also have limited applicability and canonly synthesize programs of limited sizes and types.In this paper, we present a novel technique called Neuro-Symbolic Program Synthesis (NSPS) thatlearns to generate a program incrementally without the need for an explicit search. Once trained,NSPS can automatically construct computer programs that are consistent with any set of input-outputexamples provided at test time. Our method is based on two novel module neural architectures . Thefirst module, called the cross correlation I/O network, produces a continuous representation of anygiven set of input-output examples. The second module, the Recursive-Reverse-Recursive NeuralNetwork (R3NN), given the continuous representation of the input-output examples, synthesizes aprogram by incrementally expanding partial programs. R3NN employs a tree-based neural archi-tecture that sequentially constructs a parse tree by selecting which non-terminal symbol to expandusing rules from a context-free grammar ( i.e., the DSL).We demonstrate the efficacy of our method by applying it to the rich and complex domain of regular-expression-based syntactic string transformations, using a DSL based on the one used by Flash-Fill (Gulwani, 2011; Gulwani et al., 2012), a Programming-By-Example (PBE) system in MicrosoftExcel 2013. Given a few input-output examples of strings, the task is to synthesize a program builton regular expressions to perform the desired string transformation. 
An example task that can beexpressed in this DSL is shown in Figure 1, which also shows the DSL.Our evaluation shows that NSPS is not only able to construct programs for known tasks from newinput-output examples, but it is also able to construct completely new programs that it had not ob-served during training. Specifically, the proposed system is able to synthesize string transformationprograms for 63% of tasks that it had not observed at training time, and for 94% of tasks when100 program samples are taken from the model. Moreover, our system is able to learn 38% of 238real-world FlashFill benchmarks.To summarize, the key contributions of our work are:A novel Neuro-Symbolic program synthesis technique to encode neural search over thespace of programs defined using a Domain-Specific Language (DSL).The R3NN model that encodes and expands partial programs in the DSL, where each nodehas a global representation of the program tree.A novel cross-correlation based neural architecture for learning continuous representationof sets of input-output examples.Evaluation of the NSPS approach on the complex domain of regular expression based stringtransformations.2 P ROBLEM DEFINITIONIn this section, we formally define the DSL-based program synthesis problem that we consider in thispaper. Given a DSL L, we want to automatically construct a synthesis algorithm Asuch that givena set of input-output example, f(i1;o1);;(in;on)g,Areturns a program P2Lthat conformsto the input-output examples, i.e.,8j: 1jnP(ij) =oj: (1)2Published as a conference paper at ICLR 2017Inputv Output1William Henry Charles Charles, W.2 Michael Johnson Johnson, M.3 Barack Rogers Rogers, B.4 Martha D. Saunders Saunders, M.5 Peter T Gates Gates, P.Stringe:= Concat( f1;;fn)Substringf:= ConstStr( s)jSubStr(v;pl;pr)Positionp:= (r;k;Dir)jConstPos(k)Direction Dir := StartjEndRegexr:=sjT1jTn(a) (b)Figure 1: An example FlashFill task for transforming names to lastname with initials of first name,and (b) The DSL for regular expression based string transformations.The syntax and semantics of the DSL for string transformations is shown in Figure 1(b) and Figure 8respectively. The DSL corresponds to a large subset of FlashFill DSL (except conditionals), andallows for a richer class of substring operations than FlashFill. A DSL program takes as input astringvand returns an output string o. The top-level string expression eis a concatenation of afinite list of substring expressions f1;;fn. A substring expression fcan either be a constantstringsor a substring expression, which is defined using two position logics pl(left) andpr(right).A position logic corresponds to a symbolic expression that evaluates to an index in the string. Aposition logic pcan either be a constant position kor a token match expression (r;k;Dir), whichdenotes the Start orEnd of thekthmatch of token rin input string v. A regex token can either be aconstant string sor one of 8 regular expression tokens: p(ProperCase), C(CAPS),l(lowercase), d(Digits),(Alphabets), n(Alphanumeric),^(StartOfString), and $ (EndOfString). The semanticsof the DSL programs is described in the appendix.A DSL program for the name transformation task shown in Figure 1(a) that is con-sistent with the examples is: Concat (f1;ConstStr(\, ") ;f2;ConstStr(\.") ), wheref1SubStr(v;(\ ";1;End);ConstPos(1))andf2SubStr(v;ConstPos(0) ;ConstPos(1)) . 
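As a rough illustration of what this program computes, here is a hand-written Python rendering of this one program under the substring semantics sketched above (it hard-codes the example rather than implementing the full DSL, and the helper name is our own):

def example_program(v: str) -> str:
    # f1: substring from the end of the last whitespace to the end of the string.
    f1 = v[v.rfind(" ") + 1:]
    # f2: first character of the input string.
    f2 = v[0]
    # Concat(f1, ", ", f2, ".")
    return f1 + ", " + f2 + "."

assert example_program("William Henry Charles") == "Charles, W."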
Theprogram concatenates the following 4 strings: i) substring between the end of last whitespace andend of string, ii) constant string “, ”, iii) first character of input string, and iv) constant string “.”.3 O VERVIEW OF OUR APPROACHWe now present an overview of our approach. Given a DSL L, we learn a generative model ofprograms in the DSL Lthat is conditioned on input-output examples to efficiently search for con-sistent programs. The workflow of our system is shown in Figure 2, which is trained end-to-endusing a large training set of programs in the DSL together with their corresponding input-outputexamples. To generate a large training set, we uniformly sample programs from the DSL and thenuse a rule-based strategy to compute well-formed input strings. Given a program P (sampled fromthe DSL), the rule-based strategy generates input strings for the program P ensuring that the pre-conditions of P are met (i.e. P doesn’t throw an exception on the input strings). It collects thepre-conditions of all Substring expressions present in the sampled program P and then generatesinputs conforming to them. For example, let’s assume the sampled program is SubStr (v,(CAPS , 2,Start ), (“ ”, 3, Start )), which extracts the substring between the start of 2ndcapital letter and startof3rdwhitespace. The rule-based strategy would ensure that all the generated input strings consistof at least 2 capital letters and 3 whitespaces in addition to other randomly generated characters.The corresponding output strings are obtained by running the programs on the input strings.A DSL can be considered as a context-free grammar with a start symbol Sand a set of non-terminalswith corresponding expansion rules. The (partial) grammar derivations or trees correspond to (par-tial) programs. A na ̈ıve way to perform a search over the programs in a DSL is to start from the startsymbolSand then randomly choose non-terminals to expand with randomly chosen expansion rulesuntil reaching a derivation with only terminals. We, instead, learn a generative model over partialderivations in the DSL that assigns probabilities to different non-terminals in a partial derivation andcorresponding expansions to guide the search for complete derivations.3Published as a conference paper at ICLR 2017R3NNDSLR3NNI/O EncoderR3NN...DSLDSLProgram SamplerDSLInput Gen Rulesi1–o1i2–o2...ik–ok{p1i1–o1i2–o2...ik–ok{pji1–o1i2–o2...ik–ok{pn...pj,0pj,1pj,2pj...R3NNDSLR3NNI/O EncoderR3NN...DSLDSLLearnt programi1–o1i2–o2...ik–ok(a) Training Phase (b) Test PhaseFigure 2: An overview of the training and test workflow of our synthesis appraoch.Our generative model uses a Recursive-Reverse-Recursive Neural Network (R3NN) to encode par-tial trees (derivations) in L, where each node in the partial tree encodes global information aboutevery other node in the tree. The model assigns a vector representation for every symbol and everyexpansion rule in the grammar. Given a partial tree, the model first assigns a vector representationto each leaf node, and then performs a recursive pass going up in the tree to assign a global treerepresentation to the root. It then performs a reverse-recursive pass starting from the root to assigna global tree representation to each node in the tree.The generative process is conditioned on a set of input-output examples to learn a program that isconsistent with this set of examples. 
We experiment with multiple input-output encoders includingan LSTM encoder that concatenates the hidden vectors of two deep bidirectional LSTM networksfor input and output strings in the examples, and a Cross Correlation encoder that computes the crosscorrelation between the LSTM tensor representations of input and output strings in the examples.This vector is then used as an additional input in the R3NN model to condition the generative model.4 T REE-STRUCTURED GENERATION MODELWe define a program t-steps into construction as a partial program tree (PPT) (see Figure 3 for avisual depiction). A PPT has two types of nodes: leaf (symbol) nodes and inner non-leaf (rule)nodes. A leaf node represents a symbol, whether non-terminal or terminal. An inner non-leaf noderepresents a particular production rule of the DSL, where the number of children of the non-leafnode is equivalent to the arity of the RHS of the rule it represents. A PPT is called a program tree(PT) whenever all the leaves of the tree are terminal symbols. Such a tree represents a completedprogram under the DSL and can be executed. We define an expansion as the valid application ofa specific production rule (e !e op2 e) to a specific non-terminal leaf node within a PPT (leafwith symbol e). We refer to the specific production rule that an expansion is derived from as theexpansion type. It can be seen that if there exist two leaf nodes ( l1andl2) with the same symbolthen for every expansion specific to l1there exists an expansion specific to l2with the same type.4.1 R ECURSIVE -REVERSE -RECURSIVE NEURAL NETWORKIn order to define a generation model over PPTs, we need an efficient way of assigning probabilitiesto every valid expansion in the current PPT. A valid expansion has two components: first the pro-duction rule used, and second the position of the expanded leaf node relative to every other node inthe tree. To account for the first component, a separate distributed representation for each produc-tion rule is maintained. The second component is handled using an architecture where the forwardpropagation resembles belief propagation on trees, allowing a notion of global tree state at everynode within the tree. A given expansion probability is then calculated as being proportional to theinner product between the production rule representation and the global-tree representation of theleaf-level non-terminal node. We now describe the design of this architecture in more detail.The R3NN has the following parameters for the grammar described by a DSL (see Figure 3):1. For every symbol s2S, anMdimensional representation (s)2RM.2. For every production rule r2R, anMdimensional representation !(r)2RM.4Published as a conference paper at ICLR 2017(a) Recursive pass (b) Reverse-Recursive passFigure 3: (a) The initial recursive pass of the R3NN. (b) The reverse-recursive pass of the R3NNwhere the input is the output of the previous recursive pass.3. For every production rule r2R, a deep neural network frwhich takes as input a vectorx2RQM, withQbeing the number of symbols on the RHS of the production rule r,and outputs a vector y2RM. Therefore, the production-rule network frtakes as input aconcatenation of the distributed representations of each of its RHS symbols and producesa distributed representation for the LHS symbol.4. For every production rule r2R, an additional deep neural network grwhich takes asinput a vector x02RMand outputs a vector y02RQM. 
We can think of g_r as a reverse production-rule network that takes as input a vector representation of the LHS and produces a concatenation of the distributed representations of each of the rule's RHS symbols.

Let E be the set of all valid expansions in a PPT T, let L be the current leaf nodes of T and N be the current non-leaf (rule) nodes of T. Let S(l) be the symbol of leaf l ∈ L and R(n) represent the production rule of non-leaf node n ∈ N.

4.1.1 GLOBAL TREE INFORMATION AT THE LEAVES

To compute the probability distribution over the set E, the R3NN first computes a distributed representation for each leaf node that contains global tree information. To accomplish this, for every leaf node l ∈ L in the tree we retrieve its distributed representation φ(S(l)). We now do a standard recursive bottom-to-top, RHS → LHS pass on the network, by going up the tree and applying f_{R(n)} for every non-leaf node n ∈ N on its RHS node representations (see Figure 3(a)). These networks f_{R(n)} produce a node representation which is input into the parent's rule network, and so on until we reach the root node.

Once at the root node, we effectively have a fixed-dimensionality global tree representation φ(root) for the start symbol. The problem is that this representation has lost any notion of tree position. To solve this problem, the R3NN now does what is effectively a reverse-recursive pass which starts at the root node with φ(root) as input and moves towards the leaf nodes (see Figure 3(b)).

More concretely, we start with the root node representation φ(root) and use that as input into the rule network g_{R(root)}, where R(root) is the production rule that is applied to the start symbol in T. This produces a representation φ′(c) for each RHS node c of R(root). If c is a non-leaf node, we iteratively apply this procedure to c, i.e., process φ′(c) using g_{R(c)} to get representations φ′(cc) for every RHS node cc of R(c), etc. If c is a leaf node, we now have a leaf representation φ′(c) which has an information path to φ(root) and thus to every other leaf node in the tree. Once the reverse-recursive process is complete, we have a distributed representation φ′(l) for every leaf node l which contains global tree information. While φ(l1) and φ(l2) could be equal for leaf nodes which have the same symbol type, φ′(l1) and φ′(l2) will not be equal even if they have the same symbol type, because they are at different positions in the tree.

4.1.2 EXPANSION PROBABILITIES

Given the global leaf representations φ′(l), we can now straightforwardly acquire scores for each e ∈ E. For expansion e, let e.r be the expansion type (the production rule r ∈ R that e applies) and let e.l be the leaf node l that e.r is applied to. The score of an expansion is calculated using z_e = φ′(e.l) · ω(e.r). The probability of expansion e is simply the exponentiated score, normalized over all scores: π(e) = e^{z_e} / Σ_{e′∈E} e^{z_{e′}}.

An additional improvement that was found to help was to add a bidirectional LSTM (BLSTM) to process the global leaf representations right before calculating the scores. To do this, we first order the global leaf representations sequentially from left-most leaf node to right-most leaf node. We then treat each leaf node as a time step for a BLSTM to process. This provides a sort of skip connection between leaf nodes, which potentially reduces the path length that information needs to travel between leaf nodes in the tree.
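As an aside, the basic scoring step above (before the BLSTM refinement) can be sketched in a few lines; the shapes, names, and use of NumPy below are our own assumptions rather than the paper's Torch implementation.

import numpy as np

def expansion_distribution(leaf_reprs, rule_reprs, valid_expansions):
    """
    leaf_reprs:       dict leaf_id -> phi'(l), the global leaf vector, shape (M,)
    rule_reprs:       dict rule_id -> omega(r), the production-rule vector, shape (M,)
    valid_expansions: list of (leaf_id, rule_id) pairs, i.e. the set E
    Returns pi(e) for every valid expansion: softmax of z_e = phi'(e.l) . omega(e.r).
    """
    scores = np.array([leaf_reprs[l] @ rule_reprs[r] for l, r in valid_expansions])
    scores -= scores.max()                # numerical stability before exponentiation
    probs = np.exp(scores)
    return probs / probs.sum()

# Tiny example with M = 4 and three (hypothetical) valid expansions.
M = 4
rng = np.random.default_rng(0)
leaves = {"l1": rng.normal(size=M), "l2": rng.normal(size=M)}
rules = {"e->Concat(f,f)": rng.normal(size=M), "f->ConstStr(s)": rng.normal(size=M)}
E = [("l1", "e->Concat(f,f)"), ("l2", "f->ConstStr(s)"), ("l2", "e->Concat(f,f)")]
print(expansion_distribution(leaves, rules, E))   # three probabilities summing to 1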
The BLSTM hidden states are then used in the score calculationrather than the leaves themselves.The R3NN can be seen as an extension and combination of several previous tree-based models,which were mainly developed in the context of natural language processing (Le & Zuidema, 2014;Paulus et al., 2014; Irsoy & Cardie, 2013).5 C ONDITIONING WITH INPUT /OUTPUT EXAMPLESNow that we have defined a generation process over tree-structured programs, we need a way ofconditioning this generation process on a set of input/output examples. The set of input/outputexamples provide a nearly complete specification for the desired output program, and so a goodencoding of the examples is crucial to the success of our program generator. For the most part, thisexample encoding needs to be domain-specific, since different DSLs have different inputs (somemay operate over integers, some over strings, etc.). Therefore, in our case, we use an encodingadapted to the input-output strings that our DSL operates over. We also investigate different ways ofconditioning program search on the learnt example input-output encodings.5.1 E NCODING INPUT /OUTPUT EXAMPLESThere are two types of information that string manipulation programs need to extract from input-output examples: 1) constant strings, such as “ @domain.com ” or “ .”, which appear in all outputexamples; 2) substring indices in input where the index might be further defined by a regular expres-sion. These indices determine which parts of the input are also present in the output. To simplify theDSL, we assume that there is a fixed finite universe of possible constant strings that could appear inprograms. Therefore we focus on extracting the second type of information, the substring indices.In earlier hand-engineered systems such as FlashFill, this information was extracted from the input-output strings by running the Longest Common Substring algorithm, a dynamic programming algo-rithm that efficiently finds matching substrings in string pairs. To extract substrings, FlashFill runsLCS on every input-output string pair in the I/O set to get a set of substring candidates. It then takesthe entire set of substring candidates and simply tries every possible regex and constant index thatcan be used at substring boundaries, exhaustively searching for the one which is the most “general”,where generality is specified by hand-engineered heuristics.In contrast to these previous methods, instead of hand-designing a complicated algorithm to extractregex-based substrings, we develop neural network based architectures that are capable of learning toextract and produce continuous representations of the likely regular expressions given I/O examples.5.1.1 B ASELINE LSTM ENCODEROur first I/O encoding network involves running two separate deep bidirectional LSTM networks forprocessing the input and the output string in each example pair. For each pair, it then concatenatesthe topmost hidden representation at every time step to produce a 4HT-dimensional feature vectorper I/O pair, where Tis the maximum string length for any input or output string, and His thetopmost LSTM hidden dimension.6Published as a conference paper at ICLR 2017We then concatenate the encoding vectors across all I/O pairs to get a vector representation of the en-tire I/O set. 
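A minimal PyTorch-style sketch of this baseline encoder follows; the hyperparameters, the character-to-id mapping, and the fixed padding length T are our own assumptions.

import torch
import torch.nn as nn

class BaselineIOEncoder(nn.Module):
    """Two deep bidirectional LSTMs, one over the input string and one over the output
    string; the top-layer states at every time step are concatenated into a single
    4*H*T feature vector per example pair, then concatenated across all pairs."""
    def __init__(self, vocab_size=128, emb=32, hidden=64, layers=2, max_len=20):
        super().__init__()
        self.max_len = max_len
        self.embed = nn.Embedding(vocab_size, emb)
        self.in_lstm = nn.LSTM(emb, hidden, layers, bidirectional=True, batch_first=True)
        self.out_lstm = nn.LSTM(emb, hidden, layers, bidirectional=True, batch_first=True)

    def encode_string(self, s, lstm):
        ids = torch.tensor([[ord(c) % 128 for c in s.ljust(self.max_len)[:self.max_len]]])
        top, _ = lstm(self.embed(ids))      # (1, T, 2*hidden): top layer, both directions
        return top.flatten(1)               # (1, 2*hidden*T)

    def forward(self, io_pairs):
        feats = [torch.cat([self.encode_string(i, self.in_lstm),
                            self.encode_string(o, self.out_lstm)], dim=1)
                 for i, o in io_pairs]      # each pair gives a (1, 4*hidden*T) vector
        return torch.cat(feats, dim=1)      # concatenation across all I/O pairs

enc = BaselineIOEncoder()
vec = enc([("William Henry Charles", "Charles, W."), ("Barack Rogers", "Rogers, B.")])
print(vec.shape)   # torch.Size([1, 10240]) = 2 pairs x 4 * 64 * 20

With hidden size H = 64 and padding length T = 20, each pair contributes the 4HT-dimensional feature described above.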
This encoding is conceptually straightforward and has very little prior knowledge aboutwhat operations are being performed over the strings, i.e., substring, constant, etc., which mightmake it difficult to discover substring indices, especially the ones based on regular expressions.5.1.2 C ROSS CORRELATION ENCODERTo help the model discover input substrings that are copied to the output, we designed an novel I/Oexample encoder to compute the cross correlation between each input and output example repre-sentation. We used the two output tensors of the LSTM encoder (discussed above) as inputs to thisencoder. For each example pair, we first slide the output feature block over the input feature blockand compute the dot product between the respective position representation. Then, we sum over alloverlapping time steps. Features of all pairs are then concatenated to form a 2(T1)-dimensionalvector encoding for all example pairs. There are 2(T1)possible alignments in total betweeninput and output feature blocks. An illustration of the cross-correlation encoder is shown in Figure 9.We also designed the following variants of this encoder.Diffused Cross Correlation Encoder: This encoder is identical to the Cross Correlation encoderexcept that instead of summing over overlapping time steps after the element-wise dot product, wesimply concatenate the vectors corresponding to all time steps, resulting in a final representation thatcontains 2(T1)Tfeatures for each example pair.LSTM-Sum Cross Correlation Encoder: In this variant of the Cross Correlation encoder, insteadof doing an element-wise dot product, we run a bidirectional LSTM over the concatenated featureblocks of each alignment. We represent each alignment by the LSTM hidden representation of thefinal time step leading to a total of 2H2(T1)features for each example pair.Augmented Diffused Cross Correlation Encoder: For this encoder, the output of each characterposition of the Diffused Cross Correlation encoder is combined with the character embedding at thisposition, then a basic LSTM encoder is run over the combined features to extract a 4H-dimensionalvector for both the input and the output streams. The LSTM encoder output is then concatenatedwith the output of the Diffused Cross Correlation encoder forming a (4H+T(T1))-dimensionalfeature vector for each example pair.5.2 C ONDITIONING PROGRAM SEARCH ON EXAMPLE ENCODINGSOnce the I/O example encodings have been computed, we can use them to perform conditionalgeneration of the program tree using the R3NN model. There are a number of ways in which thePPT generation model can be conditioned using the I/O example encodings depending on where theI/O example information is inserted in the R3NN model. We investigated three locations to injectexample encodings:1) Pre-conditioning: where example encodings are concatenated to the encoding of each tree leaf,and then passed to a conditioning network before the bottom-up recursive pass over the programtree. The conditioning network can be either a multi-layer feedforward network, or a bidirectionalLSTM network running over tree leaves. 
Running an LSTM over tree leaves allows the model tolearn more about the relative position of each leaf node in the tree.2) Post-conditioning: After the reverse-recursive pass, example encodings are concatenated to theupdated representation of each tree leaf and then fed to a conditioning network before computingthe expansion scores.3) Root-conditioning: After the recursive pass over the tree, the root encoding is concatenated tothe example encodings and passed to a conditioning network. The updated root representation isthen used to drive the reverse-recursive pass.Empirically, pre-conditioning worked better than either root- or post- conditioning. In addition,conditioning at all 3 places simultaneously did not cause a significant improvement over justpre-conditioning. Therefore, for the experimental section, we report models which only use pre-conditioning.7Published as a conference paper at ICLR 20176 E XPERIMENTSIn order to evaluate and compare variants of the previously described models, we generate a datasetrandomly from the DSL. To do so, we first enumerate all possible programs under the DSL up toa specific number of instructions, which are then partitioned into training, validation and test sets.In order to have a tractable number of programs, we limited the maximum number of instructionsfor programs to be 13. Length 13 programs are important for this specific DSL because all largerprograms can be written as compositions of sub-programs of length at most 13. The semantics oflength 13 programs therefore constitute the “atoms” of this particular DSL.In testing our model, there are two different categories of generalization. The first is input/outputgeneralization, where we are given a new set of input/output examples as well as a program with aspecific tree that we have seen during training. This represents the model’s capacity to be appliedon new data. The second category is program generalization, where we are given both a previouslyunseen program tree in addition to unseen input/output examples. Therefore the model needs tohave a sufficient enough understanding of the semantics of the DSL that it can construct novelcombinations of operations. For all reported results, training sets correspond to the first type ofgeneralization since we have seen the program tree but not the input/output pairs. Test sets representthe second type of generalization, as they are trees which have not been seen before on input/outputpairs that have also not been seen before.In this section, we compare several different variants of our model. We first evaluate the effect ofeach of the previously described input/output encoders. We then evaluate the R3NN model against asimple recurrent model called io2seq, which is basically an LSTM that takes as input the input/outputconditioning vector and outputs a sequence of DSL symbols that represents a linearized programtree. Finally, we report the results of the best model on the length 13 training and testing sets, aswell as on a set of 238 benchmark functions.6.1 S ETUP AND HYPERPARAMETERS SETTINGSFor training the R3NN, two hyperparameters that were crucial for stabilizing training were the useof hyperbolic tangent activation functions in both R3NN (other activations such as ReLU moreconsistently diverged during our initial experiments) and cross-correlation I/O encoders and the useof minibatches of length 8. Additionally, for all results, the program tree generation is conditionedon a set of 10 input/output string pairs. 
We used ADAM (Kingma & Ba, 2014) to optimize the networks with a learning rate of 0.001. Network weights used the default torch initializations.

Due to the difficulty of batching tree-based neural networks, since each sample in a batch has a potentially different tree structure, we needed to do batching sequentially. Therefore, for each mini-batch of size N, we accumulated the gradients for each sample. After all N sample gradients were accumulated, we updated the parameters and reset the accumulated gradients. Due to this sequential processing, in order to train models in a reasonable time, we limited our batch sizes to between 8-12. Despite the computational inefficiency, batching was critical to successfully train an R3NN, as online learning often caused the network to diverge.

For each latent function and set of input/output examples that we test on, we report whether we had a success after sampling 100 functions from the model and testing all 100 to see if one of these functions is equivalent to the latent function. Here we consider two functions to be equivalent with respect to a specific input/output example set if the functions output the same strings when run on the inputs. Under this definition, two functions can have a different set of operations but still be equivalent with respect to a specific input-output set.

We restricted the maximum size of training programs to be 13 because of two computational considerations. As described earlier, one difficulty was in batching tree-based neural networks of different structure, and the computational cost of batching increases with the size of the program trees. The second issue is that valid I/O strings for programs often grow with the program length, in the sense that for programs of length 40 a minimal valid I/O string will typically be much longer than a minimal valid I/O string for length 20 programs. For example, for a program such as (Concat (ConstStr "longstring") (Concat (ConstStr "longstring") (Concat (ConstStr "longstring") ...))), the valid output string would be "longstringlongstringlongstring..." which could be many hundreds of characters long. Because of limited GPU memory, the I/O encoder models can quickly run out of memory.

I/O Encoding            Train  Test
LSTM                    88%    88%
Cross Correlation (CC)  67%    65%
Diffused CC             89%    88%
LSTM-sum CC             90%    91%
Augmented diffused CC   91%    91%
Table 1: The effect of different input/output encoders on accuracy. Each result used 100 samples. There is almost no generalization error in the results.

Sampling  Train  Test
io2seq    44%    42%
Table 2: Testing the I/O-vector-to-sequence model. Each result used 100 samples.

6.2 EXAMPLE ENCODING

In this section, we evaluate the effect of several different input/output example encoders. To control for the effect of the tree model, all results here used an R3NN with fixed hyperparameters to generate the program tree. Table 1 shows the performance of several of these input/output example encoders. We can see that the summed cross-correlation encoder did not perform well, which can be due to the fact that the sum destroys positional information that might be useful for determining specific substring indices. The LSTM-sum and the augmented diffused cross-correlation models did the best. Surprisingly, the LSTM encoder was capable of finding nearly 88% of all programs without having any prior knowledge explicitly built into the architecture. We use 100 samples for evaluating the Train and Test sets.
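The evaluation protocol described above (sample 100 programs and count a success if any of them is functionally equivalent to the latent program on the given I/O set) can be sketched as follows; sample_program and run_program are hypothetical stand-ins for the trained model and a DSL interpreter, not names from the paper.

def is_equivalent(candidate, io_pairs, run_program):
    """Functional equivalence w.r.t. a specific I/O set: same outputs on all inputs."""
    return all(run_program(candidate, i) == o for i, o in io_pairs)

def solved(io_pairs, sample_program, run_program, num_samples=100):
    """Report a success if any of num_samples sampled programs matches the I/O set."""
    for _ in range(num_samples):
        candidate = sample_program(io_pairs)   # draw one program tree from the model
        try:
            if is_equivalent(candidate, io_pairs, run_program):
                return True
        except Exception:                      # a sampled program may not be well-formed on these inputs
            continue
    return False

def accuracy(tasks, sample_program, run_program, num_samples=100):
    """Fraction of tasks (each an I/O example set) solved under the sampling budget."""
    hits = sum(solved(io, sample_program, run_program, num_samples) for io in tasks)
    return hits / len(tasks)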
The training performance is sometimes slightly lower because there are close to 5 million training programs but we only look at less than 2 million of these programs during training. We sample a subset of only 1000 training programs from the 5 million program set to report the training results in the tables. The test sets also consist of 1000 programs.

6.3 IO2SEQ

In this section, we motivate the use of the R3NN by testing whether a simpler model can also be used to generate programs. The io2seq model is an LSTM whose initial hidden and cell states are a function of the input/output encoding vector. The io2seq model then generates a linearized tree of a program symbol-by-symbol. An example of what a linearized program tree looks like is (S(e(f(ConstStr "@") ConstStr )f)e)S, which represents the program tree that returns the constant string "@". Predicting a linearized tree using an LSTM was also done in the context of parsing (Vinyals et al., 2015). For the io2seq model, we used the LSTM-sum cross-correlation I/O conditioning model.

The results in Table 2 show that the performance of the io2seq model at 100 samples per latent test function is far worse than the R3NN, at around 42% versus 91%, respectively. The reason could be that the io2seq model needs to perform far more decisions than the R3NN, since the io2seq model has to predict the parentheses symbols that determine at which level of the tree a particular symbol sits. For example, the io2seq model requires on the order of 100 decisions for length 13 programs, while the R3NN requires no more than 13.

6.4 EFFECT OF SAMPLING MULTIPLE PROGRAMS

For the best R3NN model that we trained, we also evaluated the effect that a different number of samples per latent function had on performance. The results are shown in Table 3. The increase of the model's performance as the sample size increases hints that the model has a notion of what type of program satisfies a given I/O pair, but it might not be that certain about details such as which regex to use, etc. By 300 samples, the model is nearing perfect accuracy on the test sets.

Sampling    Train  Test
1-best      60%    63%
1-sample    56%    57%
10-sample   81%    79%
50-sample   91%    89%
100-sample  94%    94%
300-sample  97%    97%
Table 3: The effect of sampling multiple programs on accuracy. 1-best is deterministically choosing the expansion with highest probability at each step.

[Figure 4: The train and test accuracies for models trained with different numbers of input-output examples (accuracy vs. number of I/O examples used to train the encoder, Train and Test curves).]

6.5 EFFECT OF NUMBER OF INPUT-OUTPUT EXAMPLES

We evaluate the effect of varying the number of input-output examples used to train the input-output encoders. The 1-best accuracy for train and test data for models trained for 74 epochs is shown in Figure 4. As expected, the accuracy increases with the number of input-output examples, since more examples add more information to the encoder and constrain the space of consistent programs in the DSL.

6.6 FLASHFILL BENCHMARKS

We also evaluate our learnt models on 238 real-world FlashFill benchmarks obtained from the Microsoft Excel team and online help-forums. These benchmarks involve string manipulation tasks described using input-output examples. We evaluate two models – one with a cross correlation encoder trained on 5 input-output examples and another trained on 10 input-output examples.
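Returning briefly to the io2seq baseline above, the linearized program trees it is trained to emit can be produced by a depth-first traversal, as in this sketch; the tuple encoding of trees and the exact token order are our own simplification of the paper's format.

def linearize(tree):
    """Depth-first serialisation of a program tree into the bracketed token sequence
    an io2seq-style LSTM would predict symbol-by-symbol.
    A tree is (symbol, children) for rule nodes and a plain string for leaves."""
    if isinstance(tree, str):
        return [tree]
    symbol, children = tree
    tokens = ["(", symbol]
    for child in children:
        tokens += linearize(child)
    tokens += [")", symbol]
    return tokens

# A program tree returning the constant string "@" (simplified encoding).
tree = ("S", [("e", [("f", ['ConstStr "@"'])])])
print(" ".join(linearize(tree)))
# ( S ( e ( f ConstStr "@" ) f ) e ) S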
Boththe models were trained on randomly sampled programs from the DSL upto size 13 with randomlygenerated input-output examples.The distribution of the size of smallest DSL programs needed to solve the benchmark tasks is shownin Figure 5(a), which varies from 4 to 63. The figure also shows the number of benchmarks forwhich our model was able to learn the program using 5 input-output examples using samples oftop-2000 learnt programs. In total, the model is able to learn programs for 91 tasks (38.2%). Sincethe model was trained for programs upto size 13, it is not surprising that it is not able to solve tasksthat need larger program size. There are 110 FlashFill benchmarks that require programs upto size13, out of which the model is able to solve 82.7% of them.The effect of sampling multiple learnt programs instead of only top program is shown in Figure 5(b).With only 10 samples, the model can already learn about 13% of the benchmarks. We observea steady increase in performance upto about 2000 samples, after which we do not observe anysignificant improvement. Since there are more than 2 million programs in the DSL of length 11itself, the enumerative techniques with uniform search do not scale well (Alur et al., 2015).We also evaluate a model that is learnt with 10 input-output examples per benchmark. This modelcan only learn programs for about 29% of the FlashFill benchmarks. Since the FlashFill benchmarkscontained only 5 input-output examples for each task, to run the model that took 10 examples asinput, we duplicated the I/O examples. Our models are trained on the synthetic training dataset10Published as a conference paper at ICLR 2017051015202530354045504 7 9 10 11 13 15 17 19 24 25 27 30 31 37 50 59 63Number of BenchmarksSize of smallest programs for FlashFill BenchmarksNumber of FlashFill Benchmarks solvedTotal SolvedSampling Solved Benchmarks10 13%50 21%100 23%200 29%500 33%1000 34%2000 38%5000 38%(a) (b)Figure 5: (a) The distribution of size of programs needed to solve FlashFill tasks and the perfor-mance of our model, (b) The effect of sampling for trying top-k learnt programs.Inputv Output[CPT-00350 [CPT-00350][CPT-00340] [CPT-00340][CPT-114563] [CPT-114563][CPT-1AB02 [CPT-1AB02][CPT-00360 [CPT-00360]Inputv Output732606129 0x73430257526 0x43444004480 0x44371255254 0x37635272676 0x63Inputv OutputJohn Doyle John D.Matt Walters Matt W.Jody Foster Jody F.Angela Lindsay Angela L.Maria Schulte Maria S.(a) (b) (c)Figure 6: Some example solved benchmarks: (a) cleaning up medical codes with closing brackets,(b) generating Hex numbers with first two digits, (c) transforming names to firstname and last initial.that is generated uniformly from the DSL. Because of the discrepancy between the training datadistribution (uniform) and auxiliary task data distribution, the model with 10 input/output examplesmight not perform the best on the FlashFill benchmark distribution, even though it performs betteron the synthetic data distribution (on which it is trained) as shown in Figure 4.Our model is able to solve majority of FlashFill benchmarks that require learning programs withupto 3 Concat operations. We now describe a few of these benchmarks, also shown in Fig-ure 6. An Excel user wanted to clean a set of medical billing records by adding a missing “]”to medical codes as shown in Figure 6(a). Our system learns the following program given these5 input-output examples: Concat (SubStr (v,ConstPos (0),(d,-1,End)),ConstStr (“]”)). 
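In plain Python, that learnt program corresponds roughly to the following; this rendering is ours, and it assumes the Digit token d matches a maximal run of digits.

import re

def learnt_program(v):
    """Rough Python equivalent (our rendering) of
    Concat(SubStr(v, ConstPos(0), (d, -1, End)), ConstStr("]")):
    take the substring from the start of v up to the end of the last run of digits,
    then append the constant "]"."""
    last_digits = list(re.finditer(r"\d+", v))[-1]   # -1: last match of the Digit token
    return v[0:last_digits.end()] + "]"

for v in ["[CPT-00350", "[CPT-00340]", "[CPT-114563]", "[CPT-1AB02", "[CPT-00360"]:
    print(v, "->", learnt_program(v))
# e.g. [CPT-00350 -> [CPT-00350]   and   [CPT-00340] -> [CPT-00340]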
The pro-gram concatenates the substring between the start of the input string and the position of the lastdigit regular expression with the constant string “]”. Another task that required user to trans-form some numbers into a hex format is shown in Figure 6(b). Our system learns the followingprogram: Concat (ConstStr (“0x”), SubStr (v,ConstPos (0),ConstPos(2))). For some benchmarkswith long input strings, it is still able to learn regular expressions to extract the desired sub-string, e.g. it learns a program to extract “NancyF” from the string “123456789,freehafer ,drew,nancy,19700101,11/1/2007,NancyF@north.com,1230102,123 1st Avenue,Seattle,wa,09999”.Our system is currently not able to learn programs for benchmarks that require 4 or more Con-catoperations. Two such benchmarks are shown in Figure 7. The task of combining names inFigure 7(a) requires 6 Concat arguments, whereas the phone number transformation task in Fig-ure 7(b) requires 5 Concat arguments. This is mainly because of the scalability issues in trainingwith programs of larger size. There are also a few interesting benchmarks where the R3NN modelsgets very close to learning the desired program. For example, for the task “ Bill Gates ”!“Mr.Bill Gates ”, it learns a program that generates “ Mr.Bill Gates ” (without the whitespace), and forthe task “617-444-5454” !“(617) 444-5454”, it learns a program that generates the string “(617444-5454”.11Published as a conference paper at ICLR 2017Inputv Output1 John James Paul John, James, and Paul.2 Tom Mike Bill Tom, Mike, and Bill.3 Marie Nina John Marie, Nina, and John.4Reggie Anna Adam Reggie, Anna, and Adam.Inputv Output1(425) 221 6767 425-221-67672 206.225.1298 206-225-12983 617-224-9874 617-224-98744 425.118.9281 425-118-9281(a) (b)Figure 7: Some unsolved benchmarks: (a)Combining names by different delimiters. (b) Transform-ing phone numbers to consistent format.7 R ELATED WORKWe have seen a renewed interest in recent years in the area of Program Induction and Synthesis.In the machine learning community, a number of promising neural architectures have been pro-posed to perform program induction . These methods have employed architectures inspired fromcomputation modules (Turing Machines, RAM) (Graves et al., 2014; Kurach et al., 2015; Reed &de Freitas, 2015; Neelakantan et al., 2015) or common data structures such as stacks used in manyalgorithms (Joulin & Mikolov, 2015). These approaches represent the atomic operations of the net-work in a differentiable form, which allows for efficient end-to-end training of a neural controller.However, unlike our approach that learns comprehensible complete programs, many of these ap-proaches learn only the program behavior ( i.e., they produce desired outputs on new input data).Some recently proposed methods (Kurach et al., 2015; Gaunt et al., 2016; Riedel et al., 2016; Bunelet al., 2016) do learn interpretable programs but these techniques require learning a separate neuralnetwork model for each individual task, which is undesirable in many synthesis settings where wewould like to learn programs in real-time for a large number of tasks. Liang et al. (2010) restrictthe problem space with a probabilistic context-free grammar and introduce a new representationof programs based on combinatory logic, which allows for sharing sub-programs across multipletasks. They then take a hierarchical Bayesian approach to learn frequently occurring substructuresof programs. 
Our approach, instead, uses neural architectures to condition the search space of pro-grams, and does not require additional step of representing program space using combinatory logicfor allowing sharing.The DSL-based program synthesis approach has also seen a renewed interest recently (Alur et al.,2015). It has been used for many applications including synthesizing low-level bitvector implemen-tations (Solar-Lezama et al., 2005), Excel macros for data manipulation (Gulwani, 2011; Gulwaniet al., 2012), superoptimization by finding smaller equivalent loop bodies (Schkufza et al., 2013),protocol synthesis from scenarios (Udupa et al., 2013), synthesis of loop-free programs (Gulwaniet al., 2011), and automated feedback generation for programming assignments (Singh et al., 2013).The synthesis techniques proposed in the literature generally employ various search techniques in-cluding enumeration with pruning, symbolic constraint solving, and stochastic search, while sup-porting different forms of specifications including input-output examples, partial programs, programinvariants, and reference implementation.In this paper, we consider input-output example based specification over the hypothesis space de-fined by a DSL of string transformations, similar to that of FlashFill (without conditionals) (Gul-wani, 2011). The key difference between our approach over previous techniques is that our systemis trained completely in an end-to-end fashion, while previous techniques require significant manualeffort to design heuristics for efficient search. There is some work on guiding the program search us-ing learnt clues that suggest likely DSL expansions, but the clues are learnt over hand-coded textualfeatures of examples (Menon et al., 2013). Moreover, their DSL consists of composition of about100 high-level text transformation functions such as count anddedup , whereas our DSL consists oftree structured programs over richer regular expression based substring constructs.There is also a recent line of work on learning probabilistic models of code from a large number ofcode repositories ( big code ) (Raychev et al., 2015; Bielik et al., 2016; Hindle et al., 2016), whichare then used for applications such as auto-completion of partial programs, inference of variableand method names, program repair, etc. These language models typically capture only the syntactic12Published as a conference paper at ICLR 2017properties of code, unlike our approach that also tries to capture the semantics to learn the desiredprogram. The work by Maddison & Tarlow (2014) addresses the problem of learning structuredgenerative models of source code but both their model and application domain are different fromours. Piech et al. (2015) use an NPM-RNN model to embed program ASTs, where a subtree ofthe AST rooted at a node n is represented by a matrix obtained by combining representations ofthe children of node n and the embedding matrix of the node n itself (which corresponds to itsfunctional behavior). The forward pass in our R3NN architecture from leaf nodes to the root nodeis, at a high-level, similar, but we use a distributed representation for each grammar symbol thatleads to a different root representation. Moreover, R3NN also performs a reverse-recursive pass toensure all nodes in the tree encode global information about other nodes in the tree. 
Finally, theR3NN network is then used to incrementally build a tree to synthesize a program.The R3NN model employed in our work is related to several tree and graph structured neural net-works present in the NLP literature (Le & Zuidema, 2014; Paulus et al., 2014; Irsoy & Cardie, 2013).The Inside-Outside Recursive Neural Network (Le & Zuidema, 2014) in particular is most similar tothe R3NN, where they generate a parse tree incrementally by using global leaf-level representationsto determine which expansions in the parse tree to take next.8 C ONCLUSIONWe have proposed a novel technique called Neuro-Symbolic Program Synthesis that is able to con-struct a program incrementally based on given input-output examples. To do so, a new neuralarchitecture called Recursive-Reverse-Recursive Neural Network is used to encode and expand apartial program tree into a full program tree. Its effectiveness at example-based program synthesisis demonstrated, even when the program has not been seen during training.These promising results open up a number of interesting directions for future research. For example,we took a supervised-learning approach here, assuming availability of target programs during train-ing. In some scenarios, we may only have access to an oracle that returns the desired output givenan input. In this case, reinforcement learning is a promising framework for program synthesis.REFERENCESAlur, Rajeev, Bod ́ık, Rastislav, Dallal, Eric, Fisman, Dana, Garg, Pranav, Juniwal, Garvit, Kress-Gazit, Hadas, Madhusudan, P., Martin, Milo M. K., Raghothaman, Mukund, Saha, Shamwaditya,Seshia, Sanjit A., Singh, Rishabh, Solar-Lezama, Armando, Torlak, Emina, and Udupa, Ab-hishek. Syntax-guided synthesis. In Dependable Software Systems Engineering , pp. 1–25. 2015.Bielik, Pavol, Raychev, Veselin, and Vechev, Martin T. PHOG: probabilistic model for code. InICML , pp. 2933–2942, 2016.Biermann, Alan W. The inference of regular lisp programs from examples. IEEE transactions onSystems, Man, and Cybernetics , 8(8):585–600, 1978.Bunel, Rudy, Desmaison, Alban, Kohli, Pushmeet, Torr, Philip H. S., and Kumar, M. Pawan. Adap-tive neural compilation. CoRR , abs/1605.07969, 2016. URL http://arxiv.org/abs/1605.07969 .Gaunt, Alexander L, Brockschmidt, Marc, Singh, Rishabh, Kushman, Nate, Kohli, Pushmeet, Tay-lor, Jonathan, and Tarlow, Daniel. Terpret: A probabilistic programming language for programinduction. arXiv preprint arXiv:1608.04428 , 2016.Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural turing machines. arXiv preprintarXiv:1410.5401 , 2014.Gulwani, Sumit. Automating string processing in spreadsheets using input-output examples. InPOPL , pp. 317–330, 2011.Gulwani, Sumit, Jha, Susmit, Tiwari, Ashish, and Venkatesan, Ramarathnam. Synthesis of loop-freeprograms. In PLDI , pp. 62–73, 2011.Gulwani, Sumit, Harris, William, and Singh, Rishabh. Spreadsheet data manipulation using exam-ples. Communications of the ACM , Aug 2012.13Published as a conference paper at ICLR 2017Hindle, Abram, Barr, Earl T., Gabel, Mark, Su, Zhendong, and Devanbu, Premkumar T. On thenaturalness of software. Commun. ACM , 59(5):122–131, 2016.Irsoy, Orzan and Cardie, Claire. Bidirectional recursive neural networks for token-level labelingwith structure. In NIPS Deep Learning Workshop , 2013.Joulin, Armand and Mikolov, Tomas. Inferring algorithmic patterns with stack-augmented recurrentnets. In NIPS , pp. 190–198, 2015.Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. 
In ICLR , 2014.Kurach, Karol, Andrychowicz, Marcin, and Sutskever, Ilya. Neural random-access machines. arXivpreprint arXiv:1511.06392 , 2015.Le, Phong and Zuidema, Willem. The inside-outside recursive neural network model for dependencyparsing. In EMNLP , pp. 729–739, 2014.Liang, Percy, Jordan, Michael I., and Klein, Dan. Learning programs: A hierarchical Bayesianapproach. In ICML , pp. 639–646, 2010.Maddison, Chris J and Tarlow, Daniel. Structured generative models of natural source code. InICML , pp. 649–657, 2014.Menon, Aditya Krishna, Tamuz, Omer, Gulwani, Sumit, Lampson, Butler W., and Kalai, Adam. Amachine learning framework for programming by example. In ICML , pp. 187–195, 2013.Neelakantan, Arvind, Le, Quoc V , and Sutskever, Ilya. Neural programmer: Inducing latent pro-grams with gradient descent. arXiv preprint arXiv:1511.04834 , 2015.Paulus, Romain, Socher, Richard, and Manning, Christopher D. Global belief recursive neuralnetworks. pp. 2888–2896, 2014.Piech, Chris, Huang, Jonathan, Nguyen, Andy, Phulsuksombati, Mike, Sahami, Mehran, andGuibas, Leonidas J. Learning program embeddings to propagate feedback on student code. InICML , pp. 1093–1102, 2015.Raychev, Veselin, Vechev, Martin T., and Krause, Andreas. Predicting program properties from ”bigcode”. In POPL , pp. 111–124, 2015.Reed, Scott and de Freitas, Nando. Neural programmer-interpreters. arXiv preprintarXiv:1511.06279 , 2015.Riedel, Sebastian, Bosnjak, Matko, and Rockt ̈aschel, Tim. Programming with a differentiable forthinterpreter. CoRR , abs/1605.06640, 2016. URL http://arxiv.org/abs/1605.06640 .Schkufza, Eric, Sharma, Rahul, and Aiken, Alex. Stochastic superoptimization. In ASPLOS , pp.305–316, 2013.Singh, Rishabh and Solar-Lezama, Armando. Synthesizing data structure manipulations from sto-ryboards. In SIGSOFT FSE , pp. 289–299, 2011.Singh, Rishabh, Gulwani, Sumit, and Solar-Lezama, Armando. Automated feedback generation forintroductory programming assignments. In PLDI , pp. 15–26, 2013.Solar-Lezama, Armando. Program Synthesis By Sketching . PhD thesis, EECS Dept., UC Berkeley,2008.Solar-Lezama, Armando, Rabbah, Rodric, Bodik, Rastislav, and Ebcioglu, Kemal. Programming bysketching for bit-streaming programs. In PLDI , 2005.Summers, Phillip D. A methodology for lisp program construction from examples. Journal of theACM (JACM) , 24(1):161–175, 1977.Udupa, Abhishek, Raghavan, Arun, Deshmukh, Jyotirmoy V ., Mador-Haim, Sela, Martin, MiloM. K., and Alur, Rajeev. TRANSIT: specifying protocols with concolic snippets. In PLDI , pp.287–296, 2013.14Published as a conference paper at ICLR 2017JConcat(f1;;fn)Kv= Concat( Jf1Kv;;JfnKv)JConstStr(s)Kv=sJSubStr(v;pl;pr)Kv=v[JplKv::JprKv]JConstPos(k)Kv=k>0?k: len(s) +kJ(r;k;Start) Kv=Start ofkthmatch of r in vfrom beginning (end if k<0)J(r;k;End) Kv=End ofkthmatch of r in vfrom beginning (end if k<0)Figure 8: The semantics of the DSL for string transformations.Figure 9: The cross correlation encoder to encode a single input-output example.Vinyals, Oriol, Kaiser, Lukasz, Koo, Terry, Petrov, Slav, Sutskever, Ilya, and Hinton, Geoffrey.Grammar as a foreign language. In ICLR , 2015.A D OMAIN -SPECIFIC LANGUAGE FOR STRING TRANSFORMATIONSThe semantics of the DSL programs is shown in Figure 8. The semantics of a Concat expressionis to concatenate the results of recursively evaluating the constituent substring expressions fi. Thesemantics of ConstStr(s) is to simply return the constant string s. 
The semantics of a substring expression is to first evaluate the two position logics p_l and p_r to p1 and p2 respectively, and then return the substring corresponding to v[p1..p2]. We use s[i..j] to denote the substring of string s starting at index i (inclusive) and ending at index j (exclusive), and len(s) denotes its length. The semantics of the ConstPos(k) expression is to return k if k > 0, or len(v) + k if k < 0. The semantics of the position logic (r, k, Start) is to return the Start of the kth match of r in v from the beginning (if k > 0) or from the end (if k < 0).
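These semantics can be turned into a small interpreter. The sketch below is our own illustration, not the paper's code: the tuple encoding of programs, the token-to-regex table, and reading ConstPos(0) as the start of the string are all assumptions. It is shown on the hex-prefix program Concat(ConstStr("0x"), SubStr(v, ConstPos(0), ConstPos(2))) from Figure 6(b).

import re

TOKEN_REGEX = {"d": r"\d+", "C": r"[A-Z]+", "l": r"[a-z]+", "ws": r"\s+"}  # subset of the 8 tokens

def eval_pos(p, v):
    """Position logic -> index into v, following the semantics above."""
    if p[0] == "ConstPos":
        k = p[1]
        return k if k >= 0 else len(v) + k
    _, r, k, side = p                                   # ("Tok", token, k, "Start" | "End")
    ms = list(re.finditer(TOKEN_REGEX.get(r, re.escape(r)), v))
    m = ms[k - 1] if k > 0 else ms[k]                   # kth match from the start, or from the end
    return m.start() if side == "Start" else m.end()

def run(prog, v):
    """Top-level expression: a list of substring atoms whose results are concatenated."""
    out = []
    for f in prog:
        if f[0] == "ConstStr":
            out.append(f[1])
        else:                                           # ("SubStr", p_left, p_right)
            out.append(v[eval_pos(f[1], v): eval_pos(f[2], v)])
    return "".join(out)

hex_prog = [("ConstStr", "0x"), ("SubStr", ("ConstPos", 0), ("ConstPos", 2))]
print(run(hex_prog, "732606129"))   # 0x73

Running it on "732606129" prints "0x73", matching the output column of Figure 6(b).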
SJk8FtWVx
rJ0JwFcex
ICLR.cc/2017/conference/-/paper498/official/review
{"title": "Nice program synthesis approach to a practical Excel flash-fill like application", "rating": "8: Top 50% of accepted papers, clear accept", "review": "This paper proposes a model that is able to infer a program from input/output example pairs, focusing on a restricted domain-specific language that captures a fairly wide variety of string transformations, similar to that used by Flash Fill in Excel. The approach is to model successive \u201cextensions\u201d of a program tree conditioned on some embedding of the input/output pairs. Extension probabilities are computed as a function of leaf and production rule embeddings \u2014 one of the main contributions is the so-called \u201cRecursive-Reverse-Recursive Neural Net\u201d which computes a globally aware embedding of a leaf by doing something that looks like belief propagation on a tree (but training this operation in an end-to-end differentiable way).\n\nThere are many strong points about this paper. In contrast with some of the related work in the deep learning community, I can imagine this being used in an actual application in the near future. The R3NN idea is a good one and the authors motivate it quite well. Moreover, the authors have explored many variants of this model to understand what works well and what does not. Finally, the exposition is clear (even if it is a long paper), which made this paper a pleasure to read. Some weaknesses of this paper: the results are still not super accurate, perhaps because the model has only been trained on small programs but is being asked to infer programs that should be much longer. And it\u2019s unclear why the authors did not simply train on longer programs\u2026 It also seems that the number of I/O pairs is fixed? So if I had more I/O pairs, the model might not be able to use those additional pairs (and based on the experiments, more pairs can hurt\u2026). Overall however, I would certainly like to see this paper accepted at ICLR.\n\nOther miscellaneous comments:\n* Too many e\u2019s in the expansion probability expression \u2014 might be better just to write \u201cSoftmax\u201d.\n* There is a comment about adding a bidirectional LSTM to process the global leaf representations before calculating scores, but no details are given on how this is done (as far as I can see).\n* The authors claim that using hyperbolic tangent activation functions is important \u2014 I\u2019d be interested in some more discussion on this and why something like ReLU would not be good.\n* It\u2019s unclear to me how batching was done in this setting since each program has a different tree topology. More discussion on this would be appreciated. Related to this, it would be good to add details on optimization algorithm (SGD? Adagrad? Adam?), learning rate schedules and how weights were initialized. At the moment, the results are not particularly reproducible.\n* In Figure 6 (unsolved benchmarks), it would be great to add the program sizes for these harder examples (i.e., did the approach fail because these benchmarks require long programs? Or was it some other reason?)\n* There is a missing related work by Piech et al (Learning Program Embeddings\u2026) where the authors trained a recursive neural network (that matched abstract syntax trees for programs submitted to an online course) to predict program output (but did not synthesize programs).\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Neuro-Symbolic Program Synthesis
["Emilio Parisotto", "Abdel-rahman Mohamed", "Rishabh Singh", "Lihong Li", "Dengyong Zhou", "Pushmeet Kohli"]
Recent years have seen the proposal of a number of neural architectures for the problem of Program Induction. Given a set of input-output examples, these architectures are able to learn mappings that generalize to new test inputs. While achieving impressive results, these approaches have a number of important limitations: (a) they are computationally expensive and hard to train, (b) a model has to be trained for each task (program) separately, and (c) it is hard to interpret or verify the correctness of the learnt mapping (as it is defined by a neural network). In this paper, we propose a novel technique, Neuro-Symbolic Program Synthesis, to overcome the above-mentioned problems. Once trained, our approach can automatically construct computer programs in a domain-specific language that are consistent with a set of input-output examples provided at test time. Our method is based on two novel neural modules. The first module, called the cross correlation I/O network, given a set of input-output examples, produces a continuous representation of the set of I/O examples. The second module, the Recursive-Reverse-Recursive Neural Network (R3NN), given the continuous representation of the examples, synthesizes a program by incrementally expanding partial programs. We demonstrate the effectiveness of our approach by applying it to the rich and complex domain of regular expression based string transformations. Experiments show that the R3NN model is not only able to construct programs from new input-output examples, but it is also able to construct new programs for tasks that it had never observed before during training.
["Deep learning", "Structured prediction"]
https://openreview.net/forum?id=rJ0JwFcex
https://openreview.net/pdf?id=rJ0JwFcex
https://openreview.net/forum?id=rJ0JwFcex&noteId=SJk8FtWVx
Published as a conference paper at ICLR 2017NEURO -SYMBOLIC PROGRAM SYNTHESISEmilio Parisotto1;2, Abdel-rahman Mohamed1, Rishabh Singh1,Lihong Li1, Dengyong Zhou1, Pushmeet Kohli11Microsoft Research, USA2Carnegie Mellon University, USAeparisot@andrew.cmu.edu , fasamir,risin,lihongli,denzho,pkohli g@microsoft.comABSTRACTRecent years have seen the proposal of a number of neural architectures for theproblem of Program Induction. Given a set of input-output examples, these ar-chitectures are able to learn mappings that generalize to new test inputs. Whileachieving impressive results, these approaches have a number of important limi-tations: (a) they are computationally expensive and hard to train, (b) a model hasto be trained for each task (program) separately, and (c) it is hard to interpret orverify the correctness of the learnt mapping (as it is defined by a neural network).In this paper, we propose a novel technique, Neuro-Symbolic Program Synthesis ,to overcome the above-mentioned problems. Once trained, our approach can au-tomatically construct computer programs in a domain-specific language that areconsistent with a set of input-output examples provided at test time. Our methodis based on two novel neural modules. The first module, called the cross corre-lation I/O network, given a set of input-output examples, produces a continuousrepresentation of the set of I/O examples. The second module, the Recursive-Reverse-Recursive Neural Network (R3NN), given the continuous representationof the examples, synthesizes a program by incrementally expanding partial pro-grams. We demonstrate the effectiveness of our approach by applying it to therich and complex domain of regular expression based string transformations. Ex-periments show that the R3NN model is not only able to construct programs fromnew input-output examples, but it is also able to construct new programs for tasksthat it had never observed before during training.1 I NTRODUCTIONThe act of programming, i.e., developing a procedure to accomplish a task, is a remarkable demon-stration of the reasoning abilities of the human mind. Expectedly, Program Induction is consideredas one of the fundamental problems in Machine Learning and Artificial Intelligence. Recent progresson deep learning has led to the proposal of a number of promising neural architectures for this prob-lem. Many of these models are inspired from computation modules (CPU, RAM, GPU) (Graveset al., 2014; Kurach et al., 2015; Reed & de Freitas, 2015; Neelakantan et al., 2015) or commondata structures used in many algorithms (stack) (Joulin & Mikolov, 2015). A common thread in thisline of work is to specify the atomic operations of the network in some differentiable form, allowingefficient end-to-end training of a neural controller, or to use reinforcement learning to make hardchoices about which operation to perform. While these results are impressive, these approacheshave a number of important limitations: (a) they are computationally expensive and hard to train, (b)a model has to be trained for each task (program) separately, and (c) it is hard to interpret or verifythe correctness of the learnt mapping (as it is defined by a neural network). 
While some recentlyproposed methods (Kurach et al., 2015; Gaunt et al., 2016; Riedel et al., 2016; Bunel et al., 2016)do learn interpretable programs, they still need to learn a separate neural network model for eachindividual task.Motivated by the need for model interpretability and scalability to multiple tasks, we address theproblem of Program Synthesis . Program Synthesis, the problem of automatically constructing pro-grams that are consistent with a given specification, has long been a subject of research in ComputerScience (Biermann, 1978; Summers, 1977). This interest has been reinvigorated in recent years on1Published as a conference paper at ICLR 2017the back of the development of methods for learning programs in various domains, ranging fromlow-level bit manipulation code (Solar-Lezama et al., 2005) to data structure manipulations (Singh& Solar-Lezama, 2011) and regular expression based string transformations (Gulwani, 2011).Most of the recently proposed methods for program synthesis operate by searching the space ofprograms in a Domain-Specific Language (DSL) instead of arbitrary Turing-complete languages.This hypothesis space of possible programs is huge (potentially infinite) and searching over it is achallenging problem. Several search techniques including enumerative (Udupa et al., 2013), stochas-tic (Schkufza et al., 2013), constraint-based (Solar-Lezama, 2008), and version-space algebra basedalgorithms (Gulwani et al., 2012) have been developed to search over the space of programs in theDSL, which support different kinds of specifications (examples, partial programs, natural languageetc.) and domains. These techniques not only require significant engineering and research effort todevelop carefully-designed heuristics for efficient search, but also have limited applicability and canonly synthesize programs of limited sizes and types.In this paper, we present a novel technique called Neuro-Symbolic Program Synthesis (NSPS) thatlearns to generate a program incrementally without the need for an explicit search. Once trained,NSPS can automatically construct computer programs that are consistent with any set of input-outputexamples provided at test time. Our method is based on two novel module neural architectures . Thefirst module, called the cross correlation I/O network, produces a continuous representation of anygiven set of input-output examples. The second module, the Recursive-Reverse-Recursive NeuralNetwork (R3NN), given the continuous representation of the input-output examples, synthesizes aprogram by incrementally expanding partial programs. R3NN employs a tree-based neural archi-tecture that sequentially constructs a parse tree by selecting which non-terminal symbol to expandusing rules from a context-free grammar ( i.e., the DSL).We demonstrate the efficacy of our method by applying it to the rich and complex domain of regular-expression-based syntactic string transformations, using a DSL based on the one used by Flash-Fill (Gulwani, 2011; Gulwani et al., 2012), a Programming-By-Example (PBE) system in MicrosoftExcel 2013. Given a few input-output examples of strings, the task is to synthesize a program builton regular expressions to perform the desired string transformation. 
An example task that can beexpressed in this DSL is shown in Figure 1, which also shows the DSL.Our evaluation shows that NSPS is not only able to construct programs for known tasks from newinput-output examples, but it is also able to construct completely new programs that it had not ob-served during training. Specifically, the proposed system is able to synthesize string transformationprograms for 63% of tasks that it had not observed at training time, and for 94% of tasks when100 program samples are taken from the model. Moreover, our system is able to learn 38% of 238real-world FlashFill benchmarks.To summarize, the key contributions of our work are:A novel Neuro-Symbolic program synthesis technique to encode neural search over thespace of programs defined using a Domain-Specific Language (DSL).The R3NN model that encodes and expands partial programs in the DSL, where each nodehas a global representation of the program tree.A novel cross-correlation based neural architecture for learning continuous representationof sets of input-output examples.Evaluation of the NSPS approach on the complex domain of regular expression based stringtransformations.2 P ROBLEM DEFINITIONIn this section, we formally define the DSL-based program synthesis problem that we consider in thispaper. Given a DSL L, we want to automatically construct a synthesis algorithm Asuch that givena set of input-output example, f(i1;o1);;(in;on)g,Areturns a program P2Lthat conformsto the input-output examples, i.e.,8j: 1jnP(ij) =oj: (1)2Published as a conference paper at ICLR 2017Inputv Output1William Henry Charles Charles, W.2 Michael Johnson Johnson, M.3 Barack Rogers Rogers, B.4 Martha D. Saunders Saunders, M.5 Peter T Gates Gates, P.Stringe:= Concat( f1;;fn)Substringf:= ConstStr( s)jSubStr(v;pl;pr)Positionp:= (r;k;Dir)jConstPos(k)Direction Dir := StartjEndRegexr:=sjT1jTn(a) (b)Figure 1: An example FlashFill task for transforming names to lastname with initials of first name,and (b) The DSL for regular expression based string transformations.The syntax and semantics of the DSL for string transformations is shown in Figure 1(b) and Figure 8respectively. The DSL corresponds to a large subset of FlashFill DSL (except conditionals), andallows for a richer class of substring operations than FlashFill. A DSL program takes as input astringvand returns an output string o. The top-level string expression eis a concatenation of afinite list of substring expressions f1;;fn. A substring expression fcan either be a constantstringsor a substring expression, which is defined using two position logics pl(left) andpr(right).A position logic corresponds to a symbolic expression that evaluates to an index in the string. Aposition logic pcan either be a constant position kor a token match expression (r;k;Dir), whichdenotes the Start orEnd of thekthmatch of token rin input string v. A regex token can either be aconstant string sor one of 8 regular expression tokens: p(ProperCase), C(CAPS),l(lowercase), d(Digits),(Alphabets), n(Alphanumeric),^(StartOfString), and $ (EndOfString). The semanticsof the DSL programs is described in the appendix.A DSL program for the name transformation task shown in Figure 1(a) that is con-sistent with the examples is: Concat (f1;ConstStr(\, ") ;f2;ConstStr(\.") ), wheref1SubStr(v;(\ ";1;End);ConstPos(1))andf2SubStr(v;ConstPos(0) ;ConstPos(1)) . 
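In plain Python, the intended transformation (and, up to the exact position arguments, the DSL program above) is roughly the following; this rendering is ours, not the paper's.

def to_lastname_initial(v):
    """'William Henry Charles' -> 'Charles, W.': the substring after the last whitespace,
    a constant ', ', the first character of the input, and a constant '.'."""
    last = v[v.rfind(" ") + 1:]     # between the end of the last whitespace and the end of the string
    return last + ", " + v[0] + "."

assert to_lastname_initial("William Henry Charles") == "Charles, W."
assert to_lastname_initial("Barack Rogers") == "Rogers, B."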
Theprogram concatenates the following 4 strings: i) substring between the end of last whitespace andend of string, ii) constant string “, ”, iii) first character of input string, and iv) constant string “.”.3 O VERVIEW OF OUR APPROACHWe now present an overview of our approach. Given a DSL L, we learn a generative model ofprograms in the DSL Lthat is conditioned on input-output examples to efficiently search for con-sistent programs. The workflow of our system is shown in Figure 2, which is trained end-to-endusing a large training set of programs in the DSL together with their corresponding input-outputexamples. To generate a large training set, we uniformly sample programs from the DSL and thenuse a rule-based strategy to compute well-formed input strings. Given a program P (sampled fromthe DSL), the rule-based strategy generates input strings for the program P ensuring that the pre-conditions of P are met (i.e. P doesn’t throw an exception on the input strings). It collects thepre-conditions of all Substring expressions present in the sampled program P and then generatesinputs conforming to them. For example, let’s assume the sampled program is SubStr (v,(CAPS , 2,Start ), (“ ”, 3, Start )), which extracts the substring between the start of 2ndcapital letter and startof3rdwhitespace. The rule-based strategy would ensure that all the generated input strings consistof at least 2 capital letters and 3 whitespaces in addition to other randomly generated characters.The corresponding output strings are obtained by running the programs on the input strings.A DSL can be considered as a context-free grammar with a start symbol Sand a set of non-terminalswith corresponding expansion rules. The (partial) grammar derivations or trees correspond to (par-tial) programs. A na ̈ıve way to perform a search over the programs in a DSL is to start from the startsymbolSand then randomly choose non-terminals to expand with randomly chosen expansion rulesuntil reaching a derivation with only terminals. We, instead, learn a generative model over partialderivations in the DSL that assigns probabilities to different non-terminals in a partial derivation andcorresponding expansions to guide the search for complete derivations.3Published as a conference paper at ICLR 2017R3NNDSLR3NNI/O EncoderR3NN...DSLDSLProgram SamplerDSLInput Gen Rulesi1–o1i2–o2...ik–ok{p1i1–o1i2–o2...ik–ok{pji1–o1i2–o2...ik–ok{pn...pj,0pj,1pj,2pj...R3NNDSLR3NNI/O EncoderR3NN...DSLDSLLearnt programi1–o1i2–o2...ik–ok(a) Training Phase (b) Test PhaseFigure 2: An overview of the training and test workflow of our synthesis appraoch.Our generative model uses a Recursive-Reverse-Recursive Neural Network (R3NN) to encode par-tial trees (derivations) in L, where each node in the partial tree encodes global information aboutevery other node in the tree. The model assigns a vector representation for every symbol and everyexpansion rule in the grammar. Given a partial tree, the model first assigns a vector representationto each leaf node, and then performs a recursive pass going up in the tree to assign a global treerepresentation to the root. It then performs a reverse-recursive pass starting from the root to assigna global tree representation to each node in the tree.The generative process is conditioned on a set of input-output examples to learn a program that isconsistent with this set of examples. 
We experiment with multiple input-output encoders includingan LSTM encoder that concatenates the hidden vectors of two deep bidirectional LSTM networksfor input and output strings in the examples, and a Cross Correlation encoder that computes the crosscorrelation between the LSTM tensor representations of input and output strings in the examples.This vector is then used as an additional input in the R3NN model to condition the generative model.4 T REE-STRUCTURED GENERATION MODELWe define a program t-steps into construction as a partial program tree (PPT) (see Figure 3 for avisual depiction). A PPT has two types of nodes: leaf (symbol) nodes and inner non-leaf (rule)nodes. A leaf node represents a symbol, whether non-terminal or terminal. An inner non-leaf noderepresents a particular production rule of the DSL, where the number of children of the non-leafnode is equivalent to the arity of the RHS of the rule it represents. A PPT is called a program tree(PT) whenever all the leaves of the tree are terminal symbols. Such a tree represents a completedprogram under the DSL and can be executed. We define an expansion as the valid application ofa specific production rule (e !e op2 e) to a specific non-terminal leaf node within a PPT (leafwith symbol e). We refer to the specific production rule that an expansion is derived from as theexpansion type. It can be seen that if there exist two leaf nodes ( l1andl2) with the same symbolthen for every expansion specific to l1there exists an expansion specific to l2with the same type.4.1 R ECURSIVE -REVERSE -RECURSIVE NEURAL NETWORKIn order to define a generation model over PPTs, we need an efficient way of assigning probabilitiesto every valid expansion in the current PPT. A valid expansion has two components: first the pro-duction rule used, and second the position of the expanded leaf node relative to every other node inthe tree. To account for the first component, a separate distributed representation for each produc-tion rule is maintained. The second component is handled using an architecture where the forwardpropagation resembles belief propagation on trees, allowing a notion of global tree state at everynode within the tree. A given expansion probability is then calculated as being proportional to theinner product between the production rule representation and the global-tree representation of theleaf-level non-terminal node. We now describe the design of this architecture in more detail.The R3NN has the following parameters for the grammar described by a DSL (see Figure 3):1. For every symbol s2S, anMdimensional representation (s)2RM.2. For every production rule r2R, anMdimensional representation !(r)2RM.4Published as a conference paper at ICLR 2017(a) Recursive pass (b) Reverse-Recursive passFigure 3: (a) The initial recursive pass of the R3NN. (b) The reverse-recursive pass of the R3NNwhere the input is the output of the previous recursive pass.3. For every production rule r2R, a deep neural network frwhich takes as input a vectorx2RQM, withQbeing the number of symbols on the RHS of the production rule r,and outputs a vector y2RM. Therefore, the production-rule network frtakes as input aconcatenation of the distributed representations of each of its RHS symbols and producesa distributed representation for the LHS symbol.4. For every production rule r2R, an additional deep neural network grwhich takes asinput a vector x02RMand outputs a vector y02RQM. 
We can think of gras a reverseproduction-rule network that takes as input a vector representation of the LHS and producesa concatenation of the distributed representations of each of the rule’s RHS symbols.LetEbe the set of all valid expansions in a PPT T, letLbe the current leaf nodes of TandNbethe current non-leaf (rule) nodes of T. LetS(l)be the symbol of leaf l2LandR(n)represent theproduction rule of non-leaf node n2N.4.1.1 G LOBAL TREE INFORMATION AT THE LEAVESTo compute the probability distribution over the set E, the R3NN first computes a distributed rep-resentation for each leaf node that contains global tree information. To accomplish this, for everyleaf nodel2Lin the tree we retrieve its distributed representation (S(l)). We now do a standardrecursive bottom-to-top, RHS !LHS pass on the network, by going up the tree and applying fR(n)for every non-leaf node n2Non its RHS node representations (see Figure 3(a)). These networksfR(n)produce a node representation which is input into the parent’s rule network and so on until wereach the root node.Once at the root node, we effectively have a fixed-dimensionality global tree representation (root)for the start symbol. The problem is that this representation has lost any notion of tree position. Tosolve this problem R3NN now does what is effectively a reverse-recursive pass which starts at theroot node with (root)as input and moves towards the leaf nodes (see Figure 3(b)).More concretely, we start with the root node representation (root)and use that as input into therule network gR(root)whereR(root)is the production rule that is applied to the start symbol inT. This produces a representation 0(c)for each RHS node cofR(root). Ifcis a non-leaf node,we iteratively apply this procedure to c,i.e., process0(c)usinggR(c)to get representations 0(cc)for every RHS node ccofR(c), etc. Ifcis a leaf node, we now have a leaf representation 0(c)which has an information path to (root)and thus to every other leaf node in the tree. Once thereverse-recursive process is complete, we now have a distributed representation 0(l)for every leafnodelwhich contains global tree information. While (l1)and(l2)could be equal for leaf nodeswhich have the same symbol type, 0(l1)and0(l2)will not be equal even if they have the samesymbol type because they are at different positions in the tree.5Published as a conference paper at ICLR 20174.1.2 E XPANSION PROBABILITIESGiven the global leaf representations 0(l), we can now straightforwardly acquire scores for eache2E. For expansion e, lete:rbe the expansion type (production rule r2Rthateapplies) andlete:lbe the leaf node lthate:ris applied to. ze=0(e:l)!(e:r)The score of an expansion iscalculated using ze=0(e:l)!(e:r). The probability of expansion eis simply the exponentiatednormalized sum over all scores: (e) =ezePe02Eeze0.An additional improvement that was found to help was to add a bidirectional LSTM (BLSTM) toprocess the global leaf representations right before calculating the scores. To do this, we first orderthe global leaf representations sequentially from left-most leaf node to right-mode leaf node. Wethen treat each leaf node as a time step for a BLSTM to process. This provides a sort of skipconnection between leaf nodes, which potentially reduces the path length that information needs totravel between leaf nodes in the tree. 
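A minimal NumPy sketch of the two passes and the expansion softmax just described is given below; the class name, toy dimensions and tanh networks are choices of this sketch, and the BLSTM refinement over the leaves, whose role in the score computation is completed in the next sentence, is omitted.

import numpy as np

M = 8  # toy embedding dimension

class Node:
    def __init__(self, symbol, rule=None, children=()):
        self.symbol, self.rule, self.children = symbol, rule, list(children)

def r3nn_passes(root, phi, omega, f_nets, g_nets):
    # phi: symbol -> R^M, omega: rule -> R^M,
    # f_nets[r]: R^(Q*M) -> R^M (recursive, RHS -> LHS),
    # g_nets[r]: R^M -> R^(Q*M) (reverse-recursive, LHS -> RHS).
    def up(node):
        if not node.children:
            node.rep = phi[node.symbol]            # leaf: symbol embedding
        else:
            node.rep = f_nets[node.rule](np.concatenate([up(c) for c in node.children]))
        return node.rep

    def down(node, rep):
        node.global_rep = rep                      # position-aware representation
        if node.children:
            for child, part in zip(node.children,
                                   np.split(g_nets[node.rule](rep), len(node.children))):
                down(child, part)

    up(root)
    down(root, root.rep)

def expansion_probs(leaves, valid_rules, omega):
    # z_e = global_rep(e.l) . omega(e.r); probabilities are the softmax over all
    # valid (leaf, rule) expansions.
    z = np.array([leaf.global_rep @ omega[r]
                  for leaf in leaves for r in valid_rules[leaf.symbol]])
    z = np.exp(z - z.max())
    return z / z.sum()

# Toy usage with a single production rule "e -> f f".
rng = np.random.default_rng(0)
phi = {"e": rng.normal(size=M), "f": rng.normal(size=M)}
omega = {"f->ConstStr": rng.normal(size=M), "f->SubStr": rng.normal(size=M)}
Wf, Wg = rng.normal(size=(M, 2 * M)), rng.normal(size=(2 * M, M))
f_nets = {"e->ff": lambda x: np.tanh(Wf @ x)}
g_nets = {"e->ff": lambda x: np.tanh(Wg @ x)}
tree = Node("e", rule="e->ff", children=[Node("f"), Node("f")])
r3nn_passes(tree, phi, omega, f_nets, g_nets)
probs = expansion_probs(tree.children, {"f": ["f->ConstStr", "f->SubStr"]}, omega)
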
The BLSTM hidden states are then used in the score calculationrather than the leaves themselves.The R3NN can be seen as an extension and combination of several previous tree-based models,which were mainly developed in the context of natural language processing (Le & Zuidema, 2014;Paulus et al., 2014; Irsoy & Cardie, 2013).5 C ONDITIONING WITH INPUT /OUTPUT EXAMPLESNow that we have defined a generation process over tree-structured programs, we need a way ofconditioning this generation process on a set of input/output examples. The set of input/outputexamples provide a nearly complete specification for the desired output program, and so a goodencoding of the examples is crucial to the success of our program generator. For the most part, thisexample encoding needs to be domain-specific, since different DSLs have different inputs (somemay operate over integers, some over strings, etc.). Therefore, in our case, we use an encodingadapted to the input-output strings that our DSL operates over. We also investigate different ways ofconditioning program search on the learnt example input-output encodings.5.1 E NCODING INPUT /OUTPUT EXAMPLESThere are two types of information that string manipulation programs need to extract from input-output examples: 1) constant strings, such as “ @domain.com ” or “ .”, which appear in all outputexamples; 2) substring indices in input where the index might be further defined by a regular expres-sion. These indices determine which parts of the input are also present in the output. To simplify theDSL, we assume that there is a fixed finite universe of possible constant strings that could appear inprograms. Therefore we focus on extracting the second type of information, the substring indices.In earlier hand-engineered systems such as FlashFill, this information was extracted from the input-output strings by running the Longest Common Substring algorithm, a dynamic programming algo-rithm that efficiently finds matching substrings in string pairs. To extract substrings, FlashFill runsLCS on every input-output string pair in the I/O set to get a set of substring candidates. It then takesthe entire set of substring candidates and simply tries every possible regex and constant index thatcan be used at substring boundaries, exhaustively searching for the one which is the most “general”,where generality is specified by hand-engineered heuristics.In contrast to these previous methods, instead of hand-designing a complicated algorithm to extractregex-based substrings, we develop neural network based architectures that are capable of learning toextract and produce continuous representations of the likely regular expressions given I/O examples.5.1.1 B ASELINE LSTM ENCODEROur first I/O encoding network involves running two separate deep bidirectional LSTM networks forprocessing the input and the output string in each example pair. For each pair, it then concatenatesthe topmost hidden representation at every time step to produce a 4HT-dimensional feature vectorper I/O pair, where Tis the maximum string length for any input or output string, and His thetopmost LSTM hidden dimension.6Published as a conference paper at ICLR 2017We then concatenate the encoding vectors across all I/O pairs to get a vector representation of the en-tire I/O set. 
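A sketch of that baseline encoder in PyTorch follows; the class name, toy sizes and byte-level vocabulary are assumptions of this sketch, not details from the paper.

import torch
import torch.nn as nn

class BaselineIOEncoder(nn.Module):
    # Two separate deep bidirectional LSTMs, one over the characters of the input
    # string and one over the output string; the topmost hidden states at every
    # time step are concatenated, giving a 4*H*T feature vector per I/O pair.
    def __init__(self, vocab=256, emb=16, hidden=32, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.in_lstm = nn.LSTM(emb, hidden, layers, bidirectional=True, batch_first=True)
        self.out_lstm = nn.LSTM(emb, hidden, layers, bidirectional=True, batch_first=True)

    def forward(self, in_ids, out_ids):
        # in_ids, out_ids: (num_pairs, T) integer character ids padded to length T.
        h_in, _ = self.in_lstm(self.embed(in_ids))     # (num_pairs, T, 2H)
        h_out, _ = self.out_lstm(self.embed(out_ids))  # (num_pairs, T, 2H)
        per_pair = torch.cat([h_in, h_out], dim=-1)    # (num_pairs, T, 4H)
        return per_pair.reshape(-1)                    # concatenated across all pairs

def to_ids(s, T=12):
    ids = [min(ord(c), 255) for c in s[:T]]
    return ids + [0] * (T - len(ids))

pairs = [("Michael Johnson", "Johnson, M."), ("Barack Rogers", "Rogers, B.")]
enc = BaselineIOEncoder()
io_vec = enc(torch.tensor([to_ids(i) for i, _ in pairs]),
             torch.tensor([to_ids(o) for _, o in pairs]))  # conditions the R3NN
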
This encoding is conceptually straightforward and has very little prior knowledge aboutwhat operations are being performed over the strings, i.e., substring, constant, etc., which mightmake it difficult to discover substring indices, especially the ones based on regular expressions.5.1.2 C ROSS CORRELATION ENCODERTo help the model discover input substrings that are copied to the output, we designed an novel I/Oexample encoder to compute the cross correlation between each input and output example repre-sentation. We used the two output tensors of the LSTM encoder (discussed above) as inputs to thisencoder. For each example pair, we first slide the output feature block over the input feature blockand compute the dot product between the respective position representation. Then, we sum over alloverlapping time steps. Features of all pairs are then concatenated to form a 2(T1)-dimensionalvector encoding for all example pairs. There are 2(T1)possible alignments in total betweeninput and output feature blocks. An illustration of the cross-correlation encoder is shown in Figure 9.We also designed the following variants of this encoder.Diffused Cross Correlation Encoder: This encoder is identical to the Cross Correlation encoderexcept that instead of summing over overlapping time steps after the element-wise dot product, wesimply concatenate the vectors corresponding to all time steps, resulting in a final representation thatcontains 2(T1)Tfeatures for each example pair.LSTM-Sum Cross Correlation Encoder: In this variant of the Cross Correlation encoder, insteadof doing an element-wise dot product, we run a bidirectional LSTM over the concatenated featureblocks of each alignment. We represent each alignment by the LSTM hidden representation of thefinal time step leading to a total of 2H2(T1)features for each example pair.Augmented Diffused Cross Correlation Encoder: For this encoder, the output of each characterposition of the Diffused Cross Correlation encoder is combined with the character embedding at thisposition, then a basic LSTM encoder is run over the combined features to extract a 4H-dimensionalvector for both the input and the output streams. The LSTM encoder output is then concatenatedwith the output of the Diffused Cross Correlation encoder forming a (4H+T(T1))-dimensionalfeature vector for each example pair.5.2 C ONDITIONING PROGRAM SEARCH ON EXAMPLE ENCODINGSOnce the I/O example encodings have been computed, we can use them to perform conditionalgeneration of the program tree using the R3NN model. There are a number of ways in which thePPT generation model can be conditioned using the I/O example encodings depending on where theI/O example information is inserted in the R3NN model. We investigated three locations to injectexample encodings:1) Pre-conditioning: where example encodings are concatenated to the encoding of each tree leaf,and then passed to a conditioning network before the bottom-up recursive pass over the programtree. The conditioning network can be either a multi-layer feedforward network, or a bidirectionalLSTM network running over tree leaves. 
Running an LSTM over tree leaves allows the model tolearn more about the relative position of each leaf node in the tree.2) Post-conditioning: After the reverse-recursive pass, example encodings are concatenated to theupdated representation of each tree leaf and then fed to a conditioning network before computingthe expansion scores.3) Root-conditioning: After the recursive pass over the tree, the root encoding is concatenated tothe example encodings and passed to a conditioning network. The updated root representation isthen used to drive the reverse-recursive pass.Empirically, pre-conditioning worked better than either root- or post- conditioning. In addition,conditioning at all 3 places simultaneously did not cause a significant improvement over justpre-conditioning. Therefore, for the experimental section, we report models which only use pre-conditioning.7Published as a conference paper at ICLR 20176 E XPERIMENTSIn order to evaluate and compare variants of the previously described models, we generate a datasetrandomly from the DSL. To do so, we first enumerate all possible programs under the DSL up toa specific number of instructions, which are then partitioned into training, validation and test sets.In order to have a tractable number of programs, we limited the maximum number of instructionsfor programs to be 13. Length 13 programs are important for this specific DSL because all largerprograms can be written as compositions of sub-programs of length at most 13. The semantics oflength 13 programs therefore constitute the “atoms” of this particular DSL.In testing our model, there are two different categories of generalization. The first is input/outputgeneralization, where we are given a new set of input/output examples as well as a program with aspecific tree that we have seen during training. This represents the model’s capacity to be appliedon new data. The second category is program generalization, where we are given both a previouslyunseen program tree in addition to unseen input/output examples. Therefore the model needs tohave a sufficient enough understanding of the semantics of the DSL that it can construct novelcombinations of operations. For all reported results, training sets correspond to the first type ofgeneralization since we have seen the program tree but not the input/output pairs. Test sets representthe second type of generalization, as they are trees which have not been seen before on input/outputpairs that have also not been seen before.In this section, we compare several different variants of our model. We first evaluate the effect ofeach of the previously described input/output encoders. We then evaluate the R3NN model against asimple recurrent model called io2seq, which is basically an LSTM that takes as input the input/outputconditioning vector and outputs a sequence of DSL symbols that represents a linearized programtree. Finally, we report the results of the best model on the length 13 training and testing sets, aswell as on a set of 238 benchmark functions.6.1 S ETUP AND HYPERPARAMETERS SETTINGSFor training the R3NN, two hyperparameters that were crucial for stabilizing training were the useof hyperbolic tangent activation functions in both R3NN (other activations such as ReLU moreconsistently diverged during our initial experiments) and cross-correlation I/O encoders and the useof minibatches of length 8. Additionally, for all results, the program tree generation is conditionedon a set of 10 input/output string pairs. 
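Before the optimization details that follow, the basic (summed) cross-correlation encoder of Section 5.1.2 can be sketched for a single example pair as below; the shapes are illustrative, the alignment count here is the straightforward 2T-1 rather than the paper's 2(T-1) bookkeeping, and the diffused and LSTM-sum variants would keep the per-step products instead of summing them.

import torch

def cross_correlation_features(h_in, h_out):
    # h_in, h_out: (T, F) per-time-step feature blocks for one I/O pair, e.g. the
    # top-layer LSTM outputs of the baseline encoder. The output block is slid
    # over the input block; at each relative alignment the position-wise dot
    # products are summed over the overlapping time steps.
    T = h_in.shape[0]
    feats = []
    for shift in range(-(T - 1), T):
        if shift >= 0:
            a, b = h_in[shift:], h_out[:T - shift]
        else:
            a, b = h_in[:T + shift], h_out[-shift:]
        feats.append((a * b).sum())
    return torch.stack(feats)            # one feature per alignment

# Toy usage: 5 example pairs, T = 12 steps, F = 64 features; concatenate over pairs.
pairs = [(torch.randn(12, 64), torch.randn(12, 64)) for _ in range(5)]
io_vec = torch.cat([cross_correlation_features(a, b) for a, b in pairs])
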
We used ADAM (Kingma & Ba, 2014) to optimize thenetworks with a learning rate of 0.001. Network weights used the default torch initializations.Due to the difficulty of batching tree-based neural networks since each sample in a batch has apotentially different tree structure, we needed to do batching sequentially. Therefore for each mini-batch of size N, we accumulated the gradients for each sample. After all N sample gradients wereaccumulated, we updated the parameters and reset the accumulated gradients. Due to this sequentialprocessing, in order to train models in a reasonable time, we limited our batch sizes to between8-12. Despite the computational inefficiency, batching was critical to successfully train an R3NN,as online learning often caused the network to diverge.For each latent function and set of input/output examples that we test on, we report whether we hada success after sampling 100 functions from the model and testing all 100 to see if one of thesefunctions is equivalent to the latent function. Here we consider two functions to be equivalent withrespect to a specific input/output example set if the functions output the same strings when run onthe inputs. Under this definition, two functions can have a different set of operations but still beequivalent with respect to a specific input-output set.We restricted the maximum size of training programs to be 13 because of two computational consid-erations. As described earlier, one difficulty was in batching tree-based neural networks of differentstructure and the computational cost of batching increases with the increase in size of the programtrees. The second issue is that valid I/O strings for programs often grow with the program length,in the sense that for programs of length 40 a minimal valid I/O string will typically be much longerthan a minimal valid I/O string for length 20 programs. For example, for a program such as (Concat(ConstStr \longstring") (Concat (ConstStr \longstring") (Concat (ConstStr \longstring")...))) , the valid output string would be \longstringlongstringlongstring..." which could be many8Published as a conference paper at ICLR 2017I/O Encoding Train TestLSTM 88% 88%Cross Correlation (CC) 67% 65%Diffused CC 89% 88%LSTM-sum CC 90% 91%Augmented diffused CC 91% 91%Table 1: The effect of different input/output encoders on accuracy. Each result used 100 samples.There is almost no generalization error in the results.Sampling Train Testio2seq 44% 42%Table 2: Testing the I/O-vector-to-sequence model. Each result used 100 samples.hundreds of characters long. Because of limited GPU memory, the I/O encoder models can quicklyrun out of memory.6.2 E XAMPLE ENCODINGIn this section, we evaluate the effect of several different input/output example encoders. To controlfor the effect of the tree model, all results here used an R3NN with fixed hyperparameters to generatethe program tree. Table 1 shows the performance of several of these input/output example encoders.We can see that the summed cross-correlation encoder did not perform well, which can be due tothe fact that the sum destroys positional information that might be useful for determining specificsubstring indices. The LSTM-sum and the augmented diffused cross-correlation models did thebest. Surprisingly, the LSTM encoder was capable of finding nearly 88% of all programs withouthaving any prior knowledge explicitly built into the architecture. We use 100 samples for evaluatingthe Train and Test sets. 
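The sampling-based success criterion just described amounts to the following check, with plain Python callables standing in for DSL programs.

def synthesis_success(sampled_programs, io_set):
    # Success if any sampled program is functionally equivalent to the latent
    # program with respect to the I/O set, i.e. it reproduces every output string
    # when run on the corresponding input.
    return any(all(p(i) == o for i, o in io_set) for p in sampled_programs)

io_set = [("Barack Rogers", "Rogers, B."), ("Peter T Gates", "Gates, P.")]
samples = [lambda v: v.upper(),
           lambda v: v[v.rfind(" ") + 1:] + ", " + v[0] + "."]
assert synthesis_success(samples, io_set)   # one of the samples matches on all pairs
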
The training performance is sometimes slightly lower because there areclose to 5 million training programs but we only look at less than 2 million of these programs duringtraining. We sample a subset of only 1000 training programs from the 5 million program set toreport the training results in the tables. The test sets also consist of 1000 programs.6.3 IO2SEQIn this section, we motivate the use of the R3NN by testing whether a simpler model can also beused to generate programs. The io2seq model is an LSTM whose initial hidden and cell statesare a function of the input/output encoding vector. The io2seq model then generates a linearizedtree of a program symbol-by-symbol. An example of what a linearized program tree looks like is(S(e(f(ConstStr \@") ConstStr )f)e)S, which represents the program tree that returns the constantstring “@”. Predicting a linearized tree using an LSTM was also done in the context of pars-ing (Vinyals et al., 2015). For the io2seq model, we used the LSTM-sum cross-correlation I/Oconditioning model.The results in Table 2 show that the performance of the io2seq model at 100 samples per latent testfunction is far worse than the R3NN, at around 42% versus 91%, respectively. The reasons for thatcould be that the io2seq model needs to perform far more decisions than the R3NN, since the io2seqmodel has to predict the parentheses symbols that determine at which level of the tree a particularsymbol is at. For example, the io2seq model requires on the order of 100 decisions for length 13programs, while the R3NN requires no more than 13.6.4 E FFECT OF SAMPLING MULTIPLE PROGRAMSFor the best R3NN model that we trained, we also evaluated the effect that a different number ofsamples per latent function had on performance. The results are shown in Table 3. The increase ofthe model’s performance as the sample size increases hints that the model has a notion of what typeof program satisfies a given I/O pair, but it might not be that certain about the details such as whichregex to use, etc. By 300 samples, the model is nearing perfect accuracy on the test sets.9Published as a conference paper at ICLR 2017Sampling Train Test1-best 60% 63%1-sample 56% 57%10-sample 81% 79%50-sample 91% 89%100-sample 94% 94%300-sample 97% 97%Table 3: The effect of sampling multiple programs on accuracy. 1-best is deterministically choosingthe expansion with highest probability at each step.303540455055601 2 3 4 5 6 7 8 9 10AccuracyNumber of I/O Examples to train the EncoderModel accuracy with increasing I/O examplesTrain TestFigure 4: The train and test accuracies for models trained with different number of input-outputexamples.6.5 E FFECT OF NUMBER OF INPUT -OUTPUT EXAMPLESWe evaluate the effect of varying the number of input-output examples used to train the Input-outputencoders. The 1-best accuracy for train and test data for models trained for 74 epochs is shown inFigure 4. As expected, the accuracy increases with increase in number of input-output examples,since more examples add more information to the encoder and constrain the space of consistentprograms in the DSL.6.6 F LASH FILLBENCHMARKSWe also evaluate our learnt models on 238 real-world FlashFill benchmarks obtained from the Mi-crosoft Excel team and online help-forums. These benchmarks involve string manipulation tasksdescribed using input-output examples. We evaluate two models – one with a cross correlation en-coder trained on 5 input-output examples and another trained on 10 input-output examples. 
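As an aside on the io2seq baseline above, before the FlashFill results continue below: a linearized program tree can be produced by a pre-order traversal with explicit open/close markers, roughly as follows (the bracket and symbol convention in the paper's printed example differs slightly). This also illustrates why io2seq needs on the order of 100 decisions for a length-13 program, versus at most 13 expansions for the R3NN.

from collections import namedtuple

Node = namedtuple("Node", "symbol children")

def linearize(node):
    # Emit open/close markers around each rule node so that the flat token
    # sequence still determines the tree structure.
    if not node.children:
        return [node.symbol]
    return ["(", node.symbol] + [t for c in node.children for t in linearize(c)] + [node.symbol, ")"]

tree = Node("S", [Node("e", [Node("f", [Node("ConstStr", []), Node('"@"', [])])])])
print(" ".join(linearize(tree)))   # ( S ( e ( f ConstStr "@" f ) e ) S )
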
Boththe models were trained on randomly sampled programs from the DSL upto size 13 with randomlygenerated input-output examples.The distribution of the size of smallest DSL programs needed to solve the benchmark tasks is shownin Figure 5(a), which varies from 4 to 63. The figure also shows the number of benchmarks forwhich our model was able to learn the program using 5 input-output examples using samples oftop-2000 learnt programs. In total, the model is able to learn programs for 91 tasks (38.2%). Sincethe model was trained for programs upto size 13, it is not surprising that it is not able to solve tasksthat need larger program size. There are 110 FlashFill benchmarks that require programs upto size13, out of which the model is able to solve 82.7% of them.The effect of sampling multiple learnt programs instead of only top program is shown in Figure 5(b).With only 10 samples, the model can already learn about 13% of the benchmarks. We observea steady increase in performance upto about 2000 samples, after which we do not observe anysignificant improvement. Since there are more than 2 million programs in the DSL of length 11itself, the enumerative techniques with uniform search do not scale well (Alur et al., 2015).We also evaluate a model that is learnt with 10 input-output examples per benchmark. This modelcan only learn programs for about 29% of the FlashFill benchmarks. Since the FlashFill benchmarkscontained only 5 input-output examples for each task, to run the model that took 10 examples asinput, we duplicated the I/O examples. Our models are trained on the synthetic training dataset10Published as a conference paper at ICLR 2017051015202530354045504 7 9 10 11 13 15 17 19 24 25 27 30 31 37 50 59 63Number of BenchmarksSize of smallest programs for FlashFill BenchmarksNumber of FlashFill Benchmarks solvedTotal SolvedSampling Solved Benchmarks10 13%50 21%100 23%200 29%500 33%1000 34%2000 38%5000 38%(a) (b)Figure 5: (a) The distribution of size of programs needed to solve FlashFill tasks and the perfor-mance of our model, (b) The effect of sampling for trying top-k learnt programs.Inputv Output[CPT-00350 [CPT-00350][CPT-00340] [CPT-00340][CPT-114563] [CPT-114563][CPT-1AB02 [CPT-1AB02][CPT-00360 [CPT-00360]Inputv Output732606129 0x73430257526 0x43444004480 0x44371255254 0x37635272676 0x63Inputv OutputJohn Doyle John D.Matt Walters Matt W.Jody Foster Jody F.Angela Lindsay Angela L.Maria Schulte Maria S.(a) (b) (c)Figure 6: Some example solved benchmarks: (a) cleaning up medical codes with closing brackets,(b) generating Hex numbers with first two digits, (c) transforming names to firstname and last initial.that is generated uniformly from the DSL. Because of the discrepancy between the training datadistribution (uniform) and auxiliary task data distribution, the model with 10 input/output examplesmight not perform the best on the FlashFill benchmark distribution, even though it performs betteron the synthetic data distribution (on which it is trained) as shown in Figure 4.Our model is able to solve majority of FlashFill benchmarks that require learning programs withupto 3 Concat operations. We now describe a few of these benchmarks, also shown in Fig-ure 6. An Excel user wanted to clean a set of medical billing records by adding a missing “]”to medical codes as shown in Figure 6(a). Our system learns the following program given these5 input-output examples: Concat (SubStr (v,ConstPos (0),(d,-1,End)),ConstStr (“]”)). 
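Read together with the walkthrough in the next sentence, this learnt program can be mirrored in Python as follows; the helper name is illustrative, and the Digits token is approximated by the regular expression \d+.

import re

def add_missing_bracket(v):
    # Mirrors Concat(SubStr(v, ConstPos(0), (d, -1, End)), ConstStr("]")): keep the
    # prefix of v up to the end of the last run of digits, then append "]".
    end = list(re.finditer(r"\d+", v))[-1].end()
    return v[:end] + "]"

for v, o in [("[CPT-00350", "[CPT-00350]"), ("[CPT-00340]", "[CPT-00340]"),
             ("[CPT-114563]", "[CPT-114563]"), ("[CPT-1AB02", "[CPT-1AB02]"),
             ("[CPT-00360", "[CPT-00360]")]:
    assert add_missing_bracket(v) == o
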
The pro-gram concatenates the substring between the start of the input string and the position of the lastdigit regular expression with the constant string “]”. Another task that required user to trans-form some numbers into a hex format is shown in Figure 6(b). Our system learns the followingprogram: Concat (ConstStr (“0x”), SubStr (v,ConstPos (0),ConstPos(2))). For some benchmarkswith long input strings, it is still able to learn regular expressions to extract the desired sub-string, e.g. it learns a program to extract “NancyF” from the string “123456789,freehafer ,drew,nancy,19700101,11/1/2007,NancyF@north.com,1230102,123 1st Avenue,Seattle,wa,09999”.Our system is currently not able to learn programs for benchmarks that require 4 or more Con-catoperations. Two such benchmarks are shown in Figure 7. The task of combining names inFigure 7(a) requires 6 Concat arguments, whereas the phone number transformation task in Fig-ure 7(b) requires 5 Concat arguments. This is mainly because of the scalability issues in trainingwith programs of larger size. There are also a few interesting benchmarks where the R3NN modelsgets very close to learning the desired program. For example, for the task “ Bill Gates ”!“Mr.Bill Gates ”, it learns a program that generates “ Mr.Bill Gates ” (without the whitespace), and forthe task “617-444-5454” !“(617) 444-5454”, it learns a program that generates the string “(617444-5454”.11Published as a conference paper at ICLR 2017Inputv Output1 John James Paul John, James, and Paul.2 Tom Mike Bill Tom, Mike, and Bill.3 Marie Nina John Marie, Nina, and John.4Reggie Anna Adam Reggie, Anna, and Adam.Inputv Output1(425) 221 6767 425-221-67672 206.225.1298 206-225-12983 617-224-9874 617-224-98744 425.118.9281 425-118-9281(a) (b)Figure 7: Some unsolved benchmarks: (a)Combining names by different delimiters. (b) Transform-ing phone numbers to consistent format.7 R ELATED WORKWe have seen a renewed interest in recent years in the area of Program Induction and Synthesis.In the machine learning community, a number of promising neural architectures have been pro-posed to perform program induction . These methods have employed architectures inspired fromcomputation modules (Turing Machines, RAM) (Graves et al., 2014; Kurach et al., 2015; Reed &de Freitas, 2015; Neelakantan et al., 2015) or common data structures such as stacks used in manyalgorithms (Joulin & Mikolov, 2015). These approaches represent the atomic operations of the net-work in a differentiable form, which allows for efficient end-to-end training of a neural controller.However, unlike our approach that learns comprehensible complete programs, many of these ap-proaches learn only the program behavior ( i.e., they produce desired outputs on new input data).Some recently proposed methods (Kurach et al., 2015; Gaunt et al., 2016; Riedel et al., 2016; Bunelet al., 2016) do learn interpretable programs but these techniques require learning a separate neuralnetwork model for each individual task, which is undesirable in many synthesis settings where wewould like to learn programs in real-time for a large number of tasks. Liang et al. (2010) restrictthe problem space with a probabilistic context-free grammar and introduce a new representationof programs based on combinatory logic, which allows for sharing sub-programs across multipletasks. They then take a hierarchical Bayesian approach to learn frequently occurring substructuresof programs. 
Our approach, instead, uses neural architectures to condition the search space of pro-grams, and does not require additional step of representing program space using combinatory logicfor allowing sharing.The DSL-based program synthesis approach has also seen a renewed interest recently (Alur et al.,2015). It has been used for many applications including synthesizing low-level bitvector implemen-tations (Solar-Lezama et al., 2005), Excel macros for data manipulation (Gulwani, 2011; Gulwaniet al., 2012), superoptimization by finding smaller equivalent loop bodies (Schkufza et al., 2013),protocol synthesis from scenarios (Udupa et al., 2013), synthesis of loop-free programs (Gulwaniet al., 2011), and automated feedback generation for programming assignments (Singh et al., 2013).The synthesis techniques proposed in the literature generally employ various search techniques in-cluding enumeration with pruning, symbolic constraint solving, and stochastic search, while sup-porting different forms of specifications including input-output examples, partial programs, programinvariants, and reference implementation.In this paper, we consider input-output example based specification over the hypothesis space de-fined by a DSL of string transformations, similar to that of FlashFill (without conditionals) (Gul-wani, 2011). The key difference between our approach over previous techniques is that our systemis trained completely in an end-to-end fashion, while previous techniques require significant manualeffort to design heuristics for efficient search. There is some work on guiding the program search us-ing learnt clues that suggest likely DSL expansions, but the clues are learnt over hand-coded textualfeatures of examples (Menon et al., 2013). Moreover, their DSL consists of composition of about100 high-level text transformation functions such as count anddedup , whereas our DSL consists oftree structured programs over richer regular expression based substring constructs.There is also a recent line of work on learning probabilistic models of code from a large number ofcode repositories ( big code ) (Raychev et al., 2015; Bielik et al., 2016; Hindle et al., 2016), whichare then used for applications such as auto-completion of partial programs, inference of variableand method names, program repair, etc. These language models typically capture only the syntactic12Published as a conference paper at ICLR 2017properties of code, unlike our approach that also tries to capture the semantics to learn the desiredprogram. The work by Maddison & Tarlow (2014) addresses the problem of learning structuredgenerative models of source code but both their model and application domain are different fromours. Piech et al. (2015) use an NPM-RNN model to embed program ASTs, where a subtree ofthe AST rooted at a node n is represented by a matrix obtained by combining representations ofthe children of node n and the embedding matrix of the node n itself (which corresponds to itsfunctional behavior). The forward pass in our R3NN architecture from leaf nodes to the root nodeis, at a high-level, similar, but we use a distributed representation for each grammar symbol thatleads to a different root representation. Moreover, R3NN also performs a reverse-recursive pass toensure all nodes in the tree encode global information about other nodes in the tree. 
Finally, theR3NN network is then used to incrementally build a tree to synthesize a program.The R3NN model employed in our work is related to several tree and graph structured neural net-works present in the NLP literature (Le & Zuidema, 2014; Paulus et al., 2014; Irsoy & Cardie, 2013).The Inside-Outside Recursive Neural Network (Le & Zuidema, 2014) in particular is most similar tothe R3NN, where they generate a parse tree incrementally by using global leaf-level representationsto determine which expansions in the parse tree to take next.8 C ONCLUSIONWe have proposed a novel technique called Neuro-Symbolic Program Synthesis that is able to con-struct a program incrementally based on given input-output examples. To do so, a new neuralarchitecture called Recursive-Reverse-Recursive Neural Network is used to encode and expand apartial program tree into a full program tree. Its effectiveness at example-based program synthesisis demonstrated, even when the program has not been seen during training.These promising results open up a number of interesting directions for future research. For example,we took a supervised-learning approach here, assuming availability of target programs during train-ing. In some scenarios, we may only have access to an oracle that returns the desired output givenan input. In this case, reinforcement learning is a promising framework for program synthesis.REFERENCESAlur, Rajeev, Bod ́ık, Rastislav, Dallal, Eric, Fisman, Dana, Garg, Pranav, Juniwal, Garvit, Kress-Gazit, Hadas, Madhusudan, P., Martin, Milo M. K., Raghothaman, Mukund, Saha, Shamwaditya,Seshia, Sanjit A., Singh, Rishabh, Solar-Lezama, Armando, Torlak, Emina, and Udupa, Ab-hishek. Syntax-guided synthesis. In Dependable Software Systems Engineering , pp. 1–25. 2015.Bielik, Pavol, Raychev, Veselin, and Vechev, Martin T. PHOG: probabilistic model for code. InICML , pp. 2933–2942, 2016.Biermann, Alan W. The inference of regular lisp programs from examples. IEEE transactions onSystems, Man, and Cybernetics , 8(8):585–600, 1978.Bunel, Rudy, Desmaison, Alban, Kohli, Pushmeet, Torr, Philip H. S., and Kumar, M. Pawan. Adap-tive neural compilation. CoRR , abs/1605.07969, 2016. URL http://arxiv.org/abs/1605.07969 .Gaunt, Alexander L, Brockschmidt, Marc, Singh, Rishabh, Kushman, Nate, Kohli, Pushmeet, Tay-lor, Jonathan, and Tarlow, Daniel. Terpret: A probabilistic programming language for programinduction. arXiv preprint arXiv:1608.04428 , 2016.Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural turing machines. arXiv preprintarXiv:1410.5401 , 2014.Gulwani, Sumit. Automating string processing in spreadsheets using input-output examples. InPOPL , pp. 317–330, 2011.Gulwani, Sumit, Jha, Susmit, Tiwari, Ashish, and Venkatesan, Ramarathnam. Synthesis of loop-freeprograms. In PLDI , pp. 62–73, 2011.Gulwani, Sumit, Harris, William, and Singh, Rishabh. Spreadsheet data manipulation using exam-ples. Communications of the ACM , Aug 2012.13Published as a conference paper at ICLR 2017Hindle, Abram, Barr, Earl T., Gabel, Mark, Su, Zhendong, and Devanbu, Premkumar T. On thenaturalness of software. Commun. ACM , 59(5):122–131, 2016.Irsoy, Orzan and Cardie, Claire. Bidirectional recursive neural networks for token-level labelingwith structure. In NIPS Deep Learning Workshop , 2013.Joulin, Armand and Mikolov, Tomas. Inferring algorithmic patterns with stack-augmented recurrentnets. In NIPS , pp. 190–198, 2015.Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. 
In ICLR , 2014.Kurach, Karol, Andrychowicz, Marcin, and Sutskever, Ilya. Neural random-access machines. arXivpreprint arXiv:1511.06392 , 2015.Le, Phong and Zuidema, Willem. The inside-outside recursive neural network model for dependencyparsing. In EMNLP , pp. 729–739, 2014.Liang, Percy, Jordan, Michael I., and Klein, Dan. Learning programs: A hierarchical Bayesianapproach. In ICML , pp. 639–646, 2010.Maddison, Chris J and Tarlow, Daniel. Structured generative models of natural source code. InICML , pp. 649–657, 2014.Menon, Aditya Krishna, Tamuz, Omer, Gulwani, Sumit, Lampson, Butler W., and Kalai, Adam. Amachine learning framework for programming by example. In ICML , pp. 187–195, 2013.Neelakantan, Arvind, Le, Quoc V , and Sutskever, Ilya. Neural programmer: Inducing latent pro-grams with gradient descent. arXiv preprint arXiv:1511.04834 , 2015.Paulus, Romain, Socher, Richard, and Manning, Christopher D. Global belief recursive neuralnetworks. pp. 2888–2896, 2014.Piech, Chris, Huang, Jonathan, Nguyen, Andy, Phulsuksombati, Mike, Sahami, Mehran, andGuibas, Leonidas J. Learning program embeddings to propagate feedback on student code. InICML , pp. 1093–1102, 2015.Raychev, Veselin, Vechev, Martin T., and Krause, Andreas. Predicting program properties from ”bigcode”. In POPL , pp. 111–124, 2015.Reed, Scott and de Freitas, Nando. Neural programmer-interpreters. arXiv preprintarXiv:1511.06279 , 2015.Riedel, Sebastian, Bosnjak, Matko, and Rockt ̈aschel, Tim. Programming with a differentiable forthinterpreter. CoRR , abs/1605.06640, 2016. URL http://arxiv.org/abs/1605.06640 .Schkufza, Eric, Sharma, Rahul, and Aiken, Alex. Stochastic superoptimization. In ASPLOS , pp.305–316, 2013.Singh, Rishabh and Solar-Lezama, Armando. Synthesizing data structure manipulations from sto-ryboards. In SIGSOFT FSE , pp. 289–299, 2011.Singh, Rishabh, Gulwani, Sumit, and Solar-Lezama, Armando. Automated feedback generation forintroductory programming assignments. In PLDI , pp. 15–26, 2013.Solar-Lezama, Armando. Program Synthesis By Sketching . PhD thesis, EECS Dept., UC Berkeley,2008.Solar-Lezama, Armando, Rabbah, Rodric, Bodik, Rastislav, and Ebcioglu, Kemal. Programming bysketching for bit-streaming programs. In PLDI , 2005.Summers, Phillip D. A methodology for lisp program construction from examples. Journal of theACM (JACM) , 24(1):161–175, 1977.Udupa, Abhishek, Raghavan, Arun, Deshmukh, Jyotirmoy V ., Mador-Haim, Sela, Martin, MiloM. K., and Alur, Rajeev. TRANSIT: specifying protocols with concolic snippets. In PLDI , pp.287–296, 2013.14Published as a conference paper at ICLR 2017JConcat(f1;;fn)Kv= Concat( Jf1Kv;;JfnKv)JConstStr(s)Kv=sJSubStr(v;pl;pr)Kv=v[JplKv::JprKv]JConstPos(k)Kv=k>0?k: len(s) +kJ(r;k;Start) Kv=Start ofkthmatch of r in vfrom beginning (end if k<0)J(r;k;End) Kv=End ofkthmatch of r in vfrom beginning (end if k<0)Figure 8: The semantics of the DSL for string transformations.Figure 9: The cross correlation encoder to encode a single input-output example.Vinyals, Oriol, Kaiser, Lukasz, Koo, Terry, Petrov, Slav, Sutskever, Ilya, and Hinton, Geoffrey.Grammar as a foreign language. In ICLR , 2015.A D OMAIN -SPECIFIC LANGUAGE FOR STRING TRANSFORMATIONSThe semantics of the DSL programs is shown in Figure 8. The semantics of a Concat expressionis to concatenate the results of recursively evaluating the constituent substring expressions fi. Thesemantics of ConstStr(s) is to simply return the constant string s. 
The semantics of a substringexpression is to first evaluate the two position logics plandprtop1andp2respectively, and thenreturn the substring corresponding to v[p1::p2]. We denote s[i::j]to denote the substring of stringsstarting at index i (inclusive) and ending at index j (exclusive), and len(s) denotes its length.The semantics of ConstPos(k) expression is to return kifk > 0or return len +k(ifk < 0).The semantics of position logic (r;k;Start) is to return the Start of kthmatch of r in vfrom thebeginning (if k>0) or from the end (if k<0).15
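To make the semantics of Figure 8 concrete, a small Python interpreter for the Concat/ConstStr/SubStr fragment is sketched below; the tuple encoding of programs, the TOKENS table and the literal treatment of non-negative ConstPos arguments are choices of this sketch rather than details fixed by the paper.

import re

TOKENS = {"d": r"[0-9]+", "C": r"[A-Z]+", "l": r"[a-z]+", "ws": r"\s+"}  # subset of the regex tokens

def eval_pos(p, v):
    # Position logics of Figure 8: ConstPos(k) and (r, k, Dir). Figure 8 writes
    # k > 0 ? k : len(s)+k, but the example programs use ConstPos(0) for the start
    # of the string, so non-negative k is taken literally here.
    if p[0] == "ConstPos":
        k = p[1]
        return k if k >= 0 else len(v) + k
    _, r, k, direction = p
    matches = list(re.finditer(TOKENS.get(r, re.escape(r)), v))
    m = matches[k - 1] if k > 0 else matches[k]        # k-th match from the front / back
    return m.start() if direction == "Start" else m.end()

def eval_expr(e, v):
    # Expression semantics of Figure 8: Concat, ConstStr and SubStr.
    if e[0] == "Concat":
        return "".join(eval_expr(f, v) for f in e[1:])
    if e[0] == "ConstStr":
        return e[1]
    if e[0] == "SubStr":
        return v[eval_pos(e[1], v):eval_pos(e[2], v)]
    raise ValueError("unknown expression: %r" % (e,))

# The hex-prefix program of Figure 6(b): Concat(ConstStr("0x"), SubStr(v, ConstPos(0), ConstPos(2))).
hex_prog = ("Concat", ("ConstStr", "0x"), ("SubStr", ("ConstPos", 0), ("ConstPos", 2)))
assert eval_expr(hex_prog, "732606129") == "0x73"

# The bracket-repair program of Figure 6(a), using a token-match position logic.
fix_prog = ("Concat", ("SubStr", ("ConstPos", 0), ("Match", "d", -1, "End")), ("ConstStr", "]"))
assert eval_expr(fix_prog, "[CPT-00350") == "[CPT-00350]"
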
By2FhZOSe
rJ0JwFcex
ICLR.cc/2017/conference/-/paper498/official/review
{"title": "Review", "rating": "5: Marginally below acceptance threshold", "review": "The paper presents a method to synthesize string manipulation programs based on a set of input output pairs. The paper focuses on a restricted class of programs based on a simple context free grammar sufficient to solve string manipulation tasks from the FlashFill benchmark. A probabilistic generative model called Recursive-Reverse-Recursive Neural Network (R3NN) is presented that assigns a probability to each program's parse tree after a bottom-up and a top-down pass. Results are presented on a synthetic dataset and a Microsoft Excel benchmark called FlashFill.\n\nThe problem of program synthesis is important with a lot of recent interest from the deep learning community. The approach taken in the paper based on parse trees and recursive neural networks seems interesting and promising. However, the model seems too complicated and unclear at several places (details below). On the negative side, the experiments are particularly weak, and the paper does not seem ready for publication based on its experimental results. I was positive about the paper until I realized that the method obtains an accuracy of 38% on FlashFill benchmark when presented with only 5 input-output examples but the performance degrades to 29% when 10 input-output examples are used. This was surprising to the authors too, and they came up with some hypothesis to explain this phenomenon. To me, this is a big problem indicating either a bug in the code or a severe shortcoming of the model. Any model useful for program synthesis needs to be applicable to many input-output examples because most complicated programs require many examples to disambiguate the details of the program.\n\nGiven the shortcoming of the experiments, I am not convinced that the paper is ready for publication. Thus, I recommend weak reject. I encourage the authors to address the comments below and resubmit as the general idea seems promising.\n\nMore comments:\n\nI am unclear about the model at several places:\n- How is the probability distribution normalized? Given the nature of bottom-up top-down evaluation of the potentials, should one enumerate over different completions of a program and the compare their exponentiated potentials? If so, does this restrict the applicability of the model to long programs as the enumeration of the completions gets prohibitively slow?\n- What if you only use 1 input-output pair for each program instead of 5? Do the results get better?\n- Section 5.1.2 is not clear to me. Can you elaborate by potentially including some examples? Does your input-output representation pre-supposes a fixed number of input-output examples across tasks (e.g. 5 or 10 for all of the tasks)?\n\nRegarding the experiments,\n- Could you present some baseline results on FlashFill benchmark based on previous work?\n- Is your method only applicable to short programs? (based on the choice of 13 for the number of instructions)\n- Does a program considered correct when it is identical to a test program, or is it considered correct when it succeeds on a set of held-out input-output pairs?\n- When using 100 or more program samples, do you report the accuracy of the best program out of 100 (i.e. recall) or do you first filter the programs based on training input-output pairs and then evaluate a program that is selected?\n\nYour paper is well beyond the recommended limit of 8 pages. 
Please consider making it shorter.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Neuro-Symbolic Program Synthesis
["Emilio Parisotto", "Abdel-rahman Mohamed", "Rishabh Singh", "Lihong Li", "Dengyong Zhou", "Pushmeet Kohli"]
Recent years have seen the proposal of a number of neural architectures for the problem of Program Induction. Given a set of input-output examples, these architectures are able to learn mappings that generalize to new test inputs. While achieving impressive results, these approaches have a number of important limitations: (a) they are computationally expensive and hard to train, (b) a model has to be trained for each task (program) separately, and (c) it is hard to interpret or verify the correctness of the learnt mapping (as it is defined by a neural network). In this paper, we propose a novel technique, Neuro-Symbolic Program Synthesis, to overcome the above-mentioned problems. Once trained, our approach can automatically construct computer programs in a domain-specific language that are consistent with a set of input-output examples provided at test time. Our method is based on two novel neural modules. The first module, called the cross correlation I/O network, given a set of input-output examples, produces a continuous representation of the set of I/O examples. The second module, the Recursive-Reverse-Recursive Neural Network (R3NN), given the continuous representation of the examples, synthesizes a program by incrementally expanding partial programs. We demonstrate the effectiveness of our approach by applying it to the rich and complex domain of regular expression based string transformations. Experiments show that the R3NN model is not only able to construct programs from new input-output examples, but it is also able to construct new programs for tasks that it had never observed before during training.
["Deep learning", "Structured prediction"]
https://openreview.net/forum?id=rJ0JwFcex
https://openreview.net/pdf?id=rJ0JwFcex
https://openreview.net/forum?id=rJ0JwFcex&noteId=By2FhZOSe
Published as a conference paper at ICLR 2017NEURO -SYMBOLIC PROGRAM SYNTHESISEmilio Parisotto1;2, Abdel-rahman Mohamed1, Rishabh Singh1,Lihong Li1, Dengyong Zhou1, Pushmeet Kohli11Microsoft Research, USA2Carnegie Mellon University, USAeparisot@andrew.cmu.edu , fasamir,risin,lihongli,denzho,pkohli g@microsoft.comABSTRACTRecent years have seen the proposal of a number of neural architectures for theproblem of Program Induction. Given a set of input-output examples, these ar-chitectures are able to learn mappings that generalize to new test inputs. Whileachieving impressive results, these approaches have a number of important limi-tations: (a) they are computationally expensive and hard to train, (b) a model hasto be trained for each task (program) separately, and (c) it is hard to interpret orverify the correctness of the learnt mapping (as it is defined by a neural network).In this paper, we propose a novel technique, Neuro-Symbolic Program Synthesis ,to overcome the above-mentioned problems. Once trained, our approach can au-tomatically construct computer programs in a domain-specific language that areconsistent with a set of input-output examples provided at test time. Our methodis based on two novel neural modules. The first module, called the cross corre-lation I/O network, given a set of input-output examples, produces a continuousrepresentation of the set of I/O examples. The second module, the Recursive-Reverse-Recursive Neural Network (R3NN), given the continuous representationof the examples, synthesizes a program by incrementally expanding partial pro-grams. We demonstrate the effectiveness of our approach by applying it to therich and complex domain of regular expression based string transformations. Ex-periments show that the R3NN model is not only able to construct programs fromnew input-output examples, but it is also able to construct new programs for tasksthat it had never observed before during training.1 I NTRODUCTIONThe act of programming, i.e., developing a procedure to accomplish a task, is a remarkable demon-stration of the reasoning abilities of the human mind. Expectedly, Program Induction is consideredas one of the fundamental problems in Machine Learning and Artificial Intelligence. Recent progresson deep learning has led to the proposal of a number of promising neural architectures for this prob-lem. Many of these models are inspired from computation modules (CPU, RAM, GPU) (Graveset al., 2014; Kurach et al., 2015; Reed & de Freitas, 2015; Neelakantan et al., 2015) or commondata structures used in many algorithms (stack) (Joulin & Mikolov, 2015). A common thread in thisline of work is to specify the atomic operations of the network in some differentiable form, allowingefficient end-to-end training of a neural controller, or to use reinforcement learning to make hardchoices about which operation to perform. While these results are impressive, these approacheshave a number of important limitations: (a) they are computationally expensive and hard to train, (b)a model has to be trained for each task (program) separately, and (c) it is hard to interpret or verifythe correctness of the learnt mapping (as it is defined by a neural network). 
While some recentlyproposed methods (Kurach et al., 2015; Gaunt et al., 2016; Riedel et al., 2016; Bunel et al., 2016)do learn interpretable programs, they still need to learn a separate neural network model for eachindividual task.Motivated by the need for model interpretability and scalability to multiple tasks, we address theproblem of Program Synthesis . Program Synthesis, the problem of automatically constructing pro-grams that are consistent with a given specification, has long been a subject of research in ComputerScience (Biermann, 1978; Summers, 1977). This interest has been reinvigorated in recent years on1Published as a conference paper at ICLR 2017the back of the development of methods for learning programs in various domains, ranging fromlow-level bit manipulation code (Solar-Lezama et al., 2005) to data structure manipulations (Singh& Solar-Lezama, 2011) and regular expression based string transformations (Gulwani, 2011).Most of the recently proposed methods for program synthesis operate by searching the space ofprograms in a Domain-Specific Language (DSL) instead of arbitrary Turing-complete languages.This hypothesis space of possible programs is huge (potentially infinite) and searching over it is achallenging problem. Several search techniques including enumerative (Udupa et al., 2013), stochas-tic (Schkufza et al., 2013), constraint-based (Solar-Lezama, 2008), and version-space algebra basedalgorithms (Gulwani et al., 2012) have been developed to search over the space of programs in theDSL, which support different kinds of specifications (examples, partial programs, natural languageetc.) and domains. These techniques not only require significant engineering and research effort todevelop carefully-designed heuristics for efficient search, but also have limited applicability and canonly synthesize programs of limited sizes and types.In this paper, we present a novel technique called Neuro-Symbolic Program Synthesis (NSPS) thatlearns to generate a program incrementally without the need for an explicit search. Once trained,NSPS can automatically construct computer programs that are consistent with any set of input-outputexamples provided at test time. Our method is based on two novel module neural architectures . Thefirst module, called the cross correlation I/O network, produces a continuous representation of anygiven set of input-output examples. The second module, the Recursive-Reverse-Recursive NeuralNetwork (R3NN), given the continuous representation of the input-output examples, synthesizes aprogram by incrementally expanding partial programs. R3NN employs a tree-based neural archi-tecture that sequentially constructs a parse tree by selecting which non-terminal symbol to expandusing rules from a context-free grammar ( i.e., the DSL).We demonstrate the efficacy of our method by applying it to the rich and complex domain of regular-expression-based syntactic string transformations, using a DSL based on the one used by Flash-Fill (Gulwani, 2011; Gulwani et al., 2012), a Programming-By-Example (PBE) system in MicrosoftExcel 2013. Given a few input-output examples of strings, the task is to synthesize a program builton regular expressions to perform the desired string transformation. 
An example task that can beexpressed in this DSL is shown in Figure 1, which also shows the DSL.Our evaluation shows that NSPS is not only able to construct programs for known tasks from newinput-output examples, but it is also able to construct completely new programs that it had not ob-served during training. Specifically, the proposed system is able to synthesize string transformationprograms for 63% of tasks that it had not observed at training time, and for 94% of tasks when100 program samples are taken from the model. Moreover, our system is able to learn 38% of 238real-world FlashFill benchmarks.To summarize, the key contributions of our work are:A novel Neuro-Symbolic program synthesis technique to encode neural search over thespace of programs defined using a Domain-Specific Language (DSL).The R3NN model that encodes and expands partial programs in the DSL, where each nodehas a global representation of the program tree.A novel cross-correlation based neural architecture for learning continuous representationof sets of input-output examples.Evaluation of the NSPS approach on the complex domain of regular expression based stringtransformations.2 P ROBLEM DEFINITIONIn this section, we formally define the DSL-based program synthesis problem that we consider in thispaper. Given a DSL L, we want to automatically construct a synthesis algorithm Asuch that givena set of input-output example, f(i1;o1);;(in;on)g,Areturns a program P2Lthat conformsto the input-output examples, i.e.,8j: 1jnP(ij) =oj: (1)2Published as a conference paper at ICLR 2017Inputv Output1William Henry Charles Charles, W.2 Michael Johnson Johnson, M.3 Barack Rogers Rogers, B.4 Martha D. Saunders Saunders, M.5 Peter T Gates Gates, P.Stringe:= Concat( f1;;fn)Substringf:= ConstStr( s)jSubStr(v;pl;pr)Positionp:= (r;k;Dir)jConstPos(k)Direction Dir := StartjEndRegexr:=sjT1jTn(a) (b)Figure 1: An example FlashFill task for transforming names to lastname with initials of first name,and (b) The DSL for regular expression based string transformations.The syntax and semantics of the DSL for string transformations is shown in Figure 1(b) and Figure 8respectively. The DSL corresponds to a large subset of FlashFill DSL (except conditionals), andallows for a richer class of substring operations than FlashFill. A DSL program takes as input astringvand returns an output string o. The top-level string expression eis a concatenation of afinite list of substring expressions f1;;fn. A substring expression fcan either be a constantstringsor a substring expression, which is defined using two position logics pl(left) andpr(right).A position logic corresponds to a symbolic expression that evaluates to an index in the string. Aposition logic pcan either be a constant position kor a token match expression (r;k;Dir), whichdenotes the Start orEnd of thekthmatch of token rin input string v. A regex token can either be aconstant string sor one of 8 regular expression tokens: p(ProperCase), C(CAPS),l(lowercase), d(Digits),(Alphabets), n(Alphanumeric),^(StartOfString), and $ (EndOfString). The semanticsof the DSL programs is described in the appendix.A DSL program for the name transformation task shown in Figure 1(a) that is con-sistent with the examples is: Concat (f1;ConstStr(\, ") ;f2;ConstStr(\.") ), wheref1SubStr(v;(\ ";1;End);ConstPos(1))andf2SubStr(v;ConstPos(0) ;ConstPos(1)) . 
Theprogram concatenates the following 4 strings: i) substring between the end of last whitespace andend of string, ii) constant string “, ”, iii) first character of input string, and iv) constant string “.”.3 O VERVIEW OF OUR APPROACHWe now present an overview of our approach. Given a DSL L, we learn a generative model ofprograms in the DSL Lthat is conditioned on input-output examples to efficiently search for con-sistent programs. The workflow of our system is shown in Figure 2, which is trained end-to-endusing a large training set of programs in the DSL together with their corresponding input-outputexamples. To generate a large training set, we uniformly sample programs from the DSL and thenuse a rule-based strategy to compute well-formed input strings. Given a program P (sampled fromthe DSL), the rule-based strategy generates input strings for the program P ensuring that the pre-conditions of P are met (i.e. P doesn’t throw an exception on the input strings). It collects thepre-conditions of all Substring expressions present in the sampled program P and then generatesinputs conforming to them. For example, let’s assume the sampled program is SubStr (v,(CAPS , 2,Start ), (“ ”, 3, Start )), which extracts the substring between the start of 2ndcapital letter and startof3rdwhitespace. The rule-based strategy would ensure that all the generated input strings consistof at least 2 capital letters and 3 whitespaces in addition to other randomly generated characters.The corresponding output strings are obtained by running the programs on the input strings.A DSL can be considered as a context-free grammar with a start symbol Sand a set of non-terminalswith corresponding expansion rules. The (partial) grammar derivations or trees correspond to (par-tial) programs. A na ̈ıve way to perform a search over the programs in a DSL is to start from the startsymbolSand then randomly choose non-terminals to expand with randomly chosen expansion rulesuntil reaching a derivation with only terminals. We, instead, learn a generative model over partialderivations in the DSL that assigns probabilities to different non-terminals in a partial derivation andcorresponding expansions to guide the search for complete derivations.3Published as a conference paper at ICLR 2017R3NNDSLR3NNI/O EncoderR3NN...DSLDSLProgram SamplerDSLInput Gen Rulesi1–o1i2–o2...ik–ok{p1i1–o1i2–o2...ik–ok{pji1–o1i2–o2...ik–ok{pn...pj,0pj,1pj,2pj...R3NNDSLR3NNI/O EncoderR3NN...DSLDSLLearnt programi1–o1i2–o2...ik–ok(a) Training Phase (b) Test PhaseFigure 2: An overview of the training and test workflow of our synthesis appraoch.Our generative model uses a Recursive-Reverse-Recursive Neural Network (R3NN) to encode par-tial trees (derivations) in L, where each node in the partial tree encodes global information aboutevery other node in the tree. The model assigns a vector representation for every symbol and everyexpansion rule in the grammar. Given a partial tree, the model first assigns a vector representationto each leaf node, and then performs a recursive pass going up in the tree to assign a global treerepresentation to the root. It then performs a reverse-recursive pass starting from the root to assigna global tree representation to each node in the tree.The generative process is conditioned on a set of input-output examples to learn a program that isconsistent with this set of examples. 
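As a point of reference for the naive search strategy just described, and for the way training programs are sampled from the DSL, the snippet below does exactly that: it encodes a toy fragment of the grammar and repeatedly picks a uniformly random production for every non-terminal until only terminals remain. The grammar encoding, the depth cap and the restriction to at most two Concat arguments are illustrative choices, not the paper's data-generation pipeline; the R3NN described next replaces the uniform choice of expansions with a learned distribution.

import random

# Toy fragment of the DSL as a context-free grammar: upper-case single-letter
# strings are non-terminals, everything else is emitted literally.
GRAMMAR = {
    "E": [["Concat(", "F", ")"], ["Concat(", "F", ",", "F", ")"]],
    "F": [["ConstStr(", "S", ")"], ["SubStr(v,", "P", ",", "P", ")"]],
    "P": [["ConstPos(", "K", ")"], ["(", "R", ",", "K", ",", "D", ")"]],
    "R": [['" "'], ["CAPS"], ["Digits"], ["Alphabets"]],
    "K": [["-1"], ["0"], ["1"], ["2"]],
    "D": [["Start"], ["End"]],
    "S": [['", "'], ['"."'], ['"@"']],
}

def sample(symbol="E", depth=0, max_depth=6):
    """Naive top-down sampling: expand non-terminals with uniformly random
    productions until the derivation contains only terminals."""
    if symbol not in GRAMMAR:
        return symbol
    rules = GRAMMAR[symbol]
    if depth >= max_depth:        # crude guard against deep recursion
        rules = rules[:1]
    return "".join(sample(s, depth + 1, max_depth) for s in random.choice(rules))

print(sample())   # e.g. Concat(SubStr(v,ConstPos(1),(CAPS,2,Start)))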
We experiment with multiple input-output encoders includingan LSTM encoder that concatenates the hidden vectors of two deep bidirectional LSTM networksfor input and output strings in the examples, and a Cross Correlation encoder that computes the crosscorrelation between the LSTM tensor representations of input and output strings in the examples.This vector is then used as an additional input in the R3NN model to condition the generative model.4 T REE-STRUCTURED GENERATION MODELWe define a program t-steps into construction as a partial program tree (PPT) (see Figure 3 for avisual depiction). A PPT has two types of nodes: leaf (symbol) nodes and inner non-leaf (rule)nodes. A leaf node represents a symbol, whether non-terminal or terminal. An inner non-leaf noderepresents a particular production rule of the DSL, where the number of children of the non-leafnode is equivalent to the arity of the RHS of the rule it represents. A PPT is called a program tree(PT) whenever all the leaves of the tree are terminal symbols. Such a tree represents a completedprogram under the DSL and can be executed. We define an expansion as the valid application ofa specific production rule (e !e op2 e) to a specific non-terminal leaf node within a PPT (leafwith symbol e). We refer to the specific production rule that an expansion is derived from as theexpansion type. It can be seen that if there exist two leaf nodes ( l1andl2) with the same symbolthen for every expansion specific to l1there exists an expansion specific to l2with the same type.4.1 R ECURSIVE -REVERSE -RECURSIVE NEURAL NETWORKIn order to define a generation model over PPTs, we need an efficient way of assigning probabilitiesto every valid expansion in the current PPT. A valid expansion has two components: first the pro-duction rule used, and second the position of the expanded leaf node relative to every other node inthe tree. To account for the first component, a separate distributed representation for each produc-tion rule is maintained. The second component is handled using an architecture where the forwardpropagation resembles belief propagation on trees, allowing a notion of global tree state at everynode within the tree. A given expansion probability is then calculated as being proportional to theinner product between the production rule representation and the global-tree representation of theleaf-level non-terminal node. We now describe the design of this architecture in more detail.The R3NN has the following parameters for the grammar described by a DSL (see Figure 3):1. For every symbol s2S, anMdimensional representation (s)2RM.2. For every production rule r2R, anMdimensional representation !(r)2RM.4Published as a conference paper at ICLR 2017(a) Recursive pass (b) Reverse-Recursive passFigure 3: (a) The initial recursive pass of the R3NN. (b) The reverse-recursive pass of the R3NNwhere the input is the output of the previous recursive pass.3. For every production rule r2R, a deep neural network frwhich takes as input a vectorx2RQM, withQbeing the number of symbols on the RHS of the production rule r,and outputs a vector y2RM. Therefore, the production-rule network frtakes as input aconcatenation of the distributed representations of each of its RHS symbols and producesa distributed representation for the LHS symbol.4. For every production rule r2R, an additional deep neural network grwhich takes asinput a vector x02RMand outputs a vector y02RQM. 
We can think of gras a reverseproduction-rule network that takes as input a vector representation of the LHS and producesa concatenation of the distributed representations of each of the rule’s RHS symbols.LetEbe the set of all valid expansions in a PPT T, letLbe the current leaf nodes of TandNbethe current non-leaf (rule) nodes of T. LetS(l)be the symbol of leaf l2LandR(n)represent theproduction rule of non-leaf node n2N.4.1.1 G LOBAL TREE INFORMATION AT THE LEAVESTo compute the probability distribution over the set E, the R3NN first computes a distributed rep-resentation for each leaf node that contains global tree information. To accomplish this, for everyleaf nodel2Lin the tree we retrieve its distributed representation (S(l)). We now do a standardrecursive bottom-to-top, RHS !LHS pass on the network, by going up the tree and applying fR(n)for every non-leaf node n2Non its RHS node representations (see Figure 3(a)). These networksfR(n)produce a node representation which is input into the parent’s rule network and so on until wereach the root node.Once at the root node, we effectively have a fixed-dimensionality global tree representation (root)for the start symbol. The problem is that this representation has lost any notion of tree position. Tosolve this problem R3NN now does what is effectively a reverse-recursive pass which starts at theroot node with (root)as input and moves towards the leaf nodes (see Figure 3(b)).More concretely, we start with the root node representation (root)and use that as input into therule network gR(root)whereR(root)is the production rule that is applied to the start symbol inT. This produces a representation 0(c)for each RHS node cofR(root). Ifcis a non-leaf node,we iteratively apply this procedure to c,i.e., process0(c)usinggR(c)to get representations 0(cc)for every RHS node ccofR(c), etc. Ifcis a leaf node, we now have a leaf representation 0(c)which has an information path to (root)and thus to every other leaf node in the tree. Once thereverse-recursive process is complete, we now have a distributed representation 0(l)for every leafnodelwhich contains global tree information. While (l1)and(l2)could be equal for leaf nodeswhich have the same symbol type, 0(l1)and0(l2)will not be equal even if they have the samesymbol type because they are at different positions in the tree.5Published as a conference paper at ICLR 20174.1.2 E XPANSION PROBABILITIESGiven the global leaf representations 0(l), we can now straightforwardly acquire scores for eache2E. For expansion e, lete:rbe the expansion type (production rule r2Rthateapplies) andlete:lbe the leaf node lthate:ris applied to. ze=0(e:l)!(e:r)The score of an expansion iscalculated using ze=0(e:l)!(e:r). The probability of expansion eis simply the exponentiatednormalized sum over all scores: (e) =ezePe02Eeze0.An additional improvement that was found to help was to add a bidirectional LSTM (BLSTM) toprocess the global leaf representations right before calculating the scores. To do this, we first orderthe global leaf representations sequentially from left-most leaf node to right-mode leaf node. Wethen treat each leaf node as a time step for a BLSTM to process. This provides a sort of skipconnection between leaf nodes, which potentially reduces the path length that information needs totravel between leaf nodes in the tree. 
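The two passes and the expansion distribution described here can be condensed into a short numpy sketch. The grammar is a toy one, single tanh layers stand in for the deeper f_r and g_r networks, and both the BLSTM over leaves and the input-output conditioning are omitted; the point is only to show how the bottom-up pass followed by the top-down pass yields globally informed leaf representations, and how those are turned into a softmax over valid expansions.

import numpy as np

rng = np.random.default_rng(0)
M = 8  # dimensionality of every symbol and production-rule representation

# Toy grammar: e -> e op e | num, enough to exercise both passes.
RULES = [("e", ["e", "op", "e"]), ("e", ["num"])]
phi_sym = {s: rng.normal(size=M) for s in ["e", "op", "num"]}   # phi(s)
omega = [rng.normal(size=M) for _ in RULES]                     # omega(r)
# Single tanh layers stand in for the deeper f_r / g_r networks.
F = [rng.normal(size=(M, M * len(rhs))) for _, rhs in RULES]
G = [rng.normal(size=(M * len(rhs), M)) for _, rhs in RULES]

class Node:
    def __init__(self, symbol, rule=None, children=()):
        self.symbol, self.rule, self.children = symbol, rule, list(children)

def leaves(node):
    if not node.children:
        yield node
    for c in node.children:
        yield from leaves(c)

def recursive(node):
    """Bottom-up pass: leaves get phi(symbol), inner nodes apply f_r to the
    concatenation of their children's representations."""
    if not node.children:
        node.phi = phi_sym[node.symbol]
    else:
        kids = np.concatenate([recursive(c) for c in node.children])
        node.phi = np.tanh(F[node.rule] @ kids)
    return node.phi

def reverse_recursive(node, phi_prime):
    """Top-down pass: every node receives a representation phi' that has an
    information path to the root and hence to every other node."""
    node.phi_prime = phi_prime
    if node.children:
        out = np.tanh(G[node.rule] @ phi_prime)
        for i, c in enumerate(node.children):
            reverse_recursive(c, out[i * M:(i + 1) * M])

def expansion_distribution(root):
    recursive(root)
    reverse_recursive(root, root.phi)
    exps, scores = [], []
    for leaf in leaves(root):
        for r, (lhs, _) in enumerate(RULES):
            if lhs == leaf.symbol:                        # valid expansions only
                exps.append((leaf, r))
                scores.append(leaf.phi_prime @ omega[r])  # z_e = phi'(e.l) . omega(e.r)
    z = np.exp(np.array(scores) - np.max(scores))
    return exps, z / z.sum()

# Partial program tree: the root e was expanded with rule 0, its children are leaves.
root = Node("e", rule=0, children=[Node("e"), Node("op"), Node("e")])
for (leaf, r), p in zip(*expansion_distribution(root)):
    print(leaf.symbol, "->", " ".join(RULES[r][1]), round(float(p), 3))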
The BLSTM hidden states are then used in the score calculationrather than the leaves themselves.The R3NN can be seen as an extension and combination of several previous tree-based models,which were mainly developed in the context of natural language processing (Le & Zuidema, 2014;Paulus et al., 2014; Irsoy & Cardie, 2013).5 C ONDITIONING WITH INPUT /OUTPUT EXAMPLESNow that we have defined a generation process over tree-structured programs, we need a way ofconditioning this generation process on a set of input/output examples. The set of input/outputexamples provide a nearly complete specification for the desired output program, and so a goodencoding of the examples is crucial to the success of our program generator. For the most part, thisexample encoding needs to be domain-specific, since different DSLs have different inputs (somemay operate over integers, some over strings, etc.). Therefore, in our case, we use an encodingadapted to the input-output strings that our DSL operates over. We also investigate different ways ofconditioning program search on the learnt example input-output encodings.5.1 E NCODING INPUT /OUTPUT EXAMPLESThere are two types of information that string manipulation programs need to extract from input-output examples: 1) constant strings, such as “ @domain.com ” or “ .”, which appear in all outputexamples; 2) substring indices in input where the index might be further defined by a regular expres-sion. These indices determine which parts of the input are also present in the output. To simplify theDSL, we assume that there is a fixed finite universe of possible constant strings that could appear inprograms. Therefore we focus on extracting the second type of information, the substring indices.In earlier hand-engineered systems such as FlashFill, this information was extracted from the input-output strings by running the Longest Common Substring algorithm, a dynamic programming algo-rithm that efficiently finds matching substrings in string pairs. To extract substrings, FlashFill runsLCS on every input-output string pair in the I/O set to get a set of substring candidates. It then takesthe entire set of substring candidates and simply tries every possible regex and constant index thatcan be used at substring boundaries, exhaustively searching for the one which is the most “general”,where generality is specified by hand-engineered heuristics.In contrast to these previous methods, instead of hand-designing a complicated algorithm to extractregex-based substrings, we develop neural network based architectures that are capable of learning toextract and produce continuous representations of the likely regular expressions given I/O examples.5.1.1 B ASELINE LSTM ENCODEROur first I/O encoding network involves running two separate deep bidirectional LSTM networks forprocessing the input and the output string in each example pair. For each pair, it then concatenatesthe topmost hidden representation at every time step to produce a 4HT-dimensional feature vectorper I/O pair, where Tis the maximum string length for any input or output string, and His thetopmost LSTM hidden dimension.6Published as a conference paper at ICLR 2017We then concatenate the encoding vectors across all I/O pairs to get a vector representation of the en-tire I/O set. 
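Anticipating the cross-correlation encoder introduced in the next subsection, its summed variant can be sketched in a few lines of numpy. The feature blocks below are random stand-ins for the LSTM outputs of one input-output pair, and the alignment bookkeeping is kept deliberately simple (this sketch produces 2T-1 overlap offsets, whereas the paper reports 2(T-1) features per pair).

import numpy as np

def cross_correlation_encoding(inp_feats, out_feats):
    """inp_feats, out_feats: (T, D) feature blocks for one I/O pair. Slide the
    output block over the input block and, for every overlap offset, sum the
    element-wise products of the aligned time steps."""
    T = inp_feats.shape[0]
    feats = []
    for shift in range(-(T - 1), T):            # every overlapping alignment
        if shift >= 0:
            a, b = inp_feats[shift:], out_feats[:T - shift]
        else:
            a, b = inp_feats[:T + shift], out_feats[-shift:]
        feats.append(float(np.sum(a * b)))      # summed variant of the encoder
    return np.array(feats)

rng = np.random.default_rng(1)
enc = cross_correlation_encoding(rng.normal(size=(7, 4)), rng.normal(size=(7, 4)))
print(enc.shape)   # (13,) alignment features for this example pair

The diffused variant would concatenate the per-step products instead of summing them, and the LSTM-sum variant would run a bidirectional LSTM over each aligned pair of blocks.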
This encoding is conceptually straightforward and has very little prior knowledge aboutwhat operations are being performed over the strings, i.e., substring, constant, etc., which mightmake it difficult to discover substring indices, especially the ones based on regular expressions.5.1.2 C ROSS CORRELATION ENCODERTo help the model discover input substrings that are copied to the output, we designed an novel I/Oexample encoder to compute the cross correlation between each input and output example repre-sentation. We used the two output tensors of the LSTM encoder (discussed above) as inputs to thisencoder. For each example pair, we first slide the output feature block over the input feature blockand compute the dot product between the respective position representation. Then, we sum over alloverlapping time steps. Features of all pairs are then concatenated to form a 2(T1)-dimensionalvector encoding for all example pairs. There are 2(T1)possible alignments in total betweeninput and output feature blocks. An illustration of the cross-correlation encoder is shown in Figure 9.We also designed the following variants of this encoder.Diffused Cross Correlation Encoder: This encoder is identical to the Cross Correlation encoderexcept that instead of summing over overlapping time steps after the element-wise dot product, wesimply concatenate the vectors corresponding to all time steps, resulting in a final representation thatcontains 2(T1)Tfeatures for each example pair.LSTM-Sum Cross Correlation Encoder: In this variant of the Cross Correlation encoder, insteadof doing an element-wise dot product, we run a bidirectional LSTM over the concatenated featureblocks of each alignment. We represent each alignment by the LSTM hidden representation of thefinal time step leading to a total of 2H2(T1)features for each example pair.Augmented Diffused Cross Correlation Encoder: For this encoder, the output of each characterposition of the Diffused Cross Correlation encoder is combined with the character embedding at thisposition, then a basic LSTM encoder is run over the combined features to extract a 4H-dimensionalvector for both the input and the output streams. The LSTM encoder output is then concatenatedwith the output of the Diffused Cross Correlation encoder forming a (4H+T(T1))-dimensionalfeature vector for each example pair.5.2 C ONDITIONING PROGRAM SEARCH ON EXAMPLE ENCODINGSOnce the I/O example encodings have been computed, we can use them to perform conditionalgeneration of the program tree using the R3NN model. There are a number of ways in which thePPT generation model can be conditioned using the I/O example encodings depending on where theI/O example information is inserted in the R3NN model. We investigated three locations to injectexample encodings:1) Pre-conditioning: where example encodings are concatenated to the encoding of each tree leaf,and then passed to a conditioning network before the bottom-up recursive pass over the programtree. The conditioning network can be either a multi-layer feedforward network, or a bidirectionalLSTM network running over tree leaves. 
Running an LSTM over tree leaves allows the model tolearn more about the relative position of each leaf node in the tree.2) Post-conditioning: After the reverse-recursive pass, example encodings are concatenated to theupdated representation of each tree leaf and then fed to a conditioning network before computingthe expansion scores.3) Root-conditioning: After the recursive pass over the tree, the root encoding is concatenated tothe example encodings and passed to a conditioning network. The updated root representation isthen used to drive the reverse-recursive pass.Empirically, pre-conditioning worked better than either root- or post- conditioning. In addition,conditioning at all 3 places simultaneously did not cause a significant improvement over justpre-conditioning. Therefore, for the experimental section, we report models which only use pre-conditioning.7Published as a conference paper at ICLR 20176 E XPERIMENTSIn order to evaluate and compare variants of the previously described models, we generate a datasetrandomly from the DSL. To do so, we first enumerate all possible programs under the DSL up toa specific number of instructions, which are then partitioned into training, validation and test sets.In order to have a tractable number of programs, we limited the maximum number of instructionsfor programs to be 13. Length 13 programs are important for this specific DSL because all largerprograms can be written as compositions of sub-programs of length at most 13. The semantics oflength 13 programs therefore constitute the “atoms” of this particular DSL.In testing our model, there are two different categories of generalization. The first is input/outputgeneralization, where we are given a new set of input/output examples as well as a program with aspecific tree that we have seen during training. This represents the model’s capacity to be appliedon new data. The second category is program generalization, where we are given both a previouslyunseen program tree in addition to unseen input/output examples. Therefore the model needs tohave a sufficient enough understanding of the semantics of the DSL that it can construct novelcombinations of operations. For all reported results, training sets correspond to the first type ofgeneralization since we have seen the program tree but not the input/output pairs. Test sets representthe second type of generalization, as they are trees which have not been seen before on input/outputpairs that have also not been seen before.In this section, we compare several different variants of our model. We first evaluate the effect ofeach of the previously described input/output encoders. We then evaluate the R3NN model against asimple recurrent model called io2seq, which is basically an LSTM that takes as input the input/outputconditioning vector and outputs a sequence of DSL symbols that represents a linearized programtree. Finally, we report the results of the best model on the length 13 training and testing sets, aswell as on a set of 238 benchmark functions.6.1 S ETUP AND HYPERPARAMETERS SETTINGSFor training the R3NN, two hyperparameters that were crucial for stabilizing training were the useof hyperbolic tangent activation functions in both R3NN (other activations such as ReLU moreconsistently diverged during our initial experiments) and cross-correlation I/O encoders and the useof minibatches of length 8. Additionally, for all results, the program tree generation is conditionedon a set of 10 input/output string pairs. 
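A minimal sketch of the pre-conditioning variant reported in the experiments: the input-output encoding is concatenated to each leaf representation and passed through a conditioning network before the bottom-up recursive pass. A single tanh layer stands in for the feedforward or BLSTM conditioning networks, and the dimensions are arbitrary example values.

import numpy as np

def precondition(leaf_reprs, io_encoding, W_cond):
    """Concatenate the I/O-set encoding to every leaf representation and map
    it back to the leaf dimensionality before the recursive pass."""
    return [np.tanh(W_cond @ np.concatenate([phi, io_encoding]))
            for phi in leaf_reprs]

rng = np.random.default_rng(2)
M, E = 8, 13                        # leaf dimension and I/O-encoding dimension
leaf_reprs = [rng.normal(size=M) for _ in range(5)]
conditioned = precondition(leaf_reprs, rng.normal(size=E), rng.normal(size=(M, M + E)))
print(len(conditioned), conditioned[0].shape)   # 5 leaves, each of dimension M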
We used ADAM (Kingma & Ba, 2014) to optimize thenetworks with a learning rate of 0.001. Network weights used the default torch initializations.Due to the difficulty of batching tree-based neural networks since each sample in a batch has apotentially different tree structure, we needed to do batching sequentially. Therefore for each mini-batch of size N, we accumulated the gradients for each sample. After all N sample gradients wereaccumulated, we updated the parameters and reset the accumulated gradients. Due to this sequentialprocessing, in order to train models in a reasonable time, we limited our batch sizes to between8-12. Despite the computational inefficiency, batching was critical to successfully train an R3NN,as online learning often caused the network to diverge.For each latent function and set of input/output examples that we test on, we report whether we hada success after sampling 100 functions from the model and testing all 100 to see if one of thesefunctions is equivalent to the latent function. Here we consider two functions to be equivalent withrespect to a specific input/output example set if the functions output the same strings when run onthe inputs. Under this definition, two functions can have a different set of operations but still beequivalent with respect to a specific input-output set.We restricted the maximum size of training programs to be 13 because of two computational consid-erations. As described earlier, one difficulty was in batching tree-based neural networks of differentstructure and the computational cost of batching increases with the increase in size of the programtrees. The second issue is that valid I/O strings for programs often grow with the program length,in the sense that for programs of length 40 a minimal valid I/O string will typically be much longerthan a minimal valid I/O string for length 20 programs. For example, for a program such as (Concat(ConstStr \longstring") (Concat (ConstStr \longstring") (Concat (ConstStr \longstring")...))) , the valid output string would be \longstringlongstringlongstring..." which could be many8Published as a conference paper at ICLR 2017I/O Encoding Train TestLSTM 88% 88%Cross Correlation (CC) 67% 65%Diffused CC 89% 88%LSTM-sum CC 90% 91%Augmented diffused CC 91% 91%Table 1: The effect of different input/output encoders on accuracy. Each result used 100 samples.There is almost no generalization error in the results.Sampling Train Testio2seq 44% 42%Table 2: Testing the I/O-vector-to-sequence model. Each result used 100 samples.hundreds of characters long. Because of limited GPU memory, the I/O encoder models can quicklyrun out of memory.6.2 E XAMPLE ENCODINGIn this section, we evaluate the effect of several different input/output example encoders. To controlfor the effect of the tree model, all results here used an R3NN with fixed hyperparameters to generatethe program tree. Table 1 shows the performance of several of these input/output example encoders.We can see that the summed cross-correlation encoder did not perform well, which can be due tothe fact that the sum destroys positional information that might be useful for determining specificsubstring indices. The LSTM-sum and the augmented diffused cross-correlation models did thebest. Surprisingly, the LSTM encoder was capable of finding nearly 88% of all programs withouthaving any prior knowledge explicitly built into the architecture. We use 100 samples for evaluatingthe Train and Test sets. 
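The sequential batching described above (accumulate per-sample gradients, then make a single update per minibatch) looks as follows in a PyTorch-style sketch. The linear model and the synthetic data are placeholders for the R3NN and its I/O encoder, whose per-sample tree structures are exactly what prevents ordinary tensor batching.

import torch
from torch import nn, optim

model = nn.Linear(16, 4)                      # stand-in for the R3NN
opt = optim.Adam(model.parameters(), lr=1e-3)

def loss_for_sample(sample):
    x, target = sample
    return nn.functional.cross_entropy(model(x).unsqueeze(0), target)

batch = [(torch.randn(16), torch.tensor([i % 4])) for i in range(8)]

opt.zero_grad()
for sample in batch:                          # process the minibatch sequentially
    loss = loss_for_sample(sample)
    (loss / len(batch)).backward()            # gradients accumulate across samples
opt.step()                                    # one parameter update per minibatch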
The training performance is sometimes slightly lower because there areclose to 5 million training programs but we only look at less than 2 million of these programs duringtraining. We sample a subset of only 1000 training programs from the 5 million program set toreport the training results in the tables. The test sets also consist of 1000 programs.6.3 IO2SEQIn this section, we motivate the use of the R3NN by testing whether a simpler model can also beused to generate programs. The io2seq model is an LSTM whose initial hidden and cell statesare a function of the input/output encoding vector. The io2seq model then generates a linearizedtree of a program symbol-by-symbol. An example of what a linearized program tree looks like is(S(e(f(ConstStr \@") ConstStr )f)e)S, which represents the program tree that returns the constantstring “@”. Predicting a linearized tree using an LSTM was also done in the context of pars-ing (Vinyals et al., 2015). For the io2seq model, we used the LSTM-sum cross-correlation I/Oconditioning model.The results in Table 2 show that the performance of the io2seq model at 100 samples per latent testfunction is far worse than the R3NN, at around 42% versus 91%, respectively. The reasons for thatcould be that the io2seq model needs to perform far more decisions than the R3NN, since the io2seqmodel has to predict the parentheses symbols that determine at which level of the tree a particularsymbol is at. For example, the io2seq model requires on the order of 100 decisions for length 13programs, while the R3NN requires no more than 13.6.4 E FFECT OF SAMPLING MULTIPLE PROGRAMSFor the best R3NN model that we trained, we also evaluated the effect that a different number ofsamples per latent function had on performance. The results are shown in Table 3. The increase ofthe model’s performance as the sample size increases hints that the model has a notion of what typeof program satisfies a given I/O pair, but it might not be that certain about the details such as whichregex to use, etc. By 300 samples, the model is nearing perfect accuracy on the test sets.9Published as a conference paper at ICLR 2017Sampling Train Test1-best 60% 63%1-sample 56% 57%10-sample 81% 79%50-sample 91% 89%100-sample 94% 94%300-sample 97% 97%Table 3: The effect of sampling multiple programs on accuracy. 1-best is deterministically choosingthe expansion with highest probability at each step.303540455055601 2 3 4 5 6 7 8 9 10AccuracyNumber of I/O Examples to train the EncoderModel accuracy with increasing I/O examplesTrain TestFigure 4: The train and test accuracies for models trained with different number of input-outputexamples.6.5 E FFECT OF NUMBER OF INPUT -OUTPUT EXAMPLESWe evaluate the effect of varying the number of input-output examples used to train the Input-outputencoders. The 1-best accuracy for train and test data for models trained for 74 epochs is shown inFigure 4. As expected, the accuracy increases with increase in number of input-output examples,since more examples add more information to the encoder and constrain the space of consistentprograms in the DSL.6.6 F LASH FILLBENCHMARKSWe also evaluate our learnt models on 238 real-world FlashFill benchmarks obtained from the Mi-crosoft Excel team and online help-forums. These benchmarks involve string manipulation tasksdescribed using input-output examples. We evaluate two models – one with a cross correlation en-coder trained on 5 input-output examples and another trained on 10 input-output examples. 
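The sampling-based success criterion used in these tables reduces to the check below: draw k candidate programs conditioned on the examples and count a success if any of them is functionally consistent with the I/O set. Here sample_program is a hypothetical stand-in for drawing one candidate from the trained model, not an API defined in the paper.

def consistent(program, examples):
    """A program is consistent if it maps every input string to its output."""
    try:
        return all(program(i) == o for i, o in examples)
    except Exception:            # sampled candidates may crash on some inputs
        return False

def success_at_k(sample_program, examples, k=100):
    """Success if any of k sampled candidates reproduces all the outputs."""
    return any(consistent(sample_program(examples), examples) for _ in range(k))

# Hypothetical usage, e.g. with the interpreter sketched earlier:
# success_at_k(lambda ex: model.sample(ex), io_pairs, k=100)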
Boththe models were trained on randomly sampled programs from the DSL upto size 13 with randomlygenerated input-output examples.The distribution of the size of smallest DSL programs needed to solve the benchmark tasks is shownin Figure 5(a), which varies from 4 to 63. The figure also shows the number of benchmarks forwhich our model was able to learn the program using 5 input-output examples using samples oftop-2000 learnt programs. In total, the model is able to learn programs for 91 tasks (38.2%). Sincethe model was trained for programs upto size 13, it is not surprising that it is not able to solve tasksthat need larger program size. There are 110 FlashFill benchmarks that require programs upto size13, out of which the model is able to solve 82.7% of them.The effect of sampling multiple learnt programs instead of only top program is shown in Figure 5(b).With only 10 samples, the model can already learn about 13% of the benchmarks. We observea steady increase in performance upto about 2000 samples, after which we do not observe anysignificant improvement. Since there are more than 2 million programs in the DSL of length 11itself, the enumerative techniques with uniform search do not scale well (Alur et al., 2015).We also evaluate a model that is learnt with 10 input-output examples per benchmark. This modelcan only learn programs for about 29% of the FlashFill benchmarks. Since the FlashFill benchmarkscontained only 5 input-output examples for each task, to run the model that took 10 examples asinput, we duplicated the I/O examples. Our models are trained on the synthetic training dataset10Published as a conference paper at ICLR 2017051015202530354045504 7 9 10 11 13 15 17 19 24 25 27 30 31 37 50 59 63Number of BenchmarksSize of smallest programs for FlashFill BenchmarksNumber of FlashFill Benchmarks solvedTotal SolvedSampling Solved Benchmarks10 13%50 21%100 23%200 29%500 33%1000 34%2000 38%5000 38%(a) (b)Figure 5: (a) The distribution of size of programs needed to solve FlashFill tasks and the perfor-mance of our model, (b) The effect of sampling for trying top-k learnt programs.Inputv Output[CPT-00350 [CPT-00350][CPT-00340] [CPT-00340][CPT-114563] [CPT-114563][CPT-1AB02 [CPT-1AB02][CPT-00360 [CPT-00360]Inputv Output732606129 0x73430257526 0x43444004480 0x44371255254 0x37635272676 0x63Inputv OutputJohn Doyle John D.Matt Walters Matt W.Jody Foster Jody F.Angela Lindsay Angela L.Maria Schulte Maria S.(a) (b) (c)Figure 6: Some example solved benchmarks: (a) cleaning up medical codes with closing brackets,(b) generating Hex numbers with first two digits, (c) transforming names to firstname and last initial.that is generated uniformly from the DSL. Because of the discrepancy between the training datadistribution (uniform) and auxiliary task data distribution, the model with 10 input/output examplesmight not perform the best on the FlashFill benchmark distribution, even though it performs betteron the synthetic data distribution (on which it is trained) as shown in Figure 4.Our model is able to solve majority of FlashFill benchmarks that require learning programs withupto 3 Concat operations. We now describe a few of these benchmarks, also shown in Fig-ure 6. An Excel user wanted to clean a set of medical billing records by adding a missing “]”to medical codes as shown in Figure 6(a). Our system learns the following program given these5 input-output examples: Concat (SubStr (v,ConstPos (0),(d,-1,End)),ConstStr (“]”)). 
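To see what this learned program computes, here is a plain-Python transliteration of Concat(SubStr(v, ConstPos(0), (Digits, -1, End)), ConstStr("]")); the prose walk-through follows immediately below, and the regex used for the Digits token is our own choice.

import re

def learned_program(v):
    """Keep everything up to the end of the last run of digits, then append "]"."""
    last_digits = list(re.finditer(r"\d+", v))[-1]
    return v[:last_digits.end()] + "]"

for v in ["[CPT-00350", "[CPT-00340]", "[CPT-114563]", "[CPT-1AB02"]:
    print(v, "->", learned_program(v))   # every output ends with a single "]"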
The pro-gram concatenates the substring between the start of the input string and the position of the lastdigit regular expression with the constant string “]”. Another task that required user to trans-form some numbers into a hex format is shown in Figure 6(b). Our system learns the followingprogram: Concat (ConstStr (“0x”), SubStr (v,ConstPos (0),ConstPos(2))). For some benchmarkswith long input strings, it is still able to learn regular expressions to extract the desired sub-string, e.g. it learns a program to extract “NancyF” from the string “123456789,freehafer ,drew,nancy,19700101,11/1/2007,NancyF@north.com,1230102,123 1st Avenue,Seattle,wa,09999”.Our system is currently not able to learn programs for benchmarks that require 4 or more Con-catoperations. Two such benchmarks are shown in Figure 7. The task of combining names inFigure 7(a) requires 6 Concat arguments, whereas the phone number transformation task in Fig-ure 7(b) requires 5 Concat arguments. This is mainly because of the scalability issues in trainingwith programs of larger size. There are also a few interesting benchmarks where the R3NN modelsgets very close to learning the desired program. For example, for the task “ Bill Gates ”!“Mr.Bill Gates ”, it learns a program that generates “ Mr.Bill Gates ” (without the whitespace), and forthe task “617-444-5454” !“(617) 444-5454”, it learns a program that generates the string “(617444-5454”.11Published as a conference paper at ICLR 2017Inputv Output1 John James Paul John, James, and Paul.2 Tom Mike Bill Tom, Mike, and Bill.3 Marie Nina John Marie, Nina, and John.4Reggie Anna Adam Reggie, Anna, and Adam.Inputv Output1(425) 221 6767 425-221-67672 206.225.1298 206-225-12983 617-224-9874 617-224-98744 425.118.9281 425-118-9281(a) (b)Figure 7: Some unsolved benchmarks: (a)Combining names by different delimiters. (b) Transform-ing phone numbers to consistent format.7 R ELATED WORKWe have seen a renewed interest in recent years in the area of Program Induction and Synthesis.In the machine learning community, a number of promising neural architectures have been pro-posed to perform program induction . These methods have employed architectures inspired fromcomputation modules (Turing Machines, RAM) (Graves et al., 2014; Kurach et al., 2015; Reed &de Freitas, 2015; Neelakantan et al., 2015) or common data structures such as stacks used in manyalgorithms (Joulin & Mikolov, 2015). These approaches represent the atomic operations of the net-work in a differentiable form, which allows for efficient end-to-end training of a neural controller.However, unlike our approach that learns comprehensible complete programs, many of these ap-proaches learn only the program behavior ( i.e., they produce desired outputs on new input data).Some recently proposed methods (Kurach et al., 2015; Gaunt et al., 2016; Riedel et al., 2016; Bunelet al., 2016) do learn interpretable programs but these techniques require learning a separate neuralnetwork model for each individual task, which is undesirable in many synthesis settings where wewould like to learn programs in real-time for a large number of tasks. Liang et al. (2010) restrictthe problem space with a probabilistic context-free grammar and introduce a new representationof programs based on combinatory logic, which allows for sharing sub-programs across multipletasks. They then take a hierarchical Bayesian approach to learn frequently occurring substructuresof programs. 
Our approach, instead, uses neural architectures to condition the search space of pro-grams, and does not require additional step of representing program space using combinatory logicfor allowing sharing.The DSL-based program synthesis approach has also seen a renewed interest recently (Alur et al.,2015). It has been used for many applications including synthesizing low-level bitvector implemen-tations (Solar-Lezama et al., 2005), Excel macros for data manipulation (Gulwani, 2011; Gulwaniet al., 2012), superoptimization by finding smaller equivalent loop bodies (Schkufza et al., 2013),protocol synthesis from scenarios (Udupa et al., 2013), synthesis of loop-free programs (Gulwaniet al., 2011), and automated feedback generation for programming assignments (Singh et al., 2013).The synthesis techniques proposed in the literature generally employ various search techniques in-cluding enumeration with pruning, symbolic constraint solving, and stochastic search, while sup-porting different forms of specifications including input-output examples, partial programs, programinvariants, and reference implementation.In this paper, we consider input-output example based specification over the hypothesis space de-fined by a DSL of string transformations, similar to that of FlashFill (without conditionals) (Gul-wani, 2011). The key difference between our approach over previous techniques is that our systemis trained completely in an end-to-end fashion, while previous techniques require significant manualeffort to design heuristics for efficient search. There is some work on guiding the program search us-ing learnt clues that suggest likely DSL expansions, but the clues are learnt over hand-coded textualfeatures of examples (Menon et al., 2013). Moreover, their DSL consists of composition of about100 high-level text transformation functions such as count anddedup , whereas our DSL consists oftree structured programs over richer regular expression based substring constructs.There is also a recent line of work on learning probabilistic models of code from a large number ofcode repositories ( big code ) (Raychev et al., 2015; Bielik et al., 2016; Hindle et al., 2016), whichare then used for applications such as auto-completion of partial programs, inference of variableand method names, program repair, etc. These language models typically capture only the syntactic12Published as a conference paper at ICLR 2017properties of code, unlike our approach that also tries to capture the semantics to learn the desiredprogram. The work by Maddison & Tarlow (2014) addresses the problem of learning structuredgenerative models of source code but both their model and application domain are different fromours. Piech et al. (2015) use an NPM-RNN model to embed program ASTs, where a subtree ofthe AST rooted at a node n is represented by a matrix obtained by combining representations ofthe children of node n and the embedding matrix of the node n itself (which corresponds to itsfunctional behavior). The forward pass in our R3NN architecture from leaf nodes to the root nodeis, at a high-level, similar, but we use a distributed representation for each grammar symbol thatleads to a different root representation. Moreover, R3NN also performs a reverse-recursive pass toensure all nodes in the tree encode global information about other nodes in the tree. 
Finally, theR3NN network is then used to incrementally build a tree to synthesize a program.The R3NN model employed in our work is related to several tree and graph structured neural net-works present in the NLP literature (Le & Zuidema, 2014; Paulus et al., 2014; Irsoy & Cardie, 2013).The Inside-Outside Recursive Neural Network (Le & Zuidema, 2014) in particular is most similar tothe R3NN, where they generate a parse tree incrementally by using global leaf-level representationsto determine which expansions in the parse tree to take next.8 C ONCLUSIONWe have proposed a novel technique called Neuro-Symbolic Program Synthesis that is able to con-struct a program incrementally based on given input-output examples. To do so, a new neuralarchitecture called Recursive-Reverse-Recursive Neural Network is used to encode and expand apartial program tree into a full program tree. Its effectiveness at example-based program synthesisis demonstrated, even when the program has not been seen during training.These promising results open up a number of interesting directions for future research. For example,we took a supervised-learning approach here, assuming availability of target programs during train-ing. In some scenarios, we may only have access to an oracle that returns the desired output givenan input. In this case, reinforcement learning is a promising framework for program synthesis.REFERENCESAlur, Rajeev, Bod ́ık, Rastislav, Dallal, Eric, Fisman, Dana, Garg, Pranav, Juniwal, Garvit, Kress-Gazit, Hadas, Madhusudan, P., Martin, Milo M. K., Raghothaman, Mukund, Saha, Shamwaditya,Seshia, Sanjit A., Singh, Rishabh, Solar-Lezama, Armando, Torlak, Emina, and Udupa, Ab-hishek. Syntax-guided synthesis. In Dependable Software Systems Engineering , pp. 1–25. 2015.Bielik, Pavol, Raychev, Veselin, and Vechev, Martin T. PHOG: probabilistic model for code. InICML , pp. 2933–2942, 2016.Biermann, Alan W. The inference of regular lisp programs from examples. IEEE transactions onSystems, Man, and Cybernetics , 8(8):585–600, 1978.Bunel, Rudy, Desmaison, Alban, Kohli, Pushmeet, Torr, Philip H. S., and Kumar, M. Pawan. Adap-tive neural compilation. CoRR , abs/1605.07969, 2016. URL http://arxiv.org/abs/1605.07969 .Gaunt, Alexander L, Brockschmidt, Marc, Singh, Rishabh, Kushman, Nate, Kohli, Pushmeet, Tay-lor, Jonathan, and Tarlow, Daniel. Terpret: A probabilistic programming language for programinduction. arXiv preprint arXiv:1608.04428 , 2016.Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural turing machines. arXiv preprintarXiv:1410.5401 , 2014.Gulwani, Sumit. Automating string processing in spreadsheets using input-output examples. InPOPL , pp. 317–330, 2011.Gulwani, Sumit, Jha, Susmit, Tiwari, Ashish, and Venkatesan, Ramarathnam. Synthesis of loop-freeprograms. In PLDI , pp. 62–73, 2011.Gulwani, Sumit, Harris, William, and Singh, Rishabh. Spreadsheet data manipulation using exam-ples. Communications of the ACM , Aug 2012.13Published as a conference paper at ICLR 2017Hindle, Abram, Barr, Earl T., Gabel, Mark, Su, Zhendong, and Devanbu, Premkumar T. On thenaturalness of software. Commun. ACM , 59(5):122–131, 2016.Irsoy, Orzan and Cardie, Claire. Bidirectional recursive neural networks for token-level labelingwith structure. In NIPS Deep Learning Workshop , 2013.Joulin, Armand and Mikolov, Tomas. Inferring algorithmic patterns with stack-augmented recurrentnets. In NIPS , pp. 190–198, 2015.Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. 
In ICLR , 2014.Kurach, Karol, Andrychowicz, Marcin, and Sutskever, Ilya. Neural random-access machines. arXivpreprint arXiv:1511.06392 , 2015.Le, Phong and Zuidema, Willem. The inside-outside recursive neural network model for dependencyparsing. In EMNLP , pp. 729–739, 2014.Liang, Percy, Jordan, Michael I., and Klein, Dan. Learning programs: A hierarchical Bayesianapproach. In ICML , pp. 639–646, 2010.Maddison, Chris J and Tarlow, Daniel. Structured generative models of natural source code. InICML , pp. 649–657, 2014.Menon, Aditya Krishna, Tamuz, Omer, Gulwani, Sumit, Lampson, Butler W., and Kalai, Adam. Amachine learning framework for programming by example. In ICML , pp. 187–195, 2013.Neelakantan, Arvind, Le, Quoc V , and Sutskever, Ilya. Neural programmer: Inducing latent pro-grams with gradient descent. arXiv preprint arXiv:1511.04834 , 2015.Paulus, Romain, Socher, Richard, and Manning, Christopher D. Global belief recursive neuralnetworks. pp. 2888–2896, 2014.Piech, Chris, Huang, Jonathan, Nguyen, Andy, Phulsuksombati, Mike, Sahami, Mehran, andGuibas, Leonidas J. Learning program embeddings to propagate feedback on student code. InICML , pp. 1093–1102, 2015.Raychev, Veselin, Vechev, Martin T., and Krause, Andreas. Predicting program properties from ”bigcode”. In POPL , pp. 111–124, 2015.Reed, Scott and de Freitas, Nando. Neural programmer-interpreters. arXiv preprintarXiv:1511.06279 , 2015.Riedel, Sebastian, Bosnjak, Matko, and Rockt ̈aschel, Tim. Programming with a differentiable forthinterpreter. CoRR , abs/1605.06640, 2016. URL http://arxiv.org/abs/1605.06640 .Schkufza, Eric, Sharma, Rahul, and Aiken, Alex. Stochastic superoptimization. In ASPLOS , pp.305–316, 2013.Singh, Rishabh and Solar-Lezama, Armando. Synthesizing data structure manipulations from sto-ryboards. In SIGSOFT FSE , pp. 289–299, 2011.Singh, Rishabh, Gulwani, Sumit, and Solar-Lezama, Armando. Automated feedback generation forintroductory programming assignments. In PLDI , pp. 15–26, 2013.Solar-Lezama, Armando. Program Synthesis By Sketching . PhD thesis, EECS Dept., UC Berkeley,2008.Solar-Lezama, Armando, Rabbah, Rodric, Bodik, Rastislav, and Ebcioglu, Kemal. Programming bysketching for bit-streaming programs. In PLDI , 2005.Summers, Phillip D. A methodology for lisp program construction from examples. Journal of theACM (JACM) , 24(1):161–175, 1977.Udupa, Abhishek, Raghavan, Arun, Deshmukh, Jyotirmoy V ., Mador-Haim, Sela, Martin, MiloM. K., and Alur, Rajeev. TRANSIT: specifying protocols with concolic snippets. In PLDI , pp.287–296, 2013.14Published as a conference paper at ICLR 2017JConcat(f1;;fn)Kv= Concat( Jf1Kv;;JfnKv)JConstStr(s)Kv=sJSubStr(v;pl;pr)Kv=v[JplKv::JprKv]JConstPos(k)Kv=k>0?k: len(s) +kJ(r;k;Start) Kv=Start ofkthmatch of r in vfrom beginning (end if k<0)J(r;k;End) Kv=End ofkthmatch of r in vfrom beginning (end if k<0)Figure 8: The semantics of the DSL for string transformations.Figure 9: The cross correlation encoder to encode a single input-output example.Vinyals, Oriol, Kaiser, Lukasz, Koo, Terry, Petrov, Slav, Sutskever, Ilya, and Hinton, Geoffrey.Grammar as a foreign language. In ICLR , 2015.A D OMAIN -SPECIFIC LANGUAGE FOR STRING TRANSFORMATIONSThe semantics of the DSL programs is shown in Figure 8. The semantics of a Concat expressionis to concatenate the results of recursively evaluating the constituent substring expressions fi. Thesemantics of ConstStr(s) is to simply return the constant string s. 
The semantics of a substring expression is to first evaluate the two position logics pl and pr to positions p1 and p2 respectively, and then return the substring corresponding to v[p1..p2]. Here s[i..j] denotes the substring of string s starting at index i (inclusive) and ending at index j (exclusive), and len(s) denotes its length. The semantics of the ConstPos(k) expression is to return k if k > 0, and len(s) + k if k < 0. The semantics of the position logic (r, k, Start) is to return the Start of the k-th match of r in v, counted from the beginning if k > 0 and from the end if k < 0.
rJiK6J7Ng
HJeqWztlg
ICLR.cc/2017/conference/-/paper78/official/review
{"title": "My thoughts", "rating": "5: Marginally below acceptance threshold", "review": "The paper discusses a method to learn interpretable hierarchical template representations from given data. The authors illustrate their approach on binary images.\n\nThe paper presents a novel technique for extracting interpretable hierarchical template representations based on a small set of standard operations. It is then shown how a combination of those standard operations translates into a task equivalent to a boolean matrix factorization. This insight is then used to formulate a message passing technique which was shown to produce accurate results for these types of problems.\n\nSummary:\n\u2014\u2014\u2014\nThe paper presents an novel formulation for extracting hierarchical template representations that has not been discussed in that form. Unfortunately the experimental results are on smaller scale data and extension of the proposed algorithm to more natural images seems non-trivial to me.\n\nQuality: I think some of the techniques could be described more carefully to better convey the intuition.\nClarity: Some of the derivations and intuitions could be explained in more detail.\nOriginality: The suggested idea is reasonable but limited to binary data at this point in time.\nSignificance: Since the experimental setup is somewhat limited according to my opinion, significance is hard to judge.\n\nDetails:\n\u2014\u2014\u2014\n1. My main concern is related to the experimental evaluation. While the discussed approach is valuable, its application seems limited to binary images at this point in time. Can the authors comment?\n\n2. There are existing techniques to extract representations of images which the authors may want to mention, e.g., work based on grammars.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Hierarchical compositional feature learning
["Miguel Lazaro-Gredilla", "Yi Liu", "D. Scott Phoenix", "Dileep George"]
We introduce the hierarchical compositional network (HCN), a directed generative model able to discover and disentangle, without supervision, the building blocks of a set of binary images. The building blocks are binary features defined hierarchically as a composition of some of the features in the layer immediately below, arranged in a particular manner. At a high level, HCN is similar to a sigmoid belief network with pooling. Inference and learning in HCN are very challenging and existing variational approximations do not work satisfactorily. A main contribution of this work is to show that both can be addressed using max-product message passing (MPMP) with a particular schedule (no EM required). Also, using MPMP as an inference engine for HCN makes new tasks simple: adding supervision information, classifying images, or performing inpainting all correspond to clamping some variables of the model to their known values and running MPMP on the rest. When used for classification, fast inference with HCN has exactly the same functional form as a convolutional neural network (CNN) with linear activations and binary weights. However, HCN’s features are qualitatively very different.
["Unsupervised Learning"]
https://openreview.net/forum?id=HJeqWztlg
https://openreview.net/pdf?id=HJeqWztlg
https://openreview.net/forum?id=HJeqWztlg&noteId=rJiK6J7Ng
Under review as a conference paper at ICLR 2017HIERARCHICAL COMPOSITIONAL FEATURE LEARNINGMiguel L ́azaro-Gredilla, Yi Liu, D. Scott Phoenix, Dileep GeorgeVicariousSan Francisco, CA, USAfmiguel,yiliu,scott,dileep g@vicarious.comABSTRACTWe introduce the hierarchical compositional network (HCN), a directed generativemodel able to discover and disentangle, without supervision, the building blocksof a set of binary images. The building blocks are binary features defined hierar-chically as a composition of some of the features in the layer immediately below,arranged in a particular manner. At a high level, HCN is similar to a sigmoid beliefnetwork with pooling. Inference and learning in HCN are very challenging andexisting variational approximations do not work satisfactorily. A main contributionof this work is to show that both can be addressed using max-product messagepassing (MPMP) with a particular schedule (no EM required). Also, using MPMPas an inference engine for HCN makes new tasks simple: adding supervision infor-mation, classifying images, or performing inpainting all correspond to clampingsome variables of the model to their known values and running MPMP on therest. When used for classification, fast inference with HCN has exactly the samefunctional form as a convolutional neural network (CNN) with linear activationsand binary weights. However, HCN’s features are qualitatively very different.1 I NTRODUCTIONDeep neural networks coupled with the availability of vast amounts of data have proved verysuccessful over the last few years at visual discrimination (Goodfellow et al., 2014; Kingma &Welling, 2013; LeCun et al., 1998; Mnih & Gregor, 2014). A basic desire of deep architectures is todiscover the blocks –or features– that compose an image (or in general, a sensory input) at differentlevels of abstraction. Tasks that require some degree of image understanding can be performed moreeasily when using representations based on these building blocks.It would make intuitive sense that if we were to train one of the above models (particularly, thosethat are generative, such as variational autoencoders or generative adversarial networks) on imagescontaining, e.g. text, the learned features would be individual letters, since those are the buildingblocks of the provided images. In addition to matching our intuition, a model that realizes (from noisyraw pixels) that the building blocks of text are letters, and is able to extract a representation basedon those, has found meaningful structure in the data, and can prove it by being able to efficientlycompress text images. Figure 1: Features extracted by HCN. Left: from multiple images. Right: from a single image.1Under review as a conference paper at ICLR 2017However, this is not the case with existing incarnations of the above models1. We can see in Fig. 1the features recovered by the hierarchical compositional network (HCN) from a single image with nosupervision. They appear to be reasonable building blocks and are easy to find for a human. Yet weare not aware of any model that can perform such apparently simple recovery with no supervision.The HCN is a multilayer generative model with features defined at each layer. A feature (at a givenposition) is defined as the composition of features of the layer immediately below (by specifying theirrelative positions). To increase flexibility, the positions of the composing features can be perturbedslightly with respect to their default values (pooling). 
This results in a latent variable model, withsome of the latent variables (the features) being shared for all images while others (the pool states)are specific for each image.Comparing HCN with other generative models for images, we note that existing models tend tohave at least one of the following limitations: a) priors are not rich enough; typically, the sources ofvariation are not distributed among the layers of the network, and instead the generative model isexpressed as X=f(Y)+"whereYand"are two set of random variables, Xis the generated imageandf()is the network, i.e., the entire network behaves as a sophisticated deterministic function, b)the inference method (usually a separate recognition network) considers all the latent variables asindependent and does not solve explaining away, which leads to c) the learned features being notdirectly interpretable as reusable parts of the learned images.Although directed models enjoy important advantages such as the ability to represent causal semanticsand easy sampling mechanics, it is known that the “explaining away” phenomenon makes inferencedifficult in these models (Hinton et al., 2006). For this reason, representation learning efforts havelargely focused on undirected models (Salakhutdinov & Hinton, 2009), or have tried to avoid theproblem of explaining away by using complementary priors (Hinton et al., 2006).An important contribution of this work is to show that approximate inference using max-product mes-sage passing (MPMP) can learn features that are composable, interpretable and causally meaningful.It is also noteworthy that unlike previous works, we consider the weights (a.k.a. features) to be latentvariables and not parameters. Thus, we do not use separate expectation-maximization (EM) stages.Instead, we perform feature learning and pool state inference jointly as part of the same messagepassing loop.When augmented with supervision information, HCN can be used for classification, with inferenceand learning still being taken care of by a largely unmodified MPMP procedure. After training,discrimination can be achieved via a fast forward pass which turns out to have the same functionalform as a convolutional neural network (CNN).The rest of the paper is organized as follows: we describe the HCN model in Section 2; Section 3describes learning and inference in the single layer and multilayer HCNs; Section 4 tests the HCNexperimentally and we conclude with a brief discussion in Section 5.2 T HEHIERARCHICAL COMPOSITIONAL NETWORKThe HCN model is a discrete latent variable model that generates binary images by composing partswith different levels of abstraction. These parts are shared across all images. Training the modelinvolves learning such parts from data as well as how to combine them to create each concrete image.The HCN model can be expressed as a factor graph consisting only of three types of factors: AND,OR and POOL. These perform the obvious binary operations and will be defined more preciselylater in this section. The flexibility of the model allows training in supervised, semisupervisedand unsupervised settings, including missing image data. Once trained, the HCN can be used forclassification, missing value completion (pixel inference), sparsification, denoising, etc. See Fig. 2for a factor graph of the complete model. Additional details of each layer type are given in Fig. 4.At a high level, the HCN consists of a class layer at the top followed by alternating convolutionallayers and pooling layers. 
Inside each layer there is a sparsification , arepresentation andweights1Discriminative models find features that are good for classification, but not for generation (the trainingobjective is not constrained enough). Existing generative models also fail at recovering the building blocks of animage because they either a) mix positive and negative weights (which turns out to be critical for them beingtrainable via backpropagation) or b) lack inference mechanisms able to perform explaining away.2Under review as a conference paper at ICLR 2017Noisy channelPooling layerFeature layerPooling layerFeature layerNoisy channelPooling layerFeature layerPooling layerFeature layerNoisy channelPooling layerFeature layerPooling layerFeature layerFigure 2: Factor graph of the HCN model when connected to multiple images Xn. The weights arethe only variables that entangle multiple images. The top variables are clamped to 1 and the bottomvariables are clamped to Xn. Additional details of each layer type are given in Fig. 4.(a.k.a. features), each of which is a multidimensional array of latent variables. The class layer selectsa category, and within it, which template is going to be used, producing the top-level sparsification. Asparsification is simply an encoding of the representation. A sparsification encodes a representationby specifying which features compose it and where they should be placed. The features are in turnstored in the form of weights . Convolutional layers deterministically combine the sparsification andthe weights of a layer to create its representation. Pooling layers randomly perturb the position of theactive elements (within a local neighborhood), introducing small variations in the process.2.1 B INARY CONVOLUTIONAL FEATURE LAYER (SINGLE -LAYER HCN)This layer can perform non-trivial feature learning on its own. We refer to it as a single-layer HCN.See Section 4.1 for the corresponding experiments.In this case, since there is no additional top-down structure, a binary image is created by placingfeatures at random locations of an image. Wherever two features overlap, they are ORed, i.e., if apixel of the binary image is activated due to two features, it is simply kept active. We will call Wtothe features, Sto the sparsification of the image (locations at which features are placed in that image)andXto the image. All of these variables are multidimensional binary arrays.The values of each of the involved arrays for a concrete example with a single-channel image is givenin Fig. 3 (to display Swe maximize over f). The corresponding diagram is shown in Fig. 4.In practice, each image Xis possibly multichannel, so it will have size FXHXWX, where thefirst dimension is the number of channels in the image and the other two are its height and width. Shas size FSHSWS, where the first dimension is the number of features and the other two areits height and width. We refer to an entry of SnasSfrc. Setting an entry Sfrc= 1corresponds toplacing feature fat position (r;c)in the final image X. The features themselves are stored in W,which has size FbelowWFWHWWW, where FW=FSandFbelowW =FX. I.e., each feature is a3Under review as a conference paper at ICLR 2017(a) ImageX (b) Sparsification S (c) FeaturesW (d) Reconstruction RFigure 3: Unsupervised analysis of image Xby a standalone convolutional feature layer of HCN.small 3D array containing one of the building blocks of the image. 
Those are placed in the positions specified by S, and the same block can be used many times at different positions, hence calling this layer convolutional.²

We can fully specify a probabilistic model for a binary image by adding independent priors over the entries of S and W and connecting those to X through a binary convolution and a noisy channel. The complete model is

p(S) = ∏_{f,r,c} p(S_{frc}) = ∏_{f,r,c} p_S^{S_{frc}} (1 − p_S)^{1 − S_{frc}}
p(W) = ∏_{a,f,r,c} p(W_{afrc}) = ∏_{a,f,r,c} p_W^{W_{afrc}} (1 − p_W)^{1 − W_{afrc}}        (1)
p(X | R) = ∏_{a,r,c} p_noisy(X_{arc} | R_{arc})   with   R = bconv(S, W)
and   p_noisy(1 | 0) = p_10,   p_noisy(0 | 1) = p_01,

which depends on four scalar parameters p_S, p_W, p_01, p_10, controlling the density of features in the image, the density of pixels in each feature, and the noise of the channel, respectively. The indexes a, f, r, c run over channels, features, rows and columns, respectively.

We have used the binary convolution operator R = bconv(S, W). A binary convolution performs the same operation as a normal convolution, but operates on binary inputs and truncates outputs above 1. Our latent variables are arranged as three- and four-dimensional arrays, so we define R = bconv(S, W) to mean R_{a,:,:} = min(1, Σ_f conv2D(S_{f,:,:}, W_{a,f,:,:})), where conv2D(·,·) is the usual 2D convolution operator, R and S are binary 3D arrays and W is a binary 4D array. The operator min(1, ·) truncates values above 1 to 1, performing the ORing of two overlapping features previously mentioned.

The binary convolution (and hence model (1)) can be expressed as a factor graph, as seen in Fig. 4. The AND factor can be written as AND(b | t_1, t_2) and takes value 0 when the bottom variable b is the logical AND of the two top variables t_1 and t_2. It takes value −∞ in any other case. The OR factor, OR(b | t_1, …, t_M), takes value 0 when the bottom variable b is the logical OR of the M top variables t_1, …, t_M. It takes value −∞ in any other case.

When this layer is not used in standalone mode, but inside a multilayer HCN, the variables R are connected to the pooling layer immediately below (instead of being connected to the image X through the noisy channel) and the variables S are connected to the pooling layer immediately above (instead of being connected to the prior).

2.2 THE CLASS LAYER

We assume for now that a single class is present in each image. We can then write

log p(c_1, …, c_K) = POOL(c_1, …, c_K | 1)

where c_k are mutually exclusive binary variables representing which of the K categories is present. In general, we define POOL(b_1, …, b_M | t = 1) = −log M when exactly one of the bottom variables b_1, …, b_M takes value 1 (we say that the pool is active), and POOL(b_1, …, b_M | t = 0) = 0 when b_m = 0 ∀m (the pool is off). It takes value −∞ in any other case.

[Footnote 2: Additionally, the convolution implies the relations H_X = H_W + H_S − 1 and W_X = W_W + W_S − 1.]

[Figure 4: Diagrams of binary convolution and factor graph connectivity for a 1D image. Panels: (a) binary convolution, (b) feature layer, (c) pooling layer.]

Within each category, we might have multiple templates. Each template corresponds to a different visual expression of the same conceptual category. For instance, if one category is furniture, we could have a template for chair and another template for table. Each category has binary variables representing each of the J templates, s_{jk} with j ∈ [1 … J]. If a category is active, exactly one of its templates will be active.
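As a brief aside before continuing with the class layer, the following is a minimal sketch of sampling from the single-layer model (1) above. The array names follow the text, but the shapes, hyperparameter values, and the use of scipy's convolve2d are illustrative choices, not values taken from the experiments.

```python
# Minimal sampling sketch for the single-layer HCN model of Eq. (1).
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)

F_S, H_S, W_S = 4, 10, 10      # number of features, sparsification height/width
F_X, H_W, W_W = 1, 3, 3        # image channels, feature height/width
p_S, p_W, p01, p10 = 0.02, 0.3, 0.01, 0.01

# Independent Bernoulli priors over the sparsification S and the weights W.
S = (rng.random((F_S, H_S, W_S)) < p_S).astype(np.uint8)
W = (rng.random((F_X, F_S, H_W, W_W)) < p_W).astype(np.uint8)

def bconv(S, W):
    """Binary convolution: R[a] = min(1, sum_f conv2D(S[f], W[a, f])).
    Overlapping features are ORed by the truncation at 1."""
    H_X, W_X = S.shape[1] + W.shape[2] - 1, S.shape[2] + W.shape[3] - 1
    R = np.zeros((W.shape[0], H_X, W_X))
    for a in range(W.shape[0]):
        for f in range(S.shape[0]):
            R[a] += convolve2d(S[f], W[a, f], mode="full")
    return np.minimum(1, R).astype(np.uint8)

R = bconv(S, W)

# Noisy channel: flip 0 -> 1 with probability p10 and 1 -> 0 with probability p01.
flip = rng.random(R.shape)
X = np.where(R == 1, (flip >= p01).astype(np.uint8), (flip < p10).astype(np.uint8))
```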
The joint probability of the templates is thenlogp(SLjc1;:::;c K) =Xklogp(s1k;:::;s Jkjck) =XkPOOL (s1k;:::;s Jkjck)where these JKvariables are arranged as a 3D array of size 11JKcalledSLwhich formsthe top-level sparsification of the template. A sample from SLwill always have exactly one elementset to 1 and the rest set to 0. Superscript Lis used to identify the layer to which a variable belongs.Since there are Llayers,SLis the top layer sparsification.2.3 T HE POOLING LAYERIn a multilayer HCN, feature layers and pooling layers appear in pairs. Inside layer `, the poolinglayer`is placed below the feature layer `.Since the convolutional feature layer is deterministic, any variation in the generated image mustcome from the pooling layers (and the final noisy channel). Each pooling layer shifts the positionof the active units in R`to produce the sparsification S`1in the layer below. This shifting is local,constrained to a region of size3HPWP1, the pooling window. When two or more active unitsinR`are shifted towards the same position in S`1, they result in a single activation, so the numberof active units in S`1is equal or smaller than the number of activations in R`.The above description should be enough to know how to sample S`1fromR`, but to provide arigorous probabilistic description, we need to introduce the intermediate binary variables Ur;c;f;r;c; ,which are associated to a shift r;cof the element R`frc. The HPWPintermediate variablesassociated to the same element R`frcare noted as U`:;:;frc. Since an element can be shifted to a singleposition per realization and only when it is active, the elements in U`:;:;frcare grouped into a poollogp(U`jR`) =Xfrclogp(U`:;:;frcjR`frc) =XfrcPOOL (U`:;:;frcjR`frc)and thenS`1can be obtained deterministically from U`by ORing the HPWPvari-ables ofUthat can potentially turn it on, logp(S`1jU`) =Pfr0c0logp(S`1fr0c0jU`) =Pfr0c0OR(S`1fr0c0jfUr;c;f;r;cgr0:r+r;c0:c+c):i.e., the above expression evaluates to 0 if theabove OR relations are satisfied and to 1 if they are not.3The described pooling window only allows for spatial perturbations, i.e., translational pooling. A moregeneral pooling layer would also pool in the third dimension (Goodfellow et al., 2013), across features, whichwould introduce richer variation and also impose a meaningful order in the feature indices. Though we donot pursue that option in this work, we note that this type of pooling is required for a rich hierarchical visualmodel. In fact, the pooling over templates that we special-cased in the description of the class layer would fit asa particular case of this third-dimension pooling.5Under review as a conference paper at ICLR 20172.4 J OINT PROBABILITY WITH MULTIPLE IMAGESThe observed binary image Xcorresponds to the bottommost sparsification4S0after it has traversed,element by element, a noisy channel with bit flip probabilities p(Xfrc= 1jS0frc= 0) =p10<0:5andp(Xfrc= 0jS0frc= 1) =p01<0:5. This defines p(XjS0).Finally, if we consider the weight variables to be independent Bernoulli variables with a fixed per-layer sparse prior p`Wthat are drawn once and shared for the generation of all images, we can writethe joint probability of multiple images, latent variables and weights aslogp(fXn;Hn;CngNn=1;fW`gL`=1) =LX`=1logp(W`) +NXn=1logp(XnjS0n) + logp(SLnjCn) + logp(Cn)+NXn=1LX`=1logp(S`1njU`n) + logp(U`njR`n) + logp(R`njS`n;W`)where we have collected all the category variables fckgof each image in Cnand the remaining latentvariables in Hnand for convenience. 
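As a concrete illustration of the pooling layer of Section 2.3, here is a rough sampling sketch. The centering of the pooling window and the clipping at the array borders are assumptions on our part; the text only specifies a local H_P × W_P window per active unit, with collisions ORed.

```python
# Rough sketch of sampling a pooling layer: each active unit of R is shifted to a
# random position inside its local pooling window; collisions are ORed.
import numpy as np

rng = np.random.default_rng(0)

def sample_pooling(R, H_P=3, W_P=3):
    F, H, W = R.shape
    S_below = np.zeros_like(R)
    for f, r, c in zip(*np.nonzero(R)):
        dr = rng.integers(-(H_P // 2), H_P // 2 + 1)
        dc = rng.integers(-(W_P // 2), W_P // 2 + 1)
        rr = np.clip(r + dr, 0, H - 1)    # assumption: clip shifts at the borders
        cc = np.clip(c + dc, 0, W - 1)
        S_below[f, rr, cc] = 1            # ORing: collisions yield a single activation
    return S_below

R = (rng.random((2, 8, 8)) < 0.1).astype(np.uint8)
S_below = sample_pooling(R)
assert S_below.sum() <= R.sum()           # pooling can only merge activations
```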
Each image uses its own copy of the latent variables, but theweights are shared across all images, which is the only coupling between the latent variables.The above expression shows how, in addition to factorizing over observations (conditionally on theweights), there is a factorization across layers. Furthermore, the previous description of each of theselayers implies that the entire model can be further reduced to small factors of type AND, OR andPOOL, involving only a few local variables each.Since we are interested in a point estimate of the features, given the images fXngNn=1and a (possiblyempty)5subset of the labels fCngNn=1, we will attempt to recover the maximum a posteriori6(MAP)configuration over features, sparsifications, and unknown labels. Note that for classification, selectingfW`gL`=1by maximizing the joint probability is very different from selecting it by maximizing adiscriminative loss of the type logp(fCngNn=1jfXngNn=1;fW`gL`=1), since in this case, all the priorinformation p(X)about the structure of the images is lost. This results in more samples beingrequired to achieve the same performance, and less invariance to new test data.Once learning is complete, we can fix fW`gL`=1, thus decoupling the model for every image, and useapproximate MAP inference to classify new test images, or to complete them if they include missingdata (while benefiting from the class label if it is available).Even though we only consider the single-class-per-image setting, the compositional property of thismodel means that we can train it on single-class images and then, without retraining, change the classlayer to make it generate (and therefore, recognize) combinations of classes in the same image.3 L EARNING AND INFERENCEWe will consider first the simpler case of a single-layer HCN, as described in Section 2.1. Then wewill tackle inference in the multilayer HCN.3.1 L EARNING IN SINGLE -LAYER HCNIn this case, for model (1), we want to findS;W= arg maxS;Wp(XjS;W )p(S)p(W): (2)This is a challenging problem even in simple cases. In fact, it can be easily shown that boolean matrixfactorization (BMF), a.k.a. boolean factor analysis, arises as a particular case of (2)in which the4Alternatively, one could introduce the noisy channel between R0andX, but that would be equivalent to ourformulation using a pooling window of size 111at the bottommost layer.5The model was described as unsupervised, but the class is represented in latent variable Cn, which can beclamped to its observed value, if it is available.6Note that we are performing MAP inference over discrete variables, where concerns about the arbitrarinessof MAP estimators (see e.g., (Beal, 2003) Chapter 1.3) do not apply.6Under review as a conference paper at ICLR 2017heights and widths of all the involved arrays are set to one. BMF is a decades-old problem proved tobe NP-complete in (Stockmeyer, 1975) and with applications in machine learning, communicationsand combinatorial optimization. Another related problem is non-negative matrix factorization (NMF)(Lee & Seung, 1999), but NMF is additive instead of ORing the contributions of multiple features,which is not desired here.One of the best-known heuristics to address BMF is the Asso (Miettinen et al., 2006). Unfortunately,it is not clear how to extend it to solve (2)because it relies on assumptions that no longer hold inthe present case. 
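To make the objective in Eq. (2) concrete, the following sketch evaluates the corresponding log-posterior log p(X|S,W) + log p(S) + log p(W) for a candidate assignment. It assumes a bconv() implementation like the one sketched in Section 2.1 and takes the four scalar parameters as given; it is only a scoring helper, not the learning procedure itself.

```python
# Sketch of the single-layer MAP objective of Eq. (2), up to a constant.
import numpy as np

def log_joint(X, S, W, p_S, p_W, p01, p10, bconv):
    R = bconv(S, W)
    # Independent Bernoulli priors over the entries of S and W.
    lp_S = np.sum(S * np.log(p_S) + (1 - S) * np.log(1 - p_S))
    lp_W = np.sum(W * np.log(p_W) + (1 - W) * np.log(1 - p_W))
    # Noisy channel: p(X=1|R=0) = p10 and p(X=0|R=1) = p01.
    lp_X = np.sum(np.where(R == 1,
                           X * np.log(1 - p01) + (1 - X) * np.log(p01),
                           X * np.log(p10) + (1 - X) * np.log(1 - p10)))
    return lp_S + lp_W + lp_X
```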
The variational bound of (Jaakkola & Jordan, 1999) addresses inference in thepresence of a noisy-OR gate and was successfully used in by ( ˇSingliar & Hauskrecht, 2006) to obtainthe noisy-OR component analysis (NOCA) algorithm. NOCA addresses a very similar problem to(2), the two differences being that a) the weight values are continuous between 0 and 1 (instead ofbinary) and b) there is no convolutional weight sharing among the features. NOCA can be modifiedto include the convolutional weight sharing, but it is not an entirely satisfactory solution to the featurelearning problem as we will show. We observed that the obtained local maxima, even after significanttweaking of parameters and learning schedule, are poor for problems of small to moderate size.We are not aware of other existing algorithms that can solve (2)for medium image sizes. The model(1)is directly amenable to mean-field inference without requiring the additional lower-bounding usedin NOCA, but we experimented with several optimization strategies (both based in mean field updatesand gradient-based) and the obtained local maxima were consistently worse than those of NOCA.In (Ravanbakhsh et al., 2015) it is shown that max-product message passing (MPMP) produces state-of-the-art results for the BMF problem, improving even on the performance of the Asso heuristic.We also address problem (2)using MPMP. Even though MPMP is not guaranteed to converge, wefound that with the right schedule, even with very slight or no damping, good solutions are foundconsistently.Model (1)can be expressed both as a directed Bayesian network or as a factor graph using onlyAND and OR factors, each involving a small number of local binary variables. Finding features andsparsifications can be cast as MAP inference7in this factor graph.MPMP is a local message passing technique to perform MAP inference in factor graphs. MPMP isexact on factor graphs without loops (trees). In loopy models, such as (1), it is an approximation withno convergence guarantees8, although convergence can be often attained by using some damping0<1. See Appendix C for a quick review on MPMP and Appendix D for the message updateequations required for the factors used in this work. Unlike Ravanbakhsh et al. (2015) which usesparallel updates and damping, we update each AND-OR factor9in turn, following a random in asequential schedule. This results in faster convergence with less or no damping.3.2 L EARNING IN MULTILAYER HCN ( UNSUPERVISED ,SEMISUPERVISED ,SUPERVISED )Despite its loopiness, we can also apply MPMP inference to the full, multilayer model and obtaingood results. The learning procedure iterates forward and backward passes (a precise description canbe found in Algorithm 1 below). In a forward pass, we proceed updating the bottom-up messages tovariables, starting from the bottom of the hierarchy (closer to the image) and going up to the classlayer. In a backward pass, we update the top-down messages visiting the variables in top-down order.Messages to the weight variables are updated only in the forward pass. We use damping only inthe update of the bottom-up messages from a pooling layer during the forward pass. The AND-ORfactors in the binary convolutional layer form trees, so we treat each of these trees as a single factor,since closed form message updates for them can be obtained. Those factors are updated once inrandom order inside each layer, i.e., sequentially. The pools at the class layer also from a tree, sowe also treat them as a single factor. 
The message updates for AND, OR and POOL factors followtrivially from their definition and are provided in Appendix D.7Note that we do not marginalize the latent variables (or the weights), but find their MAP configuration givena set of images. The sparse priors on the weights and the sparsification act as regularizers and prevent overfitting.8MPMP works by iterating fixed point equations of the dual of the Bethe free energy in the zero-temperaturelimit. Convexified dual variants (see Appendix C) are guaranteed to converge, but much slower.9Each OR factor is connected to several AND factors which together form a tree. We update the incomingand outgoing messages of the entire tree, since they can be computed exactly.7Under review as a conference paper at ICLR 2017After enough iterations, weights are set to 1 if their max-marginal difference is positive and to 0otherwise. This hard assignment converts some of the AND factors into a pass-through and the restin disconnections. Thus the weight assignments define the connectivity between S`andR`on a newgraph without ANDs. This is the learned model, that we can use to perform inference with with onnew test images.3.3 I NFERENCE IN MULTILAYER HCNTypical inference tasks are classification and missing value imputation. For classification, we findthata single forward pass seems good enough and further forward and backward passes are notneeded (see Algorithm 1 for the description of the forward and backward passes). For missingvalue imputation a single forward and top-down pass is enough. In order to achieve higher qualityexplaining-away10, we use a top-down pass instead of a backward pass. A top-down pass differs froma backward pass in that we replace step 5) with multiple alternating executions of steps 5) and 2).Therefore, it is not strictly a backward pass, but it proceeds top-down in the sense that once a layerhas been fully processed, it is never visited again.Interestingly, the functional form of the forward pass of an HCN is the same as that of a standardCNN, see Section 3.4, and therefore, an actual CNN can be used to perform a fast forward pass.Algorithm 1 Learning in Hierarchical Compositional NetworksInput: Hyperparameters p01;p10;fp`WgL`=1, datafXn;CngNn=1and network structure (pool and weightsizes for each layer)InitInitialize bottom-up messages and messages to fW`gto zero. Initialize the top-down messages to 1.Initialize messages to Wfrom its prior uniformly at random in (0:9pW;pW)to break symmetry. Set constantbottom-up messages to S0:m(S0frc) = (k1k0)Xfrc+k0withk1= log1p01p10andk0= logp011p10repeatForward pass:for`in1;:::;L do1) Update messages from OR to U`in parallel2) Update messages from POOL to R`in parallel with damping 3) Update messages from AND-OR to W`andS`sequentially in random orderend forUpdate message from all class layer POOLs to SL. Hard assign Cnif label is available.Backward pass:for`inL;:::; 1do4) Update messages from AND-OR to R`sequentially in random order5) Update messages from POOL to U`in parallel6) Update messages from OR to S`1in parallelend forCompute max-marginals by summing incoming messages to each variableuntil Fixed point or iteration limitreturn Max-marginal differences of S`,W`andR`3.4 A BOUT THE HCN FORWARD PASS3.4.1 F UNCTIONAL CORRESPONDENCE WITH CNNAfter a single forward pass in an HCN (considering that the weights are known, after training), weget an estimate of the MAP assignment over categories. 
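Two concrete steps of Algorithm 1 above can be written down directly: the constant bottom-up evidence messages sent from the observed binary image X to S^0, and the damped update applied to the bottom-up messages of a pooling layer. This is only a sketch of those two formulas, with messages stored as scalar differences (value at 1 minus value at 0); the damping constant used here is illustrative.

```python
# Sketch of the evidence messages and the damped update used in Algorithm 1.
import numpy as np

def evidence_messages(X, p01, p10):
    """m(S0_frc) = (k1 - k0) * X_frc + k0, with k1 = log((1-p01)/p10) and k0 = log(p01/(1-p10))."""
    k1 = np.log((1 - p01) / p10)
    k0 = np.log(p01 / (1 - p10))
    return (k1 - k0) * X + k0

def damped_update(m_old, m_fresh, damping=0.8):
    """Convex combination of the previous message and the freshly computed one."""
    return (1 - damping) * m_old + damping * m_fresh
```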
In practice, this assignment seems goodenough for classification and further forward and backward passes are not needed.The functional form of the first forward pass can be simplified because of the initial strongly negativetop-down messages. Under these conditions, the message update rules applied to the pooling layersof the HCN have exactly the same functional form11as the max-pooling layer in a standard CNN.Similarly, applying the message update rule to the convolutional layers of the HCN —when the10To avoid symmetry problems, instead of making the distribution of each POOL perfectly uniform, wecan introduce slight random perturbations while keeping the highest probability value at the center of the pool.Doing so speeds up learning and favors centered backward pass reconstructions in the case of ties.11See the Appendix D for the update rules of the messages of each type factor.8Under review as a conference paper at ICLR 2017weights are known— has the same functional form as performing a standard (not binary) convolutionof the bottom-up messages with the weights, just like in a standard CNN. At the top, the max-marginalover categories will select the one with the template with the largest bottom-up message. This can berealized with max-pooling over the feature dimension as done in (Goodfellow et al., 2013), or closelyapproximated using a fully connected layer and a softmax, as in more standard CNNs.Simply put, the binary weights learned by an HCN can be copied to a standard CNN with linearactivations and they will both produce the same classification results when we applied to the bottom-upmessages (which are a positive scaling of the input data Xplus a constant).3.4.2 I NVARIANCE TO NOISE LEVELConsider we generate two data sets with the HCN model using the same weights but different bit-flipprobabilities. If those probabilities are known, would we use different classifiers for each dataset? Ifwe use a single forward pass, changing p01andp10produces a different monotonic transformation ofall the bottom-up messages at every layer of the hierarchy, but the selected category, which dependsonly on which variable has the largest value , will not change. So, with a single-pass classifier, ourclass estimation does not change with the noise level. This has the important implication that an HCNdoes not need to be trained with noisy data to classify noisy data. It can be trained with clean data(where there is more signal and learning parts is easier) and used on noisy data without retraining.4 E XPERIMENTSIn the following, we experimentally characterize both the single-layer and multilayer HCN.4.1 S INGLE -LAYER HCNWe create several synthetic (both noisy and noiseless) images in which the building blocks –orfeatures– are obvious to a human observer and check the ability of HCN to recover the them. Thetask is deceptively simple, and the existing the state of the art at this task, NOCA, is unable to solveseveral of our examples. Since the number of free parameters of the model is so small (3 in the caseof a symmetric noisy channel), these can be easily explored using grid search and selected usingmaximum likelihood. The sensitivity of the results to these parameters is small.HCN only requires straightforward MPMP with random order over the factors. For NOCA, initializingthe variational posterior over the latent sources and choosing how to interleave the updates of thisposterior with the update of the additional variational parameters ( ˇSingliar & Hauskrecht, 2006) istricky. 
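A rough sketch of the fast classification forward pass of Section 3.4: bottom-up evidence, a linear convolution with the binary weights per feature layer, spatial max-pooling, and a final maximum over per-class template scores. The pooling stride, the 'valid' convolution mode and the way templates are scored below are simplifications of ours; only the overall functional form (convolution, max-pooling, maximum over templates) is taken from the text.

```python
# Sketch of the CNN-like forward pass with fixed binary weights.
import numpy as np
from scipy.signal import correlate2d

def conv_layer(m, W):
    """m: (F_in, H, W) messages; W: (F_out, F_in, h, w) binary weights; linear activations."""
    return np.stack([sum(correlate2d(m[f], W[g, f], mode="valid")
                         for f in range(W.shape[1])) for g in range(W.shape[0])])

def max_pool(x, k=3):
    F, H, W = x.shape
    H2, W2 = H // k, W // k
    return x[:, :H2 * k, :W2 * k].reshape(F, H2, k, W2, k).max(axis=(2, 4))

def classify(X, weights_per_layer, templates, template_class, p01=0.01, p10=0.01):
    """X: (F_X, H, W) binary image; templates: arrays matching the top representation."""
    k1, k0 = np.log((1 - p01) / p10), np.log(p01 / (1 - p10))
    m = (k1 - k0) * X + k0                         # bottom-up evidence messages
    for W in weights_per_layer:
        m = max_pool(conv_layer(m, W))
    scores = np.array([np.sum(m * t) for t in templates])
    n_classes = max(template_class) + 1
    class_scores = [max(scores[i] for i in range(len(templates)) if template_class[i] == c)
                    for c in range(n_classes)]
    return int(np.argmax(class_scores))
```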
For best results, during each E step we repeated the following 10 times: update the variationalparameters for 20 iterations and then update the variational posterior (which is a single closed formupdate). The M update also required an inner loop of variational parameter updating.The performance of HCN and NOCA can be assessed visually in Fig. 5. Column (a) shows eachinput image (these are single-image datasets) and the remaining columns show the features andreconstructions obtained by HCN and NOCA. In some of the input images we have added noisethat flips pixels with 3% probability. For HCN (respectively NOCA), we binarize all the beliefs(respectively, variational posteriors) from the [0;1]range by thresholding at 0.5 and then perform abinary convolution to obtain the reconstruction. Because noise is not included in this reconstruction,a cleaner image may be obtained, resulting in unsupervised denoising (rows 1 and 4 of Fig. 5).For a quantitative comparison, refer to Tab. 1. One algorithm-independent way to measure perfor-mance in the feature learning problem is to measure compression. It is known that to transmit a longsequence of Nbits which are 1 with probability p, we only need to transmit NH(p)bits with anoptimal encoding, where His the entropy. Thus sparse sequences compress well. In order to transmitthese images without loss, we need to transmit either one sequence of bits (encoding the imageitself) or three sequences of bits, one encoding the features, another encoding the sparsification and alast one encoding the errors between the reconstruction and the original image. Ideally, the secondmethod is more efficient, because the features are only sent once and the sparsification and errorssequences are much sparser than the original image. The ratio between the two is shown togetherwith running time on a single CPU. Unused features are discarded prior to computing compression.9Under review as a conference paper at ICLR 2017(a) Input image X (b) HCNW (c) HCNR (d) NOCAW (e) NOCARFigure 5: Features extracted by HCN and NOCA and image reconstructions for several datasets. Bestviewed on screen with zoom.(a) ImageX1 (b) ImageX2 (c) Batch HCN W (d) Online HCN W (e) Online HCN WFigure 6: Online learning. (a) and (b) show two sample input images; (c) and (d) show the featureslearned by batch and online HCN using 30 input images and 100 epochs; (e) shows the featureslearned by online HCN using 3000 input images and 1 epoch.10Under review as a conference paper at ICLR 2017Two bars Symbols Clean letters Noisy letters Textcomp. time comp. time comp. time comp. time comp. timeNOCA 84% 0.67 m 85% 92 m 98% 662 m 102% 716 m 84% 1222 mHCN 83% 0.07 m 11% 0.42 m 38% 25 m 73% 24 m 28% 31 mTable 1: Comp.: E(X)=(E(S)+E(W)+E(XR)), whereEis the encoding cost. Time: minutes.4.2 O NLINE LEARNINGThe above experiments use a batch formulation, i.e., consider simultaneously all the available trainingdatafXngN1. Since the amount of memory required to store the messages for MPMP scales linearlywith the training data, this imposes a practical limit in the number of images that can be processed.In order to overcome this limit, we also consider a particular message update schedule in whichthe messages outgoing from factors connected to each image and sparsification Xn;Snare updatedonly once and therefore, after an image has been processed, can be discarded, since they are neverreused. This effectively allows for online processing of images without memory scaling issues. 
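Reading the compression measure above as the ratio between the cost of the three-part encoding (sparsification, features, and reconstruction errors) and the cost of encoding the image directly, with the cost of a binary array taken to be N·H(p) bits at its empirical density p, a sketch of its computation is:

```python
# Sketch of the entropy-based compression ratio used to compare HCN and NOCA.
import numpy as np

def encoding_cost(B):
    """Optimal bit cost of an i.i.d. Bernoulli sequence at the array's empirical density."""
    p = float(np.mean(B))
    if p in (0.0, 1.0):
        return 0.0
    return B.size * (-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def compression_ratio(X, S, W, R):
    errors = np.logical_xor(X, R)      # pixels where the reconstruction disagrees with X
    return (encoding_cost(S) + encoding_cost(W) + encoding_cost(errors)) / encoding_cost(X)
```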
Twomodifications are needed in practice for this to work well: first, instead of processing only oneimage at a time, better results are obtained if the factors of multiple images (forming a minibatch) areprocessed in random order. Second, a forgetting mechanism must be introduced to avoid accumulatingan unbounded amount of evidence from the processed minibatches.In detail, the beliefs of the variables Ware initialized uniformly at random in the interval (0:9pW;pW)(we call these initial beliefs b(0)prior(Wafrc)) and the beliefs of the variables fSngN1are initialized topS. The initial outgoing messages from all the AND-OR factors are set to 0. Since each factoris only processed once, this allows implementing MPMP without ever having to store messagesand only requiring to store beliefs. After processing the first minibatch using MPMP (with nodamping), we call the resulting belief over each of the weights b(0)post(Wafrc)(as it standard for MPMPof binary variables, beliefs are represented using max-marginal differences in log space). Insteadof processing the second minibatch using b(0)post(Wafrc)as the initial belief, we use b(1)prior(Wafrc) =b(0)post(Wafrc) + (1)b(0)prior(Wafrc), i.e., we “forget” part of the observed evidence, substituting itwith the prior. This introduces an exponential weighing in the contribution of each minibatch. Theforgetting factor is 2(0;1]specifies the amount of forgetting. When = 1this reduces to normalMPMP (no forgetting), when = 0, we completely forget the previous minibatch and process thenew one from scratch.Fig. 6 illustrates online learning. HCN is shown 30 small images containing 5 randomly chosen andrandomly placed characters with 3% flipping noise (see Fig. 5.(a) and (b) for two examples). Theyare learned in different manners. Fig. 5.(c): as a single batch with damping = 0:8and using 100epochs (each factor is updated 100 times); Fig. 6.(d): with minibatches of 5 images, no damping,= 0:95and using 100 epochs; Fig. 6.(e): with minibatches of 5 images, no damping, = 0:95,using a single epoch, but using 3000 images, so that running time is the same.4.3 M ULTI -LAYER HCN: SYNTHETIC DATAWe create a dataset by combining two traits: a) either a square (with four holes) or a circle and b)either a forward or a backward diagonal line. This results in four patterns, which we group in twocategories, see Fig. 7.(a). Categories are chosen such that we cannot decide the label of an imagebased only on one of the traits. The position of the traits is jittered within a 33window, and aftercombining them, the position of the individual pixels is also jittered by the same amount. Finally,each pixel is flipped with probability 103. This sampling procedure corresponds a 2-layer HCNsampling for some parameterization. We generate 100 training samples and 10000 test samples.4.3.1 U NSUPERVISED LEARNINGWe train the HCN as described in Section C on the 100 training data samples, not using any labelinformation. We do set the architecture of the network to match the architecture generating the data.There are four hyperparameters in this model, p01;p10;p1W;p2W. Their selection is not critical. We11Under review as a conference paper at ICLR 2017will choose them to match the generation process. MAP inference does discover and disentanglealmost perfectly the compositional parts at the first and second layers of the hierarchy, see Figs. 7.(b)and 8.(a). In 8.(a), rows correspond with templates and columns correspond to each of the featuresof the first layer. 
We can see that the model has “understood” the data and can be used to generatemore samples from it. Performing inference on this model is very challenging. We are not aware ofany previous method that can learn the features of this simple dataset with so few samples. In otherexperiments we verified that, using local message passing as opposed to gradient descent was criticalto successfully minimize our objective function. Results with the quality of Figs. 7.(b) and 8.(a) wereobtained in every run of the algorithm. Running time is 7 min on a single CPU.We can now clamp the discovered weights on both layers and use the fast forward pass to classifyeach training image as belonging to one of the four discovered templates (i.e., cluster them). Wecan even classify the test images as belonging to one of the four templates. When doing this, all theimages in the training set get assigned to the right template and only 60 out of 10000 images in thetest set do not get classified in the right cluster. This means that if we had just 4 labeled images, onefrom each cluster, we could perform 4-class minimally-supervised classification with just 0.6% error.Finally, we run a single forward-backward pass of the inference algorithm on a test image withmissing pixels. We show the inferred missing pixels in Fig. 7.(c). See also footnote 10.4.3.2 S UPERVISED LEARNINGNow we retrain the model using label information. This results in the same weights being found, butthis time the templates are properly grouped in two classes, as shown in Fig. 8.(a). Classification erroron the test set is very low, 0.07%. We now compare the HCN classification performance with that ofa CNN with the same functional form but trained discriminatively and with a standard CNN withReLU activations, a densely connected layer and softmax activation. We minimize the crossentropyloss function with L2regularization on the weights. The test errors are respectively 0.5% and 2.5%,much larger than those of HCN. We then consider versions of our training set with different levels ofpixel-flipping noise. The evolution of the test error is shown in Fig. 8.(c). For the competing methodswe needed many random restarts to obtain good results. Their regularization parameter was chosenbased on the test set performance.4.4 M ULTI -LAYER HCN: MNIST DATAWe turn now to a problem with real data, the MNIST database (LeCun et al., 1998), which contains60000/10000 training/testing images of size 2828. We want to generalize from very few samples,so we only use the first 40 digits of each category to train. We pre-process each image with afixed set of 16 oriented filters, so that the inputs are a 16-channel image. We use a 2-layer HCNwith 32 templates per class and 64 lower level features of size 2626and two layers of 33pooling,p1W= 0:001;p2W= 0:05. These values are set a priori, not optimized. Then we test onboth the regular MNIST training set and different corrupted versions12of it (same preprocessing12See Appendix E for examples of each corruption type.(a) 16 training samples and labels (b)W1, no supervision (c) Missing value imputationFigure 7: Samples from synthetic data and results from unsupervised learning tasks.12Under review as a conference paper at ICLR 2017(a) Supervised, unsupervised(top, bottom) W2(b)W1, discriminative training10-310-210-1Noise level in the input image0.000.050.100.150.200.250.300.350.400.45Test errorGenerative HCNDiscriminative HCNCNN (c) Effect of increased noise levelFigure 8: Discriminative vs. generative training and supervised vs. 
unsupervised generative training.(a) LearnedW1by HCN (b) LearnedW2by HCNCorruption HCN CNNNone 11.15% 9.53%Noise 20.69% 39.28%Border 16.97% 17.78%Patches 14.52% 16.27%Grid 68.52% 82.69%Line clutter 37.22% 55.77%Deletion 22.03% 25.05%(c) Test error with different cor-ruptionsFigure 9: First layer of weights learned by HCN and CNN on the preprocessed MNIST dataset.and no retraining). We follow the same preprocessing and procedure using a regular CNN withdiscriminative training and explore different regularizations, architectures and activation types, onlyfixing the pooling sizes and number of layers to match the HCN. We select the parameterization thatminimizes the error on the clean test set. This CNN uses 96 low level features. Results for all testsets are reported on Fig. 9.(c). It can be seen that HCN generalizes better. The weights of the firstlayer of the HCN after training are shown in Fig. 9.(a). Notice how HCN is able to discover reusableparts of digits.The training time of HCN scales exactly as that of a CNN. It is linear in each of its architecturalparameters: Number of images, number of pixels per image, features at each layer, size of thosefeatures, etc. However, the forward and backward passes of an HCN are more complex and optimizedcode for them is not readily available as it is for a CNN, so a significant constant factor separatesthe running times of both. Training time for MNIST is around 17 hours on a single CPU. The RAMrequired to store all the messages for 400 training images in MNIST goes up to around 150GB. Toscale to bigger training sets, an online extension (see Section 4.2) needs to be used.5 C ONCLUSIONS AND FUTURE WORKWe have described the HCN, a hierarchical feature model with a rich prior and provided a novelmethod to solve the challenging learning problem it poses. The model effectively learns convolutionalfeatures and is interpretable and flexible. The learned weights are binary, which is advantageous forstorage and computation purposes (Courbariaux et al., 2015; Han et al., 2015). Future work entailsadding more structure to the prior, leveraging more refined MAP inference techniques, exploringother update schedules and further exploiting the generalization-without-retraining capabilities ofthis model.13Under review as a conference paper at ICLR 2017REFERENCESMatthew James Beal. Variational algorithms for approximate Bayesian inference . University ofLondon London, 2003.Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neuralnetworks with binary weights during propagations. In Advances in Neural Information ProcessingSystems , pp. 3105–3113, 2015.Sanja Fidler, Marko Boben, and Ales Leonardis. Learning a hierarchical compositional shapevocabulary for multi-class object representation. arXiv preprint arXiv:1408.5516 , 2014.Amir Globerson and Tommi S Jaakkola. Fixing max-product: Convergent message passing algorithmsfor MAP LP-relaxations. In Advances in Neural Information Processing Systems , pp. 553–560,2008.Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in NeuralInformation Processing Systems , pp. 2672–2680, 2014.Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxoutnetworks. arXiv preprint arXiv:1302.4389 , 2013.Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networkswith pruning, trained quantization and huffman coding. 
arXiv preprint arXiv:1510.00149 , 2015.Tom Heskes. Stable fixed points of loopy belief propagation are local minima of the bethe free energy.InAdvances in neural information processing systems , pp. 343–350, 2002.Geoffrey E Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep beliefnets. Neural computation , 18(7):1527–1554, 2006.Tommi S Jaakkola and Michael I Jordan. Variational probabilistic inference and the qmr-dt network.Journal of artificial intelligence research , 10:291–322, 1999.Ya Jin and Stuart Geman. Context and hierarchy in a probabilistic image model. In 2006 IEEEComputer Society Conference on Computer Vision and Pattern Recognition (CVPR’06) , volume 2,pp. 2145–2152. IEEE, 2006.Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprintarXiv:1312.6114 , 2013.Daphne Koller and Nir Friedman. Probabilistic graphical models: principles and techniques . MITpress, 2009.Vladimir Kolmogorov. Convergent tree-reweighted message passing for energy minimization. PatternAnalysis and Machine Intelligence, IEEE Transactions on , 28(10):1568–1583, 2006.Yann LeCun, L ́eon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied todocument recognition. Proceedings of the IEEE , 86(11):2278–2324, 1998.Daniel D Lee and H Sebastian Seung. Learning the parts of objects by non-negative matrix factoriza-tion. Nature , 401(6755):788–791, 1999.Talya Meltzer, Amir Globerson, and Yair Weiss. Convergent message passing algorithms - a unifyingview. In Jeff A. Bilmes and Andrew Y . Ng (eds.), UAI, pp. 393–401, 2009.Pauli Miettinen, Taneli Mielik ̈ainen, Aristides Gionis, Gautam Das, and Heikki Mannila. The discretebasis problem. In European Conference on Principles of Data Mining and Knowledge Discovery ,pp. 335–346. Springer, 2006.Tom Minka et al. Divergence measures and message passing. Technical report, 2005.Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXivpreprint arXiv:1402.0030 , 2014.14Under review as a conference paper at ICLR 2017Ankit B Patel, Tan Nguyen, and Richard G Baraniuk. A probabilistic theory of deep learning. arXivpreprint arXiv:1504.00641 , 2015.Judea Pearl. Probabilistic reasoning in intelligent systems: networks of plausible inference . 1988.Hoifung Poon and Pedro Domingos. Sum-product networks: A new deep architecture. In ComputerVision Workshops (ICCV Workshops), 2011 IEEE International Conference on , pp. 689–690. IEEE,2011.Siamak Ravanbakhsh, Barnab ́as P ́oczos, and Russell Greiner. Boolean matrix factorization and noisycompletion via message passing. 2015.Ruslan Salakhutdinov and Geoffrey E Hinton. Deep boltzmann machines. In AISTATS , volume 1, pp.3, 2009.Shimony. Finding MAPs for belief networks is NP-hard. AIJ: Artificial Intelligence , 68, 1994.Zhangzhang Si and Song-Chun Zhu. Learning and-or templates for object recognition and detection.IEEE transactions on pattern analysis and machine intelligence , 35(9):2189–2205, 2013.Tom ́aˇsˇSingliar and Milo ˇs Hauskrecht. Noisy-or component analysis and its application to linkanalysis. Journal of Machine Learning Research , 7(Oct):2189–2213, 2006.Larry J Stockmeyer. The set basis problem is NP-complete . IBM Thomas J. Watson ResearchDivision, 1975.Huayan Wang and Koller Daphne. Subproblem-tree calibration: A unified approach to max-productmessage passing. In Proceedings of the 30th International Conference on Machine Learning(ICML-13) , pp. 190–198, 2013.Tom ́aˇs Werner. 
A linear programming approach to max-sum problem: A review. IEEE Trans. PatternAnalysis and Machine Intelligence , 29(7):1165–1179, July 2007.Christopher KI Williams and Nicholas J Adams. Dts: dynamic trees. Advances in neural informationprocessing systems , pp. 634–640, 1999.Ying Nian Wu, Zhangzhang Si, Haifeng Gong, and Song-Chun Zhu. Learning active basis model forobject detection and recognition. International journal of computer vision , 90(2):198–235, 2010.Long Zhu, Yuanhao Chen, Yifei Lu, Chenxi Lin, and Alan Yuille. Max margin and/or graph learningfor parsing the human body. In Computer Vision and Pattern Recognition, 2008. CVPR 2008.IEEE Conference on , pp. 1–8. IEEE, 2008.Long Zhu, Yuanhao Chen, Alan Yuille, and William Freeman. Latent hierarchical structural learningfor object detection. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conferenceon, pp. 1062–1069. IEEE, 2010.15Under review as a conference paper at ICLR 2017A R ELATED WORKThere is a plethora of previous works that address hierarchical feature learning, usually in the settingor real-valued images, as opposed to binary ones: Fidler et al. (2014); Zhu et al. (2008; 2010);Wu et al. (2010); Si & Zhu (2013); Poon & Domingos (2011). Many of those works explicitlyuse AND-OR graphs, in the same spirit as our work. The most outstanding difference, however,between previous works and HCN is that HCN allows multiple features to overlap, thus creatingnew compositions. For instance, if feature H is a centered horizontal line and feature V is a centeredvertical line, HCN can create a new feature “cross” that combines both, and the fact that both areoverlapping and sharing a common active pixel (and many common inactive pixels) is properlyhandled. In contrast, previously cited models cannot overlap features, so they partition the input spaceand dedicate separate subtrees to each of them, and do so recursively. We can see in Figure 5, top row,how we can generate 25 different cross variations using only two features. This would not be possiblewith any of the cited models, which would need to span each combination as a separate feature. Thisfundamental difference makes HCN combinatorially more powerful, but also less tractable. Bothlearning and inference become harder because feature reuse introduces the well-known “explainingaway” phenomenon (Hinton et al., 2006).As a side, note the difference between the meaning of “OR” as used in the present work and inprevious works on AND-OR graphs: what they call “OR”, is what we term POOL (an exclusivebottom-up OR of elements), whereas HCN has a novel third type of gate, the “OR” connection (anon-exclusive, top-down OR of elements) to be able to handle explaining away. Standard AND-OR(or more clearly, AND-POOL) graphs lack the top-down ORing and therefore are not able to handleexplaining away.In the compositional hierarchies of Fidler et al. (2014), the lack of feature reuse allows for inferenceto be exact, since the graphical model is tree-like. Features are learned using a heuristic that relieson the exact inference, similar in spirit to EM. The AND-OR template learning methods of (Zhuet al., 2008; 2010) use respectively max-margin and incremental concave-convex procedures tooptimize a discriminative score. Therefore they require supervision (unlike HCN) and a tractableinference procedure (to make the discriminative score easy to optimize), which again is achievedby not allowing overlapping features. 
The sum-product networks (SPNs) of (Poon & Domingos,2011) express features as product nodes. In order to achieve feature overlapping, two product nodesspanning the same set of pixels (but with possibly different activation patterns) should be activesimultaneously. This would violate the consistency requirement of SPNs, making HCN a morecompact way to express feature overlap13(with the price to be paid being lack of exact inference).The AND-OR template (AOT) learning of (Wu et al., 2010) again cannot deal properly with thegeneration of superimposed features, having to create new features to handle every combination. InSection B we will compare AOT feature learning and HCN feature learning and check how theselimitations make AOT unable to disentangle the generative features.Grammars exclude the sharing of sub-parts among multiple objects or object parts in a parse of thescene (Jin & Geman, 2006), and they limit interpretations to single trees even if those are dynamic(Williams & Adams, 1999). Our graphical model formulation makes the sharing of lower-levelfeatures explicit by using local conditional probability distributions for multi-parent interactions, andallows for MAP configurations (i.e, the parse graphs) that are not trees.The deep rendering model (DRM) of Patel et al. (2015) is, to some extent, a continuous counterpartof the present work. Although DRMs allow for feature overlap, the semantics are different: in HCNthe amount of activation of a given pixel is the same whether there are one or many features (causes)activating it, whereas in DRM the activation is proportional to the number of causes. This means thatthe difference between DRM and HCN is analogous to the difference between principal componentanalysis and binary matrix factorization: while the first can be solved analytically, the second is hardand not analytically tractable. This results in DRM being more tractable, but less appropriate tohandle problems with binary events with multiple causes, such as the ones posed in this paper.Two popular approaches to handle learning in generative models, largely independent of the modelitself, are variational autoencoders (V AEs) and generative adversarial networks (GANs). We are not13An exponentially big SPN could indeed encode an HCN.16Under review as a conference paper at ICLR 2017(a) Filter bank (b) Training samples (c) HCN features (d) Features from Wuet al. (2010)Figure 10: Results of training a modified HCN on a grayscale image. A filter bank is convolved withthe input image to provide the bottom up messages to each channel of HCN. The filter bank sizes inthis simple example are adapted to match those of generation. As a benchmark, Wu et al. (2010) isused on the same data and is also given knowledge of the filter bank in use. Top row: 33filter size.Bottom row: 77filter size.aware of any work that uses a V AE or GAN with a generative model like HCN and such an option isunlikely to be straightforward.Most common V AEs rely on the reparameterization trick for variance reduction. However, thistrick cannot be applied to HCN due to the discrete nature of its variables, and alternative methodswould suffer from high variance. 
Another limitation of V AEs wrt HCN is that they perform a singlebottom-up pass and lack of explaining away: HCN combines top-down and bottom-up information inmultiple passes, isolating the parent cause of a given activation, instead of activating every possiblecause.GANs need to compute rWD(GW("))whereD()is the discriminative network and GW(")is agenerative network parameterized by the features W. In this case, not only Wis binary, but also thegenerated reconstructions at every layer, so the GAN formulation cannot be applied to HCN as-is.One could in principle relax the binary assumption of features and reconstructions and use the GANparadigm to train a neural network with sigmoidal activations, but it is unclear that the lack of binaryvariables will still produce proper disentangling (the convolutional extension of NOCA also has thisproblem due to the use of non-binary features and produces results that are inferior to HCN).B C OMBINING WITH GRAYSCALE PREPROCESSINGThe HCN is a binary model. However, to process real-valued data, it can be coupled with aninitial grayscale-to-binary preprocessing step to do feature detection. We tested this by generating agrayscale version of our toy data and then computing the bottom-up messages to S0by convolvingthe input image with a filter bank. This is roughly equivalent to replacing the noisy binary channelof HCN with a Gaussian channel. We used 16 preprocessing filters, which means that S0has 16channels. 200 training images (unsupervised) were used. Two filter sizes, 33and77were tested.We also run the AOT feature learning method of Wu et al. (2010) on the same data for comparison.The results of training on 200 training images (unsupervised) is provided in Figure 10. When thelarger filter is used, the diagonal bars are harder to identify so their disentangling is poorer.C M AX-PRODUCT MESSAGE PASSING (MPMP)The HCN model can be expressed both as a directed Bayesian network or as a factor graph usingonly POOL, AND, and OR factors, each involving a small number of local binary variables. Both17Under review as a conference paper at ICLR 2017learning and ulterior classification can be cast as MAP inference in this factor graph. Other tasks,such as filling in unknown image data can also be performed by MAP inference.MAP inference can be performed exactly on factor graphs without loops (trees) in linear time, but itis an NP-hard problem for arbitrary graphs (Shimony, 1994). The factor graph describing our modelis highly structured, but also very loopy.There is large body of works (Wang & Daphne, 2013; Meltzer et al., 2009; Globerson & Jaakkola,2008; Kolmogorov, 2006; Werner, 2007), addressing the problem of MAP inference in loopy factorgraphs. Perhaps the simplest of these methods is the max-product algorithm, a variant of dynamicprogramming proposed in (Pearl, 1988) to find the MAP configuration in trees.The max-product algorithm defines a set of messagesma!i(yi)going from each factor ato each ofits variablesyi. The sum of the messages incoming to a variable (yi) =Pa:yi2yama!i(yi)definesits approximate max-marginal14(yi). The max-product algorithm then proceeds by updating theoutgoing messages from each factor in turn so as to make the approximate max-marginals consistentwith that factor. This algorithm is not guaranteed to converge if there are loops in the graph, and if itdoes, it is not guaranteed to find the MAP configuration. 
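As a sketch of the grayscale preprocessing of Appendix B above, the following builds a small bank of oriented filters and convolves the input image with it to obtain one bottom-up channel per filter. The concrete filters are an illustrative choice of ours; the text only states that 16 oriented filters were used and that their responses provide the bottom-up messages to the channels of S^0.

```python
# Sketch of filter-bank preprocessing: grayscale image -> multichannel bottom-up evidence.
import numpy as np
from scipy.signal import correlate2d

def oriented_filter_bank(n_orientations=16, size=3):
    coords = np.arange(size) - size // 2
    yy, xx = np.meshgrid(coords, coords, indexing="ij")
    filters = []
    for k in range(n_orientations):
        theta = np.pi * k / n_orientations
        f = np.cos(theta) * xx + np.sin(theta) * yy   # simple signed oriented edge filter
        filters.append(f / (np.abs(f).sum() + 1e-8))
    return filters

def bottom_up_channels(image, filters):
    """Stack of filter responses, one channel per orientation."""
    return np.stack([correlate2d(image, f, mode="same") for f in filters])

image = np.random.default_rng(0).random((28, 28))
channels = bottom_up_channels(image, oriented_filter_bank())
assert channels.shape == (16, 28, 28)
```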
Damping the updates of the factors has been shown to improve convergence in loopy belief propagation (Heskes, 2002) and was justified as local divergence minimization in (Minka et al., 2005). Using a damping factor 0 < ρ ≤ 1 for max-product, the update rule is

m^{t+1}_{a→i}(y_i) = (1 − ρ) m^t_{a→i}(y_i) + ρ ( max_{y_{a∖i}} [ log ψ_a(y_i, y_{a∖i}) + Σ_{y_j ∈ y_{a∖i}} m^t_{a→j}(y_j) ] + κ )        (3)

and the original update rule is recovered for ρ = 1. The value κ is arbitrary and does not affect the algorithm. We select it to make m^{t+1}_{a→i}(y_i = 0) = 0, so that messages can be stored as a single scalar. When storing messages in this way, their sum provides the max-marginal difference, which is enough for our purposes.

Eq. (3) can be computed exactly for the three types of factors appearing in our graph, so message updating can be performed in closed form. Despite the graph of our model being very loopy, it turns out that a careful choice of message initialization, damping, and parallel and sequential updates produces satisfactory results in our experiments. For further details about max-product inference and MAP inference via message passing in discrete graphical models we refer the reader to (Koller & Friedman, 2009).

[Footnote 14: The max-marginal of a variable in a factor graph gives the maximum value attainable in that factor graph for each value of that variable.]

D MAX-PRODUCT MESSAGE UPDATES FOR AND, OR AND POOL FACTORS

In the following we provide the message update equations for the different types of factors used in the main paper. The messages are in normalized form: each message is a single scalar and corresponds to the difference between the unnormalized message value evaluated at 1 and the unnormalized message value evaluated at 0. For each update we assume that the incoming messages m_IN(·) for all the variables of the factor are available. The incoming messages are the sum of all messages going to that variable except for the one from the factor under consideration. The outgoing messages are well-defined even for infinite incoming messages, by taking the corresponding limit in the expressions below.

[Figure 11: Factors and variable labeling used in the message update equations. Panels: (a) AND factor, with top variables t_1, t_2 and bottom variable b; (b) POOL factor, with top variable t and bottom variables b_1, …, b_M; (c) OR factor, with top variables t_1, …, t_M and bottom variable b.]

D.1 AND FACTOR

Bottom-up messages:
m_OUT(t_1) = max(0, m_IN(t_2) + m_IN(b)) − max(0, m_IN(t_2))
m_OUT(t_2) = max(0, m_IN(t_1) + m_IN(b)) − max(0, m_IN(t_1))

Top-down message:
m_OUT(b) = min(m_IN(t_1) + m_IN(t_2), m_IN(t_1), m_IN(t_2))

D.2 POOL FACTOR

Bottom-up message:
m_OUT(t) = max(m_IN(b_1), …, m_IN(b_M)) − log M

Top-down messages:
m_OUT(b_m) = min(m_IN(t) − log M, −max_{j≠m} m_IN(b_j))

D.3 OR FACTOR

Bottom-up messages:
m_OUT(t_m) = min( m_IN(b) + Σ_{j≠m} max(0, m_IN(t_j)),  max(0, m_IN(t_{i*})) − m_IN(t_{i*}) )   with i* = argmax_{i≠m} m_IN(t_i)

Top-down message:
m_OUT(b) = m_IN(t_{i*}) + Σ_{j≠i*} max(0, m_IN(t_j))   with i* = argmax_m m_IN(t_m)

E IMAGE CORRUPTION TYPE ILLUSTRATION

The different types of image corruption used in Section 4.4 are shown in Fig. 12.

[Figure 12: Different types of noise corruption used in Section 4.4.]
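The closed-form updates of Appendix D translate directly into code. The sketch below stores each message as the scalar difference described above; function and argument names are ours, and it assumes at least two pool members / top variables.

```python
# Direct transcription of the Appendix D message updates (scalar-difference messages).
import numpy as np

def and_factor(m_t1, m_t2, m_b):
    """AND(b | t1, t2). Returns (m_out_t1, m_out_t2, m_out_b)."""
    out_t1 = max(0.0, m_t2 + m_b) - max(0.0, m_t2)
    out_t2 = max(0.0, m_t1 + m_b) - max(0.0, m_t1)
    out_b = min(m_t1 + m_t2, m_t1, m_t2)
    return out_t1, out_t2, out_b

def pool_factor(m_b, m_t):
    """POOL(b_1..b_M | t): m_b is the array of incoming bottom messages, m_t the top one."""
    m_b = np.asarray(m_b, dtype=float)
    M = m_b.size
    assert M >= 2                      # sketch assumes at least two pool members
    out_t = m_b.max() - np.log(M)
    out_b = np.array([min(m_t - np.log(M), -np.delete(m_b, m).max()) for m in range(M)])
    return out_b, out_t

def or_factor(m_t, m_b):
    """OR(b | t_1..t_M): m_t is the array of incoming top messages, m_b the bottom one."""
    m_t = np.asarray(m_t, dtype=float)
    M = m_t.size
    assert M >= 2                      # sketch assumes at least two top variables
    pos = np.maximum(0.0, m_t)
    i_star = int(np.argmax(m_t))
    out_b = m_t[i_star] + pos.sum() - pos[i_star]
    out_t = np.empty(M)
    for m in range(M):
        best = np.delete(m_t, m).max()             # largest incoming message among the others
        out_t[m] = min(m_b + pos.sum() - pos[m], max(0.0, best) - best)
    return out_t, out_b
```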
ByhVLlc7e
HJeqWztlg
ICLR.cc/2017/conference/-/paper78/official/review
{"title": "Interesting approach to compositional image modeling", "rating": "4: Ok but not good enough - rejection", "review": "This paper presents a generative model for binary images. Images are composed by placing a set of binary features at locations in the image. These features are OR'd together to produce an image. In a hierarchical variant, features/classes can have a set of possible templates, one of which can be active. Variables are defined to control which template is present in each layer. A joint probability distribution over both the feature appearance and instance/location variables is defined.\n\nOverall, the goal of this work is interesting -- it would be satisfying if semantically meaningful features could be extracted, allowing compositionality in a generative model of images. However, it isn't clear this would necessarily result from the proposed process.\nWhy would the learned features (building blocks) necessarily semantically meaningful? In the motivating example of text, rather than discovering letters, features could correspond to many other sub-units (parts of letters), or other features lacking direct semantic meaning.\n\nThe current instantiation of the model is limited. It models binary image patterns. The experiments are done on synthetic data and MNIST digits. The method recovers the structure and is effective at classification on synthetic data that are directly compositional. On the MNIST data, the test errors are quite large, and worse than a CNN except when synthetic data corruption is added. Further work to enhance the ability of the method to handle natural images or naturally occuring data variation would enhance the paper.\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Hierarchical compositional feature learning
["Miguel Lazaro-Gredilla", "Yi Liu", "D. Scott Phoenix", "Dileep George"]
We introduce the hierarchical compositional network (HCN), a directed generative model able to discover and disentangle, without supervision, the building blocks of a set of binary images. The building blocks are binary features defined hierarchically as a composition of some of the features in the layer immediately below, arranged in a particular manner. At a high level, HCN is similar to a sigmoid belief network with pooling. Inference and learning in HCN are very challenging and existing variational approximations do not work satisfactorily. A main contribution of this work is to show that both can be addressed using max-product message passing (MPMP) with a particular schedule (no EM required). Also, using MPMP as an inference engine for HCN makes new tasks simple: adding supervision information, classifying images, or performing inpainting all correspond to clamping some variables of the model to their known values and running MPMP on the rest. When used for classification, fast inference with HCN has exactly the same functional form as a convolutional neural network (CNN) with linear activations and binary weights. However, HCN’s features are qualitatively very different.
["Unsupervised Learning"]
https://openreview.net/forum?id=HJeqWztlg
https://openreview.net/pdf?id=HJeqWztlg
https://openreview.net/forum?id=HJeqWztlg&noteId=ByhVLlc7e
Under review as a conference paper at ICLR 2017HIERARCHICAL COMPOSITIONAL FEATURE LEARNINGMiguel L ́azaro-Gredilla, Yi Liu, D. Scott Phoenix, Dileep GeorgeVicariousSan Francisco, CA, USAfmiguel,yiliu,scott,dileep g@vicarious.comABSTRACTWe introduce the hierarchical compositional network (HCN), a directed generativemodel able to discover and disentangle, without supervision, the building blocksof a set of binary images. The building blocks are binary features defined hierar-chically as a composition of some of the features in the layer immediately below,arranged in a particular manner. At a high level, HCN is similar to a sigmoid beliefnetwork with pooling. Inference and learning in HCN are very challenging andexisting variational approximations do not work satisfactorily. A main contributionof this work is to show that both can be addressed using max-product messagepassing (MPMP) with a particular schedule (no EM required). Also, using MPMPas an inference engine for HCN makes new tasks simple: adding supervision infor-mation, classifying images, or performing inpainting all correspond to clampingsome variables of the model to their known values and running MPMP on therest. When used for classification, fast inference with HCN has exactly the samefunctional form as a convolutional neural network (CNN) with linear activationsand binary weights. However, HCN’s features are qualitatively very different.1 I NTRODUCTIONDeep neural networks coupled with the availability of vast amounts of data have proved verysuccessful over the last few years at visual discrimination (Goodfellow et al., 2014; Kingma &Welling, 2013; LeCun et al., 1998; Mnih & Gregor, 2014). A basic desire of deep architectures is todiscover the blocks –or features– that compose an image (or in general, a sensory input) at differentlevels of abstraction. Tasks that require some degree of image understanding can be performed moreeasily when using representations based on these building blocks.It would make intuitive sense that if we were to train one of the above models (particularly, thosethat are generative, such as variational autoencoders or generative adversarial networks) on imagescontaining, e.g. text, the learned features would be individual letters, since those are the buildingblocks of the provided images. In addition to matching our intuition, a model that realizes (from noisyraw pixels) that the building blocks of text are letters, and is able to extract a representation basedon those, has found meaningful structure in the data, and can prove it by being able to efficientlycompress text images. Figure 1: Features extracted by HCN. Left: from multiple images. Right: from a single image.1Under review as a conference paper at ICLR 2017However, this is not the case with existing incarnations of the above models1. We can see in Fig. 1the features recovered by the hierarchical compositional network (HCN) from a single image with nosupervision. They appear to be reasonable building blocks and are easy to find for a human. Yet weare not aware of any model that can perform such apparently simple recovery with no supervision.The HCN is a multilayer generative model with features defined at each layer. A feature (at a givenposition) is defined as the composition of features of the layer immediately below (by specifying theirrelative positions). To increase flexibility, the positions of the composing features can be perturbedslightly with respect to their default values (pooling). 
This results in a latent variable model, withsome of the latent variables (the features) being shared for all images while others (the pool states)are specific for each image.Comparing HCN with other generative models for images, we note that existing models tend tohave at least one of the following limitations: a) priors are not rich enough; typically, the sources ofvariation are not distributed among the layers of the network, and instead the generative model isexpressed as X=f(Y)+"whereYand"are two set of random variables, Xis the generated imageandf()is the network, i.e., the entire network behaves as a sophisticated deterministic function, b)the inference method (usually a separate recognition network) considers all the latent variables asindependent and does not solve explaining away, which leads to c) the learned features being notdirectly interpretable as reusable parts of the learned images.Although directed models enjoy important advantages such as the ability to represent causal semanticsand easy sampling mechanics, it is known that the “explaining away” phenomenon makes inferencedifficult in these models (Hinton et al., 2006). For this reason, representation learning efforts havelargely focused on undirected models (Salakhutdinov & Hinton, 2009), or have tried to avoid theproblem of explaining away by using complementary priors (Hinton et al., 2006).An important contribution of this work is to show that approximate inference using max-product mes-sage passing (MPMP) can learn features that are composable, interpretable and causally meaningful.It is also noteworthy that unlike previous works, we consider the weights (a.k.a. features) to be latentvariables and not parameters. Thus, we do not use separate expectation-maximization (EM) stages.Instead, we perform feature learning and pool state inference jointly as part of the same messagepassing loop.When augmented with supervision information, HCN can be used for classification, with inferenceand learning still being taken care of by a largely unmodified MPMP procedure. After training,discrimination can be achieved via a fast forward pass which turns out to have the same functionalform as a convolutional neural network (CNN).The rest of the paper is organized as follows: we describe the HCN model in Section 2; Section 3describes learning and inference in the single layer and multilayer HCNs; Section 4 tests the HCNexperimentally and we conclude with a brief discussion in Section 5.2 T HEHIERARCHICAL COMPOSITIONAL NETWORKThe HCN model is a discrete latent variable model that generates binary images by composing partswith different levels of abstraction. These parts are shared across all images. Training the modelinvolves learning such parts from data as well as how to combine them to create each concrete image.The HCN model can be expressed as a factor graph consisting only of three types of factors: AND,OR and POOL. These perform the obvious binary operations and will be defined more preciselylater in this section. The flexibility of the model allows training in supervised, semisupervisedand unsupervised settings, including missing image data. Once trained, the HCN can be used forclassification, missing value completion (pixel inference), sparsification, denoising, etc. See Fig. 2for a factor graph of the complete model. Additional details of each layer type are given in Fig. 4.At a high level, the HCN consists of a class layer at the top followed by alternating convolutionallayers and pooling layers. 
Inside each layer there is a sparsification , arepresentation andweights1Discriminative models find features that are good for classification, but not for generation (the trainingobjective is not constrained enough). Existing generative models also fail at recovering the building blocks of animage because they either a) mix positive and negative weights (which turns out to be critical for them beingtrainable via backpropagation) or b) lack inference mechanisms able to perform explaining away.2Under review as a conference paper at ICLR 2017Noisy channelPooling layerFeature layerPooling layerFeature layerNoisy channelPooling layerFeature layerPooling layerFeature layerNoisy channelPooling layerFeature layerPooling layerFeature layerFigure 2: Factor graph of the HCN model when connected to multiple images Xn. The weights arethe only variables that entangle multiple images. The top variables are clamped to 1 and the bottomvariables are clamped to Xn. Additional details of each layer type are given in Fig. 4.(a.k.a. features), each of which is a multidimensional array of latent variables. The class layer selectsa category, and within it, which template is going to be used, producing the top-level sparsification. Asparsification is simply an encoding of the representation. A sparsification encodes a representationby specifying which features compose it and where they should be placed. The features are in turnstored in the form of weights . Convolutional layers deterministically combine the sparsification andthe weights of a layer to create its representation. Pooling layers randomly perturb the position of theactive elements (within a local neighborhood), introducing small variations in the process.2.1 B INARY CONVOLUTIONAL FEATURE LAYER (SINGLE -LAYER HCN)This layer can perform non-trivial feature learning on its own. We refer to it as a single-layer HCN.See Section 4.1 for the corresponding experiments.In this case, since there is no additional top-down structure, a binary image is created by placingfeatures at random locations of an image. Wherever two features overlap, they are ORed, i.e., if apixel of the binary image is activated due to two features, it is simply kept active. We will call Wtothe features, Sto the sparsification of the image (locations at which features are placed in that image)andXto the image. All of these variables are multidimensional binary arrays.The values of each of the involved arrays for a concrete example with a single-channel image is givenin Fig. 3 (to display Swe maximize over f). The corresponding diagram is shown in Fig. 4.In practice, each image Xis possibly multichannel, so it will have size FXHXWX, where thefirst dimension is the number of channels in the image and the other two are its height and width. Shas size FSHSWS, where the first dimension is the number of features and the other two areits height and width. We refer to an entry of SnasSfrc. Setting an entry Sfrc= 1corresponds toplacing feature fat position (r;c)in the final image X. The features themselves are stored in W,which has size FbelowWFWHWWW, where FW=FSandFbelowW =FX. I.e., each feature is a3Under review as a conference paper at ICLR 2017(a) ImageX (b) Sparsification S (c) FeaturesW (d) Reconstruction RFigure 3: Unsupervised analysis of image Xby a standalone convolutional feature layer of HCN.small 3D array containing one of the building blocks of the image. 
Those are placed in the positionsspecified by S, and the same block can be used many times at different positions, hence calling thislayer convolutional2.We can fully specify a probabilistic model for a binary images by adding independent priors overthe entries of SandWand connecting those to Xthrough a binary convolution and a noisy channel.The complete model isp(S) =Yfrcp(Sfrc) =YfrcpSfrcS(1pS)1Sfrcp(W) =Yafrcp(Wafrc) =YafrcpWafrcW (1pW)1Wafrc(1)p(XjR) =Yarcpnoisy(XarcjRarc)withR= bconv(S;W )andpnoisy(1j0) =p10;pnoisy(0j1) =p01;which depends on four scalar parameters pS;pW;p01;p10, controlling the density of features in theimage, of pixels in each feature, and the noise of the channel, respectively. The indexes a;f;r;c runover channels, features, rows and columns, respectively.We have used the binary convolution operator R= bconv(S;W ). A binary convolution performsthe same operation as a normal convolution, but operates on binary inputs and truncates outputsabove 1. Our latent variables are arranged as three- and four-dimensional arrays, so we defineR= bconv(S;W )to meanRa;:;:= min(1;Pfconv2D(Sf;:;:;Wa;f;:;:))where conv2D(;)is theusual 2D convolution operator, RandSare binary 3D arrays and Wis a binary 4D arrays. Theoperator min(1;)truncates values above 1 to 1, performing the ORing of two overlapping featurespreviously mentioned.The binary convolution (and hence model (1)) can be expressed as a factor graph, as seen in Fig. 4.The AND factor can be written as AND (bjt1;t2)and takes value 0 when the bottom variable bis thelogical AND of the two top variables t1andt2. It takes value1 in any other case. The OR factor,OR(bjt1:::;t M)takes value 0 when the bottom variable bis the logical OR of the Mtop variablest1:::;t M. It takes value1 in any other case.When this layer is not used in standalone mode, but inside a multilayer HCN, the variables Rareconnected to the pooling layer immediately below (instead of being connected to the image Xthroughthe noisy channel) and the variables Sare connected to the pooling layer immediately above (insteadof being connected to the prior).2.2 T HE CLASS LAYERWe assume for now that a single class is present in each image. We can then writelogp(c1;:::;c K) =POOL (c1;:::;c Kj1)whereckare mutually exclusive binary variables representing which of the Kcategories is present.In general, we define POOL (b1;:::;b Mjt= 1) =logMwhen exactly one of the bottom variablesb1;:::;b mtakes value 1 (we say that the pool is active), and POOL (b1;:::;b Mjt= 0) = 0 whenbm= 08m(the pool is off). It takes value 1 in any other case.2Additionally, the convolution implies the relations H X=HW+HS1and W X=WW+WS14Under review as a conference paper at ICLR 2017ABAAB(a) Binary convolutionR4R1s3R2R3s2s1w2ORw1AND (b) Feature layerR1R2R3s1s2s3POOLUOR (c) Pooling layerFigure 4: Diagrams of binary convolution and factor graph connectivity for 1D image.Within each category, we might have multiple templates. Each template corresponds to a differentvisual expression of the same conceptual category. For instance, if one category is furniture, wecould have a template for chair and another template for table. Each category has binary variablesrepresenting each of the Jtemplates,sjkwithj2[1:::J]. If a category is active, exactly one of itstemplates will be active. 
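As a reading aid, model (1) can be sampled in a few lines of array code. The sketch below is written directly from the equations above and is not the authors' implementation; the shapes and densities in the example call are arbitrary placeholders.

```python
# Minimal NumPy/SciPy sketch of sampling from the single-layer model (1):
# draw S and W from their Bernoulli priors, combine them with the binary
# convolution bconv, and pass the result through the asymmetric noisy channel.
import numpy as np
from scipy.signal import convolve2d


def bconv(S, W):
    """R[a] = min(1, sum_f conv2D(S[f], W[a, f])): overlapping features are ORed."""
    F_X, F_S, H_W, W_W = W.shape
    _, H_S, W_S = S.shape
    R = np.zeros((F_X, H_S + H_W - 1, W_S + W_W - 1), dtype=int)
    for a in range(F_X):          # image channels
        for f in range(F_S):      # features
            R[a] += convolve2d(S[f], W[a, f], mode="full")
    return np.minimum(1, R)       # truncate above 1 (the ORing of overlaps)


def sample_image(rng, F_X=1, F_S=4, H_S=20, W_S=20, H_W=5, W_W=5,
                 p_S=0.01, p_W=0.2, p01=0.01, p10=0.01):
    S = (rng.random((F_S, H_S, W_S)) < p_S).astype(int)        # feature placements
    W = (rng.random((F_X, F_S, H_W, W_W)) < p_W).astype(int)   # binary features
    R = bconv(S, W)                                            # clean reconstruction
    flip = np.where(R == 1, p01, p10)                          # channel flip probabilities
    X = np.abs(R - (rng.random(R.shape) < flip).astype(int))   # noisy observed image
    return X, S, W, R


X, S, W, R = sample_image(np.random.default_rng(0))
```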
The joint probability of the templates is thenlogp(SLjc1;:::;c K) =Xklogp(s1k;:::;s Jkjck) =XkPOOL (s1k;:::;s Jkjck)where these JKvariables are arranged as a 3D array of size 11JKcalledSLwhich formsthe top-level sparsification of the template. A sample from SLwill always have exactly one elementset to 1 and the rest set to 0. Superscript Lis used to identify the layer to which a variable belongs.Since there are Llayers,SLis the top layer sparsification.2.3 T HE POOLING LAYERIn a multilayer HCN, feature layers and pooling layers appear in pairs. Inside layer `, the poolinglayer`is placed below the feature layer `.Since the convolutional feature layer is deterministic, any variation in the generated image mustcome from the pooling layers (and the final noisy channel). Each pooling layer shifts the positionof the active units in R`to produce the sparsification S`1in the layer below. This shifting is local,constrained to a region of size3HPWP1, the pooling window. When two or more active unitsinR`are shifted towards the same position in S`1, they result in a single activation, so the numberof active units in S`1is equal or smaller than the number of activations in R`.The above description should be enough to know how to sample S`1fromR`, but to provide arigorous probabilistic description, we need to introduce the intermediate binary variables Ur;c;f;r;c; ,which are associated to a shift r;cof the element R`frc. The HPWPintermediate variablesassociated to the same element R`frcare noted as U`:;:;frc. Since an element can be shifted to a singleposition per realization and only when it is active, the elements in U`:;:;frcare grouped into a poollogp(U`jR`) =Xfrclogp(U`:;:;frcjR`frc) =XfrcPOOL (U`:;:;frcjR`frc)and thenS`1can be obtained deterministically from U`by ORing the HPWPvari-ables ofUthat can potentially turn it on, logp(S`1jU`) =Pfr0c0logp(S`1fr0c0jU`) =Pfr0c0OR(S`1fr0c0jfUr;c;f;r;cgr0:r+r;c0:c+c):i.e., the above expression evaluates to 0 if theabove OR relations are satisfied and to 1 if they are not.3The described pooling window only allows for spatial perturbations, i.e., translational pooling. A moregeneral pooling layer would also pool in the third dimension (Goodfellow et al., 2013), across features, whichwould introduce richer variation and also impose a meaningful order in the feature indices. Though we donot pursue that option in this work, we note that this type of pooling is required for a rich hierarchical visualmodel. In fact, the pooling over templates that we special-cased in the description of the class layer would fit asa particular case of this third-dimension pooling.5Under review as a conference paper at ICLR 20172.4 J OINT PROBABILITY WITH MULTIPLE IMAGESThe observed binary image Xcorresponds to the bottommost sparsification4S0after it has traversed,element by element, a noisy channel with bit flip probabilities p(Xfrc= 1jS0frc= 0) =p10<0:5andp(Xfrc= 0jS0frc= 1) =p01<0:5. This defines p(XjS0).Finally, if we consider the weight variables to be independent Bernoulli variables with a fixed per-layer sparse prior p`Wthat are drawn once and shared for the generation of all images, we can writethe joint probability of multiple images, latent variables and weights aslogp(fXn;Hn;CngNn=1;fW`gL`=1) =LX`=1logp(W`) +NXn=1logp(XnjS0n) + logp(SLnjCn) + logp(Cn)+NXn=1LX`=1logp(S`1njU`n) + logp(U`njR`n) + logp(R`njS`n;W`)where we have collected all the category variables fckgof each image in Cnand the remaining latentvariables in Hnand for convenience. 
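A similarly rough sketch of sampling the pooling layer of Section 2.3: each active unit of R is shifted by a uniformly random offset inside its HP x WP window, and colliding shifts are ORed into a single activation. How shifts are handled at the image border is not spelled out above, so the clipping used here is an assumption.

```python
# Rough sketch (not the authors' code) of one pooling layer sample:
# one random shift per active unit of R, results ORed into S_below.
import numpy as np


def sample_pooling_layer(R, HP=3, WP=3, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    F, H, W = R.shape
    S_below = np.zeros_like(R)
    dh, dw = HP // 2, WP // 2                        # window centred on the unit (assumed)
    for f, r, c in zip(*np.nonzero(R)):
        r2 = int(np.clip(r + rng.integers(-dh, dh + 1), 0, H - 1))
        c2 = int(np.clip(c + rng.integers(-dw, dw + 1), 0, W - 1))
        S_below[f, r2, c2] = 1                       # collisions collapse to one activation
    return S_below
```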
Each image uses its own copy of the latent variables, but theweights are shared across all images, which is the only coupling between the latent variables.The above expression shows how, in addition to factorizing over observations (conditionally on theweights), there is a factorization across layers. Furthermore, the previous description of each of theselayers implies that the entire model can be further reduced to small factors of type AND, OR andPOOL, involving only a few local variables each.Since we are interested in a point estimate of the features, given the images fXngNn=1and a (possiblyempty)5subset of the labels fCngNn=1, we will attempt to recover the maximum a posteriori6(MAP)configuration over features, sparsifications, and unknown labels. Note that for classification, selectingfW`gL`=1by maximizing the joint probability is very different from selecting it by maximizing adiscriminative loss of the type logp(fCngNn=1jfXngNn=1;fW`gL`=1), since in this case, all the priorinformation p(X)about the structure of the images is lost. This results in more samples beingrequired to achieve the same performance, and less invariance to new test data.Once learning is complete, we can fix fW`gL`=1, thus decoupling the model for every image, and useapproximate MAP inference to classify new test images, or to complete them if they include missingdata (while benefiting from the class label if it is available).Even though we only consider the single-class-per-image setting, the compositional property of thismodel means that we can train it on single-class images and then, without retraining, change the classlayer to make it generate (and therefore, recognize) combinations of classes in the same image.3 L EARNING AND INFERENCEWe will consider first the simpler case of a single-layer HCN, as described in Section 2.1. Then wewill tackle inference in the multilayer HCN.3.1 L EARNING IN SINGLE -LAYER HCNIn this case, for model (1), we want to findS;W= arg maxS;Wp(XjS;W )p(S)p(W): (2)This is a challenging problem even in simple cases. In fact, it can be easily shown that boolean matrixfactorization (BMF), a.k.a. boolean factor analysis, arises as a particular case of (2)in which the4Alternatively, one could introduce the noisy channel between R0andX, but that would be equivalent to ourformulation using a pooling window of size 111at the bottommost layer.5The model was described as unsupervised, but the class is represented in latent variable Cn, which can beclamped to its observed value, if it is available.6Note that we are performing MAP inference over discrete variables, where concerns about the arbitrarinessof MAP estimators (see e.g., (Beal, 2003) Chapter 1.3) do not apply.6Under review as a conference paper at ICLR 2017heights and widths of all the involved arrays are set to one. BMF is a decades-old problem proved tobe NP-complete in (Stockmeyer, 1975) and with applications in machine learning, communicationsand combinatorial optimization. Another related problem is non-negative matrix factorization (NMF)(Lee & Seung, 1999), but NMF is additive instead of ORing the contributions of multiple features,which is not desired here.One of the best-known heuristics to address BMF is the Asso (Miettinen et al., 2006). Unfortunately,it is not clear how to extend it to solve (2)because it relies on assumptions that no longer hold inthe present case. 
The variational bound of (Jaakkola & Jordan, 1999) addresses inference in thepresence of a noisy-OR gate and was successfully used in by ( ˇSingliar & Hauskrecht, 2006) to obtainthe noisy-OR component analysis (NOCA) algorithm. NOCA addresses a very similar problem to(2), the two differences being that a) the weight values are continuous between 0 and 1 (instead ofbinary) and b) there is no convolutional weight sharing among the features. NOCA can be modifiedto include the convolutional weight sharing, but it is not an entirely satisfactory solution to the featurelearning problem as we will show. We observed that the obtained local maxima, even after significanttweaking of parameters and learning schedule, are poor for problems of small to moderate size.We are not aware of other existing algorithms that can solve (2)for medium image sizes. The model(1)is directly amenable to mean-field inference without requiring the additional lower-bounding usedin NOCA, but we experimented with several optimization strategies (both based in mean field updatesand gradient-based) and the obtained local maxima were consistently worse than those of NOCA.In (Ravanbakhsh et al., 2015) it is shown that max-product message passing (MPMP) produces state-of-the-art results for the BMF problem, improving even on the performance of the Asso heuristic.We also address problem (2)using MPMP. Even though MPMP is not guaranteed to converge, wefound that with the right schedule, even with very slight or no damping, good solutions are foundconsistently.Model (1)can be expressed both as a directed Bayesian network or as a factor graph using onlyAND and OR factors, each involving a small number of local binary variables. Finding features andsparsifications can be cast as MAP inference7in this factor graph.MPMP is a local message passing technique to perform MAP inference in factor graphs. MPMP isexact on factor graphs without loops (trees). In loopy models, such as (1), it is an approximation withno convergence guarantees8, although convergence can be often attained by using some damping0<1. See Appendix C for a quick review on MPMP and Appendix D for the message updateequations required for the factors used in this work. Unlike Ravanbakhsh et al. (2015) which usesparallel updates and damping, we update each AND-OR factor9in turn, following a random in asequential schedule. This results in faster convergence with less or no damping.3.2 L EARNING IN MULTILAYER HCN ( UNSUPERVISED ,SEMISUPERVISED ,SUPERVISED )Despite its loopiness, we can also apply MPMP inference to the full, multilayer model and obtaingood results. The learning procedure iterates forward and backward passes (a precise description canbe found in Algorithm 1 below). In a forward pass, we proceed updating the bottom-up messages tovariables, starting from the bottom of the hierarchy (closer to the image) and going up to the classlayer. In a backward pass, we update the top-down messages visiting the variables in top-down order.Messages to the weight variables are updated only in the forward pass. We use damping only inthe update of the bottom-up messages from a pooling layer during the forward pass. The AND-ORfactors in the binary convolutional layer form trees, so we treat each of these trees as a single factor,since closed form message updates for them can be obtained. Those factors are updated once inrandom order inside each layer, i.e., sequentially. The pools at the class layer also from a tree, sowe also treat them as a single factor. 
The message updates for AND, OR and POOL factors followtrivially from their definition and are provided in Appendix D.7Note that we do not marginalize the latent variables (or the weights), but find their MAP configuration givena set of images. The sparse priors on the weights and the sparsification act as regularizers and prevent overfitting.8MPMP works by iterating fixed point equations of the dual of the Bethe free energy in the zero-temperaturelimit. Convexified dual variants (see Appendix C) are guaranteed to converge, but much slower.9Each OR factor is connected to several AND factors which together form a tree. We update the incomingand outgoing messages of the entire tree, since they can be computed exactly.7Under review as a conference paper at ICLR 2017After enough iterations, weights are set to 1 if their max-marginal difference is positive and to 0otherwise. This hard assignment converts some of the AND factors into a pass-through and the restin disconnections. Thus the weight assignments define the connectivity between S`andR`on a newgraph without ANDs. This is the learned model, that we can use to perform inference with with onnew test images.3.3 I NFERENCE IN MULTILAYER HCNTypical inference tasks are classification and missing value imputation. For classification, we findthata single forward pass seems good enough and further forward and backward passes are notneeded (see Algorithm 1 for the description of the forward and backward passes). For missingvalue imputation a single forward and top-down pass is enough. In order to achieve higher qualityexplaining-away10, we use a top-down pass instead of a backward pass. A top-down pass differs froma backward pass in that we replace step 5) with multiple alternating executions of steps 5) and 2).Therefore, it is not strictly a backward pass, but it proceeds top-down in the sense that once a layerhas been fully processed, it is never visited again.Interestingly, the functional form of the forward pass of an HCN is the same as that of a standardCNN, see Section 3.4, and therefore, an actual CNN can be used to perform a fast forward pass.Algorithm 1 Learning in Hierarchical Compositional NetworksInput: Hyperparameters p01;p10;fp`WgL`=1, datafXn;CngNn=1and network structure (pool and weightsizes for each layer)InitInitialize bottom-up messages and messages to fW`gto zero. Initialize the top-down messages to 1.Initialize messages to Wfrom its prior uniformly at random in (0:9pW;pW)to break symmetry. Set constantbottom-up messages to S0:m(S0frc) = (k1k0)Xfrc+k0withk1= log1p01p10andk0= logp011p10repeatForward pass:for`in1;:::;L do1) Update messages from OR to U`in parallel2) Update messages from POOL to R`in parallel with damping 3) Update messages from AND-OR to W`andS`sequentially in random orderend forUpdate message from all class layer POOLs to SL. Hard assign Cnif label is available.Backward pass:for`inL;:::; 1do4) Update messages from AND-OR to R`sequentially in random order5) Update messages from POOL to U`in parallel6) Update messages from OR to S`1in parallelend forCompute max-marginals by summing incoming messages to each variableuntil Fixed point or iteration limitreturn Max-marginal differences of S`,W`andR`3.4 A BOUT THE HCN FORWARD PASS3.4.1 F UNCTIONAL CORRESPONDENCE WITH CNNAfter a single forward pass in an HCN (considering that the weights are known, after training), weget an estimate of the MAP assignment over categories. 
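The bottom-up evidence that Algorithm 1 clamps to S0 is a simple affine function of the observed pixels. Spelled out below in the same single-scalar (difference) message convention, and only as a sketch of that one initialization step:

```python
# Constant bottom-up messages to S0 from the noisy channel, as in Algorithm 1:
# each message is log p(X | S0=1) - log p(X | S0=0) for the corresponding pixel.
import numpy as np


def bottom_up_messages(X, p01, p10):
    k1 = np.log((1.0 - p01) / p10)    # evidence contributed by an observed 1
    k0 = np.log(p01 / (1.0 - p10))    # evidence contributed by an observed 0
    return (k1 - k0) * X + k0


# Changing p01 and p10 only transforms these messages monotonically, which is
# the basis of the noise-level invariance argued in Section 3.4.2 below.
m0 = bottom_up_messages(np.array([[0, 1], [1, 0]]), p01=0.03, p10=0.03)
```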
In practice, this assignment seems goodenough for classification and further forward and backward passes are not needed.The functional form of the first forward pass can be simplified because of the initial strongly negativetop-down messages. Under these conditions, the message update rules applied to the pooling layersof the HCN have exactly the same functional form11as the max-pooling layer in a standard CNN.Similarly, applying the message update rule to the convolutional layers of the HCN —when the10To avoid symmetry problems, instead of making the distribution of each POOL perfectly uniform, wecan introduce slight random perturbations while keeping the highest probability value at the center of the pool.Doing so speeds up learning and favors centered backward pass reconstructions in the case of ties.11See the Appendix D for the update rules of the messages of each type factor.8Under review as a conference paper at ICLR 2017weights are known— has the same functional form as performing a standard (not binary) convolutionof the bottom-up messages with the weights, just like in a standard CNN. At the top, the max-marginalover categories will select the one with the template with the largest bottom-up message. This can berealized with max-pooling over the feature dimension as done in (Goodfellow et al., 2013), or closelyapproximated using a fully connected layer and a softmax, as in more standard CNNs.Simply put, the binary weights learned by an HCN can be copied to a standard CNN with linearactivations and they will both produce the same classification results when we applied to the bottom-upmessages (which are a positive scaling of the input data Xplus a constant).3.4.2 I NVARIANCE TO NOISE LEVELConsider we generate two data sets with the HCN model using the same weights but different bit-flipprobabilities. If those probabilities are known, would we use different classifiers for each dataset? Ifwe use a single forward pass, changing p01andp10produces a different monotonic transformation ofall the bottom-up messages at every layer of the hierarchy, but the selected category, which dependsonly on which variable has the largest value , will not change. So, with a single-pass classifier, ourclass estimation does not change with the noise level. This has the important implication that an HCNdoes not need to be trained with noisy data to classify noisy data. It can be trained with clean data(where there is more signal and learning parts is easier) and used on noisy data without retraining.4 E XPERIMENTSIn the following, we experimentally characterize both the single-layer and multilayer HCN.4.1 S INGLE -LAYER HCNWe create several synthetic (both noisy and noiseless) images in which the building blocks –orfeatures– are obvious to a human observer and check the ability of HCN to recover the them. Thetask is deceptively simple, and the existing the state of the art at this task, NOCA, is unable to solveseveral of our examples. Since the number of free parameters of the model is so small (3 in the caseof a symmetric noisy channel), these can be easily explored using grid search and selected usingmaximum likelihood. The sensitivity of the results to these parameters is small.HCN only requires straightforward MPMP with random order over the factors. For NOCA, initializingthe variational posterior over the latent sources and choosing how to interleave the updates of thisposterior with the update of the additional variational parameters ( ˇSingliar & Hauskrecht, 2006) istricky. 
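To make the correspondence of Section 3.4.1 concrete, the sketch below writes one feature-plus-pooling stage of the fast forward pass as ordinary array code: a linear correlation of the bottom-up messages with the binary weights followed by a local max over the pooling window. It illustrates the functional form only and is not the authors' implementation.

```python
# Illustrative "CNN-like" HCN forward stage with the binary weights fixed.
import numpy as np
from scipy.ndimage import maximum_filter
from scipy.signal import correlate2d


def forward_layer(messages, W_binary, HP=3, WP=3):
    """messages: (F_X, H, W) bottom-up evidence; W_binary: (F_X, F_S, h, w) binary weights."""
    F_X, F_S = W_binary.shape[:2]
    # Feature layer: same functional form as a linear-activation convolution.
    scores = np.stack([
        sum(correlate2d(messages[a], W_binary[a, f], mode="valid")
            for a in range(F_X))
        for f in range(F_S)
    ])
    # Pooling layer: local max over each HP x WP window (the constant -log M
    # offset of the POOL factor does not change which category wins).
    return maximum_filter(scores, size=(1, HP, WP))
```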
For best results, during each E step we repeated the following 10 times: update the variationalparameters for 20 iterations and then update the variational posterior (which is a single closed formupdate). The M update also required an inner loop of variational parameter updating.The performance of HCN and NOCA can be assessed visually in Fig. 5. Column (a) shows eachinput image (these are single-image datasets) and the remaining columns show the features andreconstructions obtained by HCN and NOCA. In some of the input images we have added noisethat flips pixels with 3% probability. For HCN (respectively NOCA), we binarize all the beliefs(respectively, variational posteriors) from the [0;1]range by thresholding at 0.5 and then perform abinary convolution to obtain the reconstruction. Because noise is not included in this reconstruction,a cleaner image may be obtained, resulting in unsupervised denoising (rows 1 and 4 of Fig. 5).For a quantitative comparison, refer to Tab. 1. One algorithm-independent way to measure perfor-mance in the feature learning problem is to measure compression. It is known that to transmit a longsequence of Nbits which are 1 with probability p, we only need to transmit NH(p)bits with anoptimal encoding, where His the entropy. Thus sparse sequences compress well. In order to transmitthese images without loss, we need to transmit either one sequence of bits (encoding the imageitself) or three sequences of bits, one encoding the features, another encoding the sparsification and alast one encoding the errors between the reconstruction and the original image. Ideally, the secondmethod is more efficient, because the features are only sent once and the sparsification and errorssequences are much sparser than the original image. The ratio between the two is shown togetherwith running time on a single CPU. Unused features are discarded prior to computing compression.9Under review as a conference paper at ICLR 2017(a) Input image X (b) HCNW (c) HCNR (d) NOCAW (e) NOCARFigure 5: Features extracted by HCN and NOCA and image reconstructions for several datasets. Bestviewed on screen with zoom.(a) ImageX1 (b) ImageX2 (c) Batch HCN W (d) Online HCN W (e) Online HCN WFigure 6: Online learning. (a) and (b) show two sample input images; (c) and (d) show the featureslearned by batch and online HCN using 30 input images and 100 epochs; (e) shows the featureslearned by online HCN using 3000 input images and 1 epoch.10Under review as a conference paper at ICLR 2017Two bars Symbols Clean letters Noisy letters Textcomp. time comp. time comp. time comp. time comp. timeNOCA 84% 0.67 m 85% 92 m 98% 662 m 102% 716 m 84% 1222 mHCN 83% 0.07 m 11% 0.42 m 38% 25 m 73% 24 m 28% 31 mTable 1: Comp.: E(X)=(E(S)+E(W)+E(XR)), whereEis the encoding cost. Time: minutes.4.2 O NLINE LEARNINGThe above experiments use a batch formulation, i.e., consider simultaneously all the available trainingdatafXngN1. Since the amount of memory required to store the messages for MPMP scales linearlywith the training data, this imposes a practical limit in the number of images that can be processed.In order to overcome this limit, we also consider a particular message update schedule in whichthe messages outgoing from factors connected to each image and sparsification Xn;Snare updatedonly once and therefore, after an image has been processed, can be discarded, since they are neverreused. This effectively allows for online processing of images without memory scaling issues. 
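The compression number of Table 1 reduces to entropy-coding three sparse binary arrays and comparing that cost with entropy-coding the raw image. The sketch below spells this out; reading the reported percentage as the parts-based cost divided by the raw-image cost (so lower is better) is an assumption, not something stated explicitly above.

```python
# Sketch of the compression measure: cost each binary array at N * H(p) bits,
# then compare the parts-based encoding (S, W, residual errors) with encoding X directly.
import numpy as np


def encoding_cost_bits(A):
    """Optimal Bernoulli code length for a binary array with density p."""
    p = A.mean()
    if p in (0.0, 1.0):
        return 0.0
    return A.size * (-p * np.log2(p) - (1 - p) * np.log2(1 - p))


def compression_ratio(X, S, W, R):
    errors = np.logical_xor(X, R)          # pixels the reconstruction gets wrong
    parts = encoding_cost_bits(S) + encoding_cost_bits(W) + encoding_cost_bits(errors)
    return parts / encoding_cost_bits(X)
```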
Twomodifications are needed in practice for this to work well: first, instead of processing only oneimage at a time, better results are obtained if the factors of multiple images (forming a minibatch) areprocessed in random order. Second, a forgetting mechanism must be introduced to avoid accumulatingan unbounded amount of evidence from the processed minibatches.In detail, the beliefs of the variables Ware initialized uniformly at random in the interval (0:9pW;pW)(we call these initial beliefs b(0)prior(Wafrc)) and the beliefs of the variables fSngN1are initialized topS. The initial outgoing messages from all the AND-OR factors are set to 0. Since each factoris only processed once, this allows implementing MPMP without ever having to store messagesand only requiring to store beliefs. After processing the first minibatch using MPMP (with nodamping), we call the resulting belief over each of the weights b(0)post(Wafrc)(as it standard for MPMPof binary variables, beliefs are represented using max-marginal differences in log space). Insteadof processing the second minibatch using b(0)post(Wafrc)as the initial belief, we use b(1)prior(Wafrc) =b(0)post(Wafrc) + (1)b(0)prior(Wafrc), i.e., we “forget” part of the observed evidence, substituting itwith the prior. This introduces an exponential weighing in the contribution of each minibatch. Theforgetting factor is 2(0;1]specifies the amount of forgetting. When = 1this reduces to normalMPMP (no forgetting), when = 0, we completely forget the previous minibatch and process thenew one from scratch.Fig. 6 illustrates online learning. HCN is shown 30 small images containing 5 randomly chosen andrandomly placed characters with 3% flipping noise (see Fig. 5.(a) and (b) for two examples). Theyare learned in different manners. Fig. 5.(c): as a single batch with damping = 0:8and using 100epochs (each factor is updated 100 times); Fig. 6.(d): with minibatches of 5 images, no damping,= 0:95and using 100 epochs; Fig. 6.(e): with minibatches of 5 images, no damping, = 0:95,using a single epoch, but using 3000 images, so that running time is the same.4.3 M ULTI -LAYER HCN: SYNTHETIC DATAWe create a dataset by combining two traits: a) either a square (with four holes) or a circle and b)either a forward or a backward diagonal line. This results in four patterns, which we group in twocategories, see Fig. 7.(a). Categories are chosen such that we cannot decide the label of an imagebased only on one of the traits. The position of the traits is jittered within a 33window, and aftercombining them, the position of the individual pixels is also jittered by the same amount. Finally,each pixel is flipped with probability 103. This sampling procedure corresponds a 2-layer HCNsampling for some parameterization. We generate 100 training samples and 10000 test samples.4.3.1 U NSUPERVISED LEARNINGWe train the HCN as described in Section C on the 100 training data samples, not using any labelinformation. We do set the architecture of the network to match the architecture generating the data.There are four hyperparameters in this model, p01;p10;p1W;p2W. Their selection is not critical. We11Under review as a conference paper at ICLR 2017will choose them to match the generation process. MAP inference does discover and disentanglealmost perfectly the compositional parts at the first and second layers of the hierarchy, see Figs. 7.(b)and 8.(a). In 8.(a), rows correspond with templates and columns correspond to each of the featuresof the first layer. 
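Returning to the online schedule of Section 4.2, the forgetting rule is a one-line convex mixture of beliefs. In the sketch below the MPMP sweep over a minibatch is replaced by a random stand-in, since only the mixing step is being illustrated:

```python
# Forgetting rule for online learning: mix the post-minibatch beliefs with the
# original random prior beliefs. Beliefs are max-marginal differences in log space.
import numpy as np


def forget(b_post, b_prior0, lam):
    """lam = 1 recovers batch MPMP (no forgetting); lam = 0 discards all
    evidence accumulated from earlier minibatches."""
    return lam * b_post + (1.0 - lam) * b_prior0


rng = np.random.default_rng(0)
p_W = 0.05
b_prior0 = rng.uniform(0.9 * p_W, p_W, size=(16, 1, 5, 5))   # symmetry-breaking init
b = b_prior0.copy()
for _ in range(10):
    b_post = b + rng.normal(0.0, 0.1, size=b.shape)  # stand-in for one MPMP sweep over a minibatch
    b = forget(b_post, b_prior0, lam=0.95)
```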
We can see that the model has “understood” the data and can be used to generatemore samples from it. Performing inference on this model is very challenging. We are not aware ofany previous method that can learn the features of this simple dataset with so few samples. In otherexperiments we verified that, using local message passing as opposed to gradient descent was criticalto successfully minimize our objective function. Results with the quality of Figs. 7.(b) and 8.(a) wereobtained in every run of the algorithm. Running time is 7 min on a single CPU.We can now clamp the discovered weights on both layers and use the fast forward pass to classifyeach training image as belonging to one of the four discovered templates (i.e., cluster them). Wecan even classify the test images as belonging to one of the four templates. When doing this, all theimages in the training set get assigned to the right template and only 60 out of 10000 images in thetest set do not get classified in the right cluster. This means that if we had just 4 labeled images, onefrom each cluster, we could perform 4-class minimally-supervised classification with just 0.6% error.Finally, we run a single forward-backward pass of the inference algorithm on a test image withmissing pixels. We show the inferred missing pixels in Fig. 7.(c). See also footnote 10.4.3.2 S UPERVISED LEARNINGNow we retrain the model using label information. This results in the same weights being found, butthis time the templates are properly grouped in two classes, as shown in Fig. 8.(a). Classification erroron the test set is very low, 0.07%. We now compare the HCN classification performance with that ofa CNN with the same functional form but trained discriminatively and with a standard CNN withReLU activations, a densely connected layer and softmax activation. We minimize the crossentropyloss function with L2regularization on the weights. The test errors are respectively 0.5% and 2.5%,much larger than those of HCN. We then consider versions of our training set with different levels ofpixel-flipping noise. The evolution of the test error is shown in Fig. 8.(c). For the competing methodswe needed many random restarts to obtain good results. Their regularization parameter was chosenbased on the test set performance.4.4 M ULTI -LAYER HCN: MNIST DATAWe turn now to a problem with real data, the MNIST database (LeCun et al., 1998), which contains60000/10000 training/testing images of size 2828. We want to generalize from very few samples,so we only use the first 40 digits of each category to train. We pre-process each image with afixed set of 16 oriented filters, so that the inputs are a 16-channel image. We use a 2-layer HCNwith 32 templates per class and 64 lower level features of size 2626and two layers of 33pooling,p1W= 0:001;p2W= 0:05. These values are set a priori, not optimized. Then we test onboth the regular MNIST training set and different corrupted versions12of it (same preprocessing12See Appendix E for examples of each corruption type.(a) 16 training samples and labels (b)W1, no supervision (c) Missing value imputationFigure 7: Samples from synthetic data and results from unsupervised learning tasks.12Under review as a conference paper at ICLR 2017(a) Supervised, unsupervised(top, bottom) W2(b)W1, discriminative training10-310-210-1Noise level in the input image0.000.050.100.150.200.250.300.350.400.45Test errorGenerative HCNDiscriminative HCNCNN (c) Effect of increased noise levelFigure 8: Discriminative vs. generative training and supervised vs. 
unsupervised generative training.(a) LearnedW1by HCN (b) LearnedW2by HCNCorruption HCN CNNNone 11.15% 9.53%Noise 20.69% 39.28%Border 16.97% 17.78%Patches 14.52% 16.27%Grid 68.52% 82.69%Line clutter 37.22% 55.77%Deletion 22.03% 25.05%(c) Test error with different cor-ruptionsFigure 9: First layer of weights learned by HCN and CNN on the preprocessed MNIST dataset.and no retraining). We follow the same preprocessing and procedure using a regular CNN withdiscriminative training and explore different regularizations, architectures and activation types, onlyfixing the pooling sizes and number of layers to match the HCN. We select the parameterization thatminimizes the error on the clean test set. This CNN uses 96 low level features. Results for all testsets are reported on Fig. 9.(c). It can be seen that HCN generalizes better. The weights of the firstlayer of the HCN after training are shown in Fig. 9.(a). Notice how HCN is able to discover reusableparts of digits.The training time of HCN scales exactly as that of a CNN. It is linear in each of its architecturalparameters: Number of images, number of pixels per image, features at each layer, size of thosefeatures, etc. However, the forward and backward passes of an HCN are more complex and optimizedcode for them is not readily available as it is for a CNN, so a significant constant factor separatesthe running times of both. Training time for MNIST is around 17 hours on a single CPU. The RAMrequired to store all the messages for 400 training images in MNIST goes up to around 150GB. Toscale to bigger training sets, an online extension (see Section 4.2) needs to be used.5 C ONCLUSIONS AND FUTURE WORKWe have described the HCN, a hierarchical feature model with a rich prior and provided a novelmethod to solve the challenging learning problem it poses. The model effectively learns convolutionalfeatures and is interpretable and flexible. The learned weights are binary, which is advantageous forstorage and computation purposes (Courbariaux et al., 2015; Han et al., 2015). Future work entailsadding more structure to the prior, leveraging more refined MAP inference techniques, exploringother update schedules and further exploiting the generalization-without-retraining capabilities ofthis model.13Under review as a conference paper at ICLR 2017REFERENCESMatthew James Beal. Variational algorithms for approximate Bayesian inference . University ofLondon London, 2003.Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neuralnetworks with binary weights during propagations. In Advances in Neural Information ProcessingSystems , pp. 3105–3113, 2015.Sanja Fidler, Marko Boben, and Ales Leonardis. Learning a hierarchical compositional shapevocabulary for multi-class object representation. arXiv preprint arXiv:1408.5516 , 2014.Amir Globerson and Tommi S Jaakkola. Fixing max-product: Convergent message passing algorithmsfor MAP LP-relaxations. In Advances in Neural Information Processing Systems , pp. 553–560,2008.Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in NeuralInformation Processing Systems , pp. 2672–2680, 2014.Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxoutnetworks. arXiv preprint arXiv:1302.4389 , 2013.Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networkswith pruning, trained quantization and huffman coding. 
arXiv preprint arXiv:1510.00149 , 2015.Tom Heskes. Stable fixed points of loopy belief propagation are local minima of the bethe free energy.InAdvances in neural information processing systems , pp. 343–350, 2002.Geoffrey E Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep beliefnets. Neural computation , 18(7):1527–1554, 2006.Tommi S Jaakkola and Michael I Jordan. Variational probabilistic inference and the qmr-dt network.Journal of artificial intelligence research , 10:291–322, 1999.Ya Jin and Stuart Geman. Context and hierarchy in a probabilistic image model. In 2006 IEEEComputer Society Conference on Computer Vision and Pattern Recognition (CVPR’06) , volume 2,pp. 2145–2152. IEEE, 2006.Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprintarXiv:1312.6114 , 2013.Daphne Koller and Nir Friedman. Probabilistic graphical models: principles and techniques . MITpress, 2009.Vladimir Kolmogorov. Convergent tree-reweighted message passing for energy minimization. PatternAnalysis and Machine Intelligence, IEEE Transactions on , 28(10):1568–1583, 2006.Yann LeCun, L ́eon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied todocument recognition. Proceedings of the IEEE , 86(11):2278–2324, 1998.Daniel D Lee and H Sebastian Seung. Learning the parts of objects by non-negative matrix factoriza-tion. Nature , 401(6755):788–791, 1999.Talya Meltzer, Amir Globerson, and Yair Weiss. Convergent message passing algorithms - a unifyingview. In Jeff A. Bilmes and Andrew Y . Ng (eds.), UAI, pp. 393–401, 2009.Pauli Miettinen, Taneli Mielik ̈ainen, Aristides Gionis, Gautam Das, and Heikki Mannila. The discretebasis problem. In European Conference on Principles of Data Mining and Knowledge Discovery ,pp. 335–346. Springer, 2006.Tom Minka et al. Divergence measures and message passing. Technical report, 2005.Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXivpreprint arXiv:1402.0030 , 2014.14Under review as a conference paper at ICLR 2017Ankit B Patel, Tan Nguyen, and Richard G Baraniuk. A probabilistic theory of deep learning. arXivpreprint arXiv:1504.00641 , 2015.Judea Pearl. Probabilistic reasoning in intelligent systems: networks of plausible inference . 1988.Hoifung Poon and Pedro Domingos. Sum-product networks: A new deep architecture. In ComputerVision Workshops (ICCV Workshops), 2011 IEEE International Conference on , pp. 689–690. IEEE,2011.Siamak Ravanbakhsh, Barnab ́as P ́oczos, and Russell Greiner. Boolean matrix factorization and noisycompletion via message passing. 2015.Ruslan Salakhutdinov and Geoffrey E Hinton. Deep boltzmann machines. In AISTATS , volume 1, pp.3, 2009.Shimony. Finding MAPs for belief networks is NP-hard. AIJ: Artificial Intelligence , 68, 1994.Zhangzhang Si and Song-Chun Zhu. Learning and-or templates for object recognition and detection.IEEE transactions on pattern analysis and machine intelligence , 35(9):2189–2205, 2013.Tom ́aˇsˇSingliar and Milo ˇs Hauskrecht. Noisy-or component analysis and its application to linkanalysis. Journal of Machine Learning Research , 7(Oct):2189–2213, 2006.Larry J Stockmeyer. The set basis problem is NP-complete . IBM Thomas J. Watson ResearchDivision, 1975.Huayan Wang and Koller Daphne. Subproblem-tree calibration: A unified approach to max-productmessage passing. In Proceedings of the 30th International Conference on Machine Learning(ICML-13) , pp. 190–198, 2013.Tom ́aˇs Werner. 
A linear programming approach to max-sum problem: A review. IEEE Trans. PatternAnalysis and Machine Intelligence , 29(7):1165–1179, July 2007.Christopher KI Williams and Nicholas J Adams. Dts: dynamic trees. Advances in neural informationprocessing systems , pp. 634–640, 1999.Ying Nian Wu, Zhangzhang Si, Haifeng Gong, and Song-Chun Zhu. Learning active basis model forobject detection and recognition. International journal of computer vision , 90(2):198–235, 2010.Long Zhu, Yuanhao Chen, Yifei Lu, Chenxi Lin, and Alan Yuille. Max margin and/or graph learningfor parsing the human body. In Computer Vision and Pattern Recognition, 2008. CVPR 2008.IEEE Conference on , pp. 1–8. IEEE, 2008.Long Zhu, Yuanhao Chen, Alan Yuille, and William Freeman. Latent hierarchical structural learningfor object detection. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conferenceon, pp. 1062–1069. IEEE, 2010.15Under review as a conference paper at ICLR 2017A R ELATED WORKThere is a plethora of previous works that address hierarchical feature learning, usually in the settingor real-valued images, as opposed to binary ones: Fidler et al. (2014); Zhu et al. (2008; 2010);Wu et al. (2010); Si & Zhu (2013); Poon & Domingos (2011). Many of those works explicitlyuse AND-OR graphs, in the same spirit as our work. The most outstanding difference, however,between previous works and HCN is that HCN allows multiple features to overlap, thus creatingnew compositions. For instance, if feature H is a centered horizontal line and feature V is a centeredvertical line, HCN can create a new feature “cross” that combines both, and the fact that both areoverlapping and sharing a common active pixel (and many common inactive pixels) is properlyhandled. In contrast, previously cited models cannot overlap features, so they partition the input spaceand dedicate separate subtrees to each of them, and do so recursively. We can see in Figure 5, top row,how we can generate 25 different cross variations using only two features. This would not be possiblewith any of the cited models, which would need to span each combination as a separate feature. Thisfundamental difference makes HCN combinatorially more powerful, but also less tractable. Bothlearning and inference become harder because feature reuse introduces the well-known “explainingaway” phenomenon (Hinton et al., 2006).As a side, note the difference between the meaning of “OR” as used in the present work and inprevious works on AND-OR graphs: what they call “OR”, is what we term POOL (an exclusivebottom-up OR of elements), whereas HCN has a novel third type of gate, the “OR” connection (anon-exclusive, top-down OR of elements) to be able to handle explaining away. Standard AND-OR(or more clearly, AND-POOL) graphs lack the top-down ORing and therefore are not able to handleexplaining away.In the compositional hierarchies of Fidler et al. (2014), the lack of feature reuse allows for inferenceto be exact, since the graphical model is tree-like. Features are learned using a heuristic that relieson the exact inference, similar in spirit to EM. The AND-OR template learning methods of (Zhuet al., 2008; 2010) use respectively max-margin and incremental concave-convex procedures tooptimize a discriminative score. Therefore they require supervision (unlike HCN) and a tractableinference procedure (to make the discriminative score easy to optimize), which again is achievedby not allowing overlapping features. 
The sum-product networks (SPNs) of (Poon & Domingos,2011) express features as product nodes. In order to achieve feature overlapping, two product nodesspanning the same set of pixels (but with possibly different activation patterns) should be activesimultaneously. This would violate the consistency requirement of SPNs, making HCN a morecompact way to express feature overlap13(with the price to be paid being lack of exact inference).The AND-OR template (AOT) learning of (Wu et al., 2010) again cannot deal properly with thegeneration of superimposed features, having to create new features to handle every combination. InSection B we will compare AOT feature learning and HCN feature learning and check how theselimitations make AOT unable to disentangle the generative features.Grammars exclude the sharing of sub-parts among multiple objects or object parts in a parse of thescene (Jin & Geman, 2006), and they limit interpretations to single trees even if those are dynamic(Williams & Adams, 1999). Our graphical model formulation makes the sharing of lower-levelfeatures explicit by using local conditional probability distributions for multi-parent interactions, andallows for MAP configurations (i.e, the parse graphs) that are not trees.The deep rendering model (DRM) of Patel et al. (2015) is, to some extent, a continuous counterpartof the present work. Although DRMs allow for feature overlap, the semantics are different: in HCNthe amount of activation of a given pixel is the same whether there are one or many features (causes)activating it, whereas in DRM the activation is proportional to the number of causes. This means thatthe difference between DRM and HCN is analogous to the difference between principal componentanalysis and binary matrix factorization: while the first can be solved analytically, the second is hardand not analytically tractable. This results in DRM being more tractable, but less appropriate tohandle problems with binary events with multiple causes, such as the ones posed in this paper.Two popular approaches to handle learning in generative models, largely independent of the modelitself, are variational autoencoders (V AEs) and generative adversarial networks (GANs). We are not13An exponentially big SPN could indeed encode an HCN.16Under review as a conference paper at ICLR 2017(a) Filter bank (b) Training samples (c) HCN features (d) Features from Wuet al. (2010)Figure 10: Results of training a modified HCN on a grayscale image. A filter bank is convolved withthe input image to provide the bottom up messages to each channel of HCN. The filter bank sizes inthis simple example are adapted to match those of generation. As a benchmark, Wu et al. (2010) isused on the same data and is also given knowledge of the filter bank in use. Top row: 33filter size.Bottom row: 77filter size.aware of any work that uses a V AE or GAN with a generative model like HCN and such an option isunlikely to be straightforward.Most common V AEs rely on the reparameterization trick for variance reduction. However, thistrick cannot be applied to HCN due to the discrete nature of its variables, and alternative methodswould suffer from high variance. 
Another limitation of V AEs wrt HCN is that they perform a singlebottom-up pass and lack of explaining away: HCN combines top-down and bottom-up information inmultiple passes, isolating the parent cause of a given activation, instead of activating every possiblecause.GANs need to compute rWD(GW("))whereD()is the discriminative network and GW(")is agenerative network parameterized by the features W. In this case, not only Wis binary, but also thegenerated reconstructions at every layer, so the GAN formulation cannot be applied to HCN as-is.One could in principle relax the binary assumption of features and reconstructions and use the GANparadigm to train a neural network with sigmoidal activations, but it is unclear that the lack of binaryvariables will still produce proper disentangling (the convolutional extension of NOCA also has thisproblem due to the use of non-binary features and produces results that are inferior to HCN).B C OMBINING WITH GRAYSCALE PREPROCESSINGThe HCN is a binary model. However, to process real-valued data, it can be coupled with aninitial grayscale-to-binary preprocessing step to do feature detection. We tested this by generating agrayscale version of our toy data and then computing the bottom-up messages to S0by convolvingthe input image with a filter bank. This is roughly equivalent to replacing the noisy binary channelof HCN with a Gaussian channel. We used 16 preprocessing filters, which means that S0has 16channels. 200 training images (unsupervised) were used. Two filter sizes, 33and77were tested.We also run the AOT feature learning method of Wu et al. (2010) on the same data for comparison.The results of training on 200 training images (unsupervised) is provided in Figure 10. When thelarger filter is used, the diagonal bars are harder to identify so their disentangling is poorer.C M AX-PRODUCT MESSAGE PASSING (MPMP)The HCN model can be expressed both as a directed Bayesian network or as a factor graph usingonly POOL, AND, and OR factors, each involving a small number of local binary variables. Both17Under review as a conference paper at ICLR 2017learning and ulterior classification can be cast as MAP inference in this factor graph. Other tasks,such as filling in unknown image data can also be performed by MAP inference.MAP inference can be performed exactly on factor graphs without loops (trees) in linear time, but itis an NP-hard problem for arbitrary graphs (Shimony, 1994). The factor graph describing our modelis highly structured, but also very loopy.There is large body of works (Wang & Daphne, 2013; Meltzer et al., 2009; Globerson & Jaakkola,2008; Kolmogorov, 2006; Werner, 2007), addressing the problem of MAP inference in loopy factorgraphs. Perhaps the simplest of these methods is the max-product algorithm, a variant of dynamicprogramming proposed in (Pearl, 1988) to find the MAP configuration in trees.The max-product algorithm defines a set of messagesma!i(yi)going from each factor ato each ofits variablesyi. The sum of the messages incoming to a variable (yi) =Pa:yi2yama!i(yi)definesits approximate max-marginal14(yi). The max-product algorithm then proceeds by updating theoutgoing messages from each factor in turn so as to make the approximate max-marginals consistentwith that factor. This algorithm is not guaranteed to converge if there are loops in the graph, and if itdoes, it is not guaranteed to find the MAP configuration. 
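The per-factor updates implied by this consistency condition have closed forms for the gates used here; they are listed in Appendix D below. As a reading aid, two of them are written out in the single-scalar difference convention (the OR update is analogous and omitted); the expressions follow from taking the difference of the factor's max-marginal evaluated at 1 and at 0.

```python
# Closed-form max-product updates for the AND and POOL factors, with every
# message stored as one scalar: its value at 1 minus its value at 0.
import numpy as np


def and_messages(m_t1, m_t2, m_b):
    """b = t1 AND t2; inputs are incoming difference messages, outputs are outgoing."""
    to_t1 = max(0.0, m_t2 + m_b) - max(0.0, m_t2)
    to_t2 = max(0.0, m_t1 + m_b) - max(0.0, m_t1)
    to_b = min(m_t1 + m_t2, m_t1, m_t2)
    return to_t1, to_t2, to_b


def pool_messages(m_t, m_b):
    """POOL with top t and bottoms b1..bM: exactly one bottom fires iff t = 1."""
    m_b = np.asarray(m_b, dtype=float)
    M = len(m_b)
    to_t = m_b.max() - np.log(M)
    to_b = np.empty(M)
    for m in range(M):
        best_other = np.max(np.delete(m_b, m)) if M > 1 else -np.inf
        to_b[m] = min(m_t - np.log(M), -best_other)
    return to_t, to_b
```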
Damping the updates of the factors has beenshown to improve convergence in loopy belief propagation (Heskes, 2002) and was justified as localdivergence minimization in (Minka et al., 2005). Using a damping factor 0<1for max-product,the update rule ismt+1a!i(yi) = (1)mta!i(yi) +maxyaniloga(yi;yani) +Xyj2yanimta!j(yj) + (3)and the original update rule is recovered for = 1. The valueis arbitrary and does not affect thealgorithm. We select it to make mt+1a!i(yi= 0) = 0 , so that messages can be stored as a single scalar.When storing messages in this way, their sum provides the max-marginal difference, which is enoughfor our purposes.Eq.(3)can be computed exactly for the three type of factors appearing in our graph, so messageupdating can be performed in closed form. Despite the graph of our model being very loopy, itturns out that a careful choice of message initialization, damping and parallel and sequential updatesproduces satisfactory results in our experiments. For further details about max-product inference andMAP inference via message passing in discrete graphical models we refer the reader to (Koller &Friedman, 2009).D M AX-PRODUCT MESSAGE UPDATES FOR AND, OR AND POOL FACTORSIn the following we provide the message update equations for the different types of factors used in themain paper. The messages are in normalized form: each message is a single scalar and correspondsto the difference between the unnormalized message value evaluated at 1 and the unnormalizedmessage value evaluated at 0. For each update we assume that the incoming messages mIN()for allthe variables of the factor are available. The incoming messages are the sum of all messages going tothat variable except for the one from the factor under consideration.The outgoing messages are well-defined even for 1 incoming messages, by taking the correspond-ing limit in the expressions below.D.1 AND FACTORBottom-up messagesmOUT(t1) = max(0;mIN(t2) +mIN(b))max(0;mIN(t2))mOUT(t2) = max(0;mIN(t1) +mIN(b))max(0;mIN(t1))Top-down messagemOUT(b) = min( mIN(t1) +mIN(t2);mIN(t1);mIN(t2))14The max-marginal of a variable in a factor graph gives the maximum value attainable in that factor graphfor each value of that variable.18Under review as a conference paper at ICLR 2017t1t2bAND(a) AND factorPOOLb1b2bMt (b) POOL factort1t2bORtM (c) OR factorFigure 11: Factors and variable labeling used in the message update equations.D.2 POOL FACTORBottom-up messagemOUT(t) = max( mIN(b1);:::; mIN(bM))logMTop-down messagesmOUT(bm) = min( mIN(t)logM;maxj6=mmIN(bj))D.3 OR FACTORBottom-up messagesmOUT(tm) = min( mIN(b) +Xj6=mmax(0;mIN(tj));max(0;mIN(ti))mIN(ti))withi= argmaxi6=mmIN(ti)Top-down messagemOUT(b) =mIN(ti) +Xj6=imax(0;mIN(tj))withi= argmaxmmIN(tm)19Under review as a conference paper at ICLR 2017E I MAGE CORRUPTION TYPE ILLUSTRATIONThe different types of image corruption used in Section 4.4 are shown in the following Figure:Figure 12: Different types of noise corruption used in Section 4.4.20
HyRYDfeEl
HJeqWztlg
ICLR.cc/2017/conference/-/paper78/official/review
{"title": "This paper tackles a very interesting topic. However, it makes a false claim and a discussion/comparison to existing work is necessary. Experiments on real images will also strengthen the current submission", "rating": "5: Marginally below acceptance threshold", "review": "This paper presents an approach to learn object representations by composing a set of templates which are leaned from binary images. \nIn particular, a hierarchical model is learned by combining AND, OR and POOL operations. Learning is performed by using approximated inference with MAX-product BP follow by a heuristic to threshold activations to be binary. \n\nLearning hierarchical representations that are interpretable is a very interesting topic, and this paper brings some good intuitions in light of modern convolutional neural nets. \n\nI have however, some concerns about the paper:\n\n1) the paper fails to cite and discuss relevant literature and claims to be the first one that is able to learn interpretable parts. \nI would like to see a discussion of the proposed approach compared to a variety of papers e.g.,:\n\n- Compositional hierarchies of Sanja Fidler\n- AND-OR graphs used by Leo Zhu and Alan Yuille to model objects\n- AND-OR templates of Song-Chun Zhu's group at UCLA \n\nThe claim that this paper is the first to discover such parts should be removed. \n\n2) The experimental evaluation is limited to very toy datasets. The papers I mentioned have been applied to real images (e.g., by using contours to binarize the images). \nI'll also like to see how good/bad the proposed approach is for classification in more well known benchmarks. \nA comparison to other generative models such as VAE, GANS, etc will also be useful.\n\n3) I'll also like to see a discussion of the relation/differences/advantages of the proposed approach wrt to sum product networks and grammars.\n\nOther comments:\n\n- the paper claims that after learning inference is feed-forward, but since message passing is used, it should be a recurrent network. \n\n- the algorithm and tech discussion should be moved from the appendix to the main paper\n\n- the introduction claims that compression is a prove for understanding. I disagree with this statement, and should be removed. \n\n- I'll also like to see a discussion relating the proposed approach to the Deep Rendering model. \n\n- It is not obvious how some of the constraints are satisfied during message passing. Also constraints are well known to be difficult to optimize with max product. How do you handle this?\n\n- The learning and inference algorithms seems to be very heuristic (e.g., clipping to 1, heuristics on which messages are run). Could you analyze the choices you make?\n\n- doing multiple steps of 5) 2) is not a single backward pass \n\nI'll reconsider my score in light of the answers", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Hierarchical compositional feature learning
["Miguel Lazaro-Gredilla", "Yi Liu", "D. Scott Phoenix", "Dileep George"]
We introduce the hierarchical compositional network (HCN), a directed generative model able to discover and disentangle, without supervision, the building blocks of a set of binary images. The building blocks are binary features defined hierarchically as a composition of some of the features in the layer immediately below, arranged in a particular manner. At a high level, HCN is similar to a sigmoid belief network with pooling. Inference and learning in HCN are very challenging and existing variational approximations do not work satisfactorily. A main contribution of this work is to show that both can be addressed using max-product message passing (MPMP) with a particular schedule (no EM required). Also, using MPMP as an inference engine for HCN makes new tasks simple: adding supervision information, classifying images, or performing inpainting all correspond to clamping some variables of the model to their known values and running MPMP on the rest. When used for classification, fast inference with HCN has exactly the same functional form as a convolutional neural network (CNN) with linear activations and binary weights. However, HCN’s features are qualitatively very different.
["Unsupervised Learning"]
https://openreview.net/forum?id=HJeqWztlg
https://openreview.net/pdf?id=HJeqWztlg
https://openreview.net/forum?id=HJeqWztlg&noteId=HyRYDfeEl
Under review as a conference paper at ICLR 2017HIERARCHICAL COMPOSITIONAL FEATURE LEARNINGMiguel L ́azaro-Gredilla, Yi Liu, D. Scott Phoenix, Dileep GeorgeVicariousSan Francisco, CA, USAfmiguel,yiliu,scott,dileep g@vicarious.comABSTRACTWe introduce the hierarchical compositional network (HCN), a directed generativemodel able to discover and disentangle, without supervision, the building blocksof a set of binary images. The building blocks are binary features defined hierar-chically as a composition of some of the features in the layer immediately below,arranged in a particular manner. At a high level, HCN is similar to a sigmoid beliefnetwork with pooling. Inference and learning in HCN are very challenging andexisting variational approximations do not work satisfactorily. A main contributionof this work is to show that both can be addressed using max-product messagepassing (MPMP) with a particular schedule (no EM required). Also, using MPMPas an inference engine for HCN makes new tasks simple: adding supervision infor-mation, classifying images, or performing inpainting all correspond to clampingsome variables of the model to their known values and running MPMP on therest. When used for classification, fast inference with HCN has exactly the samefunctional form as a convolutional neural network (CNN) with linear activationsand binary weights. However, HCN’s features are qualitatively very different.1 I NTRODUCTIONDeep neural networks coupled with the availability of vast amounts of data have proved verysuccessful over the last few years at visual discrimination (Goodfellow et al., 2014; Kingma &Welling, 2013; LeCun et al., 1998; Mnih & Gregor, 2014). A basic desire of deep architectures is todiscover the blocks –or features– that compose an image (or in general, a sensory input) at differentlevels of abstraction. Tasks that require some degree of image understanding can be performed moreeasily when using representations based on these building blocks.It would make intuitive sense that if we were to train one of the above models (particularly, thosethat are generative, such as variational autoencoders or generative adversarial networks) on imagescontaining, e.g. text, the learned features would be individual letters, since those are the buildingblocks of the provided images. In addition to matching our intuition, a model that realizes (from noisyraw pixels) that the building blocks of text are letters, and is able to extract a representation basedon those, has found meaningful structure in the data, and can prove it by being able to efficientlycompress text images. Figure 1: Features extracted by HCN. Left: from multiple images. Right: from a single image.1Under review as a conference paper at ICLR 2017However, this is not the case with existing incarnations of the above models1. We can see in Fig. 1the features recovered by the hierarchical compositional network (HCN) from a single image with nosupervision. They appear to be reasonable building blocks and are easy to find for a human. Yet weare not aware of any model that can perform such apparently simple recovery with no supervision.The HCN is a multilayer generative model with features defined at each layer. A feature (at a givenposition) is defined as the composition of features of the layer immediately below (by specifying theirrelative positions). To increase flexibility, the positions of the composing features can be perturbedslightly with respect to their default values (pooling). 
This results in a latent variable model, withsome of the latent variables (the features) being shared for all images while others (the pool states)are specific for each image.Comparing HCN with other generative models for images, we note that existing models tend tohave at least one of the following limitations: a) priors are not rich enough; typically, the sources ofvariation are not distributed among the layers of the network, and instead the generative model isexpressed as X=f(Y)+"whereYand"are two set of random variables, Xis the generated imageandf()is the network, i.e., the entire network behaves as a sophisticated deterministic function, b)the inference method (usually a separate recognition network) considers all the latent variables asindependent and does not solve explaining away, which leads to c) the learned features being notdirectly interpretable as reusable parts of the learned images.Although directed models enjoy important advantages such as the ability to represent causal semanticsand easy sampling mechanics, it is known that the “explaining away” phenomenon makes inferencedifficult in these models (Hinton et al., 2006). For this reason, representation learning efforts havelargely focused on undirected models (Salakhutdinov & Hinton, 2009), or have tried to avoid theproblem of explaining away by using complementary priors (Hinton et al., 2006).An important contribution of this work is to show that approximate inference using max-product mes-sage passing (MPMP) can learn features that are composable, interpretable and causally meaningful.It is also noteworthy that unlike previous works, we consider the weights (a.k.a. features) to be latentvariables and not parameters. Thus, we do not use separate expectation-maximization (EM) stages.Instead, we perform feature learning and pool state inference jointly as part of the same messagepassing loop.When augmented with supervision information, HCN can be used for classification, with inferenceand learning still being taken care of by a largely unmodified MPMP procedure. After training,discrimination can be achieved via a fast forward pass which turns out to have the same functionalform as a convolutional neural network (CNN).The rest of the paper is organized as follows: we describe the HCN model in Section 2; Section 3describes learning and inference in the single layer and multilayer HCNs; Section 4 tests the HCNexperimentally and we conclude with a brief discussion in Section 5.2 T HEHIERARCHICAL COMPOSITIONAL NETWORKThe HCN model is a discrete latent variable model that generates binary images by composing partswith different levels of abstraction. These parts are shared across all images. Training the modelinvolves learning such parts from data as well as how to combine them to create each concrete image.The HCN model can be expressed as a factor graph consisting only of three types of factors: AND,OR and POOL. These perform the obvious binary operations and will be defined more preciselylater in this section. The flexibility of the model allows training in supervised, semisupervisedand unsupervised settings, including missing image data. Once trained, the HCN can be used forclassification, missing value completion (pixel inference), sparsification, denoising, etc. See Fig. 2for a factor graph of the complete model. Additional details of each layer type are given in Fig. 4.At a high level, the HCN consists of a class layer at the top followed by alternating convolutionallayers and pooling layers. 
Inside each layer there is a sparsification , arepresentation andweights1Discriminative models find features that are good for classification, but not for generation (the trainingobjective is not constrained enough). Existing generative models also fail at recovering the building blocks of animage because they either a) mix positive and negative weights (which turns out to be critical for them beingtrainable via backpropagation) or b) lack inference mechanisms able to perform explaining away.2Under review as a conference paper at ICLR 2017Noisy channelPooling layerFeature layerPooling layerFeature layerNoisy channelPooling layerFeature layerPooling layerFeature layerNoisy channelPooling layerFeature layerPooling layerFeature layerFigure 2: Factor graph of the HCN model when connected to multiple images Xn. The weights arethe only variables that entangle multiple images. The top variables are clamped to 1 and the bottomvariables are clamped to Xn. Additional details of each layer type are given in Fig. 4.(a.k.a. features), each of which is a multidimensional array of latent variables. The class layer selectsa category, and within it, which template is going to be used, producing the top-level sparsification. Asparsification is simply an encoding of the representation. A sparsification encodes a representationby specifying which features compose it and where they should be placed. The features are in turnstored in the form of weights . Convolutional layers deterministically combine the sparsification andthe weights of a layer to create its representation. Pooling layers randomly perturb the position of theactive elements (within a local neighborhood), introducing small variations in the process.2.1 B INARY CONVOLUTIONAL FEATURE LAYER (SINGLE -LAYER HCN)This layer can perform non-trivial feature learning on its own. We refer to it as a single-layer HCN.See Section 4.1 for the corresponding experiments.In this case, since there is no additional top-down structure, a binary image is created by placingfeatures at random locations of an image. Wherever two features overlap, they are ORed, i.e., if apixel of the binary image is activated due to two features, it is simply kept active. We will call Wtothe features, Sto the sparsification of the image (locations at which features are placed in that image)andXto the image. All of these variables are multidimensional binary arrays.The values of each of the involved arrays for a concrete example with a single-channel image is givenin Fig. 3 (to display Swe maximize over f). The corresponding diagram is shown in Fig. 4.In practice, each image Xis possibly multichannel, so it will have size FXHXWX, where thefirst dimension is the number of channels in the image and the other two are its height and width. Shas size FSHSWS, where the first dimension is the number of features and the other two areits height and width. We refer to an entry of SnasSfrc. Setting an entry Sfrc= 1corresponds toplacing feature fat position (r;c)in the final image X. The features themselves are stored in W,which has size FbelowWFWHWWW, where FW=FSandFbelowW =FX. I.e., each feature is a3Under review as a conference paper at ICLR 2017(a) ImageX (b) Sparsification S (c) FeaturesW (d) Reconstruction RFigure 3: Unsupervised analysis of image Xby a standalone convolutional feature layer of HCN.small 3D array containing one of the building blocks of the image. 
Those are placed in the positionsspecified by S, and the same block can be used many times at different positions, hence calling thislayer convolutional2.We can fully specify a probabilistic model for a binary images by adding independent priors overthe entries of SandWand connecting those to Xthrough a binary convolution and a noisy channel.The complete model isp(S) =Yfrcp(Sfrc) =YfrcpSfrcS(1pS)1Sfrcp(W) =Yafrcp(Wafrc) =YafrcpWafrcW (1pW)1Wafrc(1)p(XjR) =Yarcpnoisy(XarcjRarc)withR= bconv(S;W )andpnoisy(1j0) =p10;pnoisy(0j1) =p01;which depends on four scalar parameters pS;pW;p01;p10, controlling the density of features in theimage, of pixels in each feature, and the noise of the channel, respectively. The indexes a;f;r;c runover channels, features, rows and columns, respectively.We have used the binary convolution operator R= bconv(S;W ). A binary convolution performsthe same operation as a normal convolution, but operates on binary inputs and truncates outputsabove 1. Our latent variables are arranged as three- and four-dimensional arrays, so we defineR= bconv(S;W )to meanRa;:;:= min(1;Pfconv2D(Sf;:;:;Wa;f;:;:))where conv2D(;)is theusual 2D convolution operator, RandSare binary 3D arrays and Wis a binary 4D arrays. Theoperator min(1;)truncates values above 1 to 1, performing the ORing of two overlapping featurespreviously mentioned.The binary convolution (and hence model (1)) can be expressed as a factor graph, as seen in Fig. 4.The AND factor can be written as AND (bjt1;t2)and takes value 0 when the bottom variable bis thelogical AND of the two top variables t1andt2. It takes value1 in any other case. The OR factor,OR(bjt1:::;t M)takes value 0 when the bottom variable bis the logical OR of the Mtop variablest1:::;t M. It takes value1 in any other case.When this layer is not used in standalone mode, but inside a multilayer HCN, the variables Rareconnected to the pooling layer immediately below (instead of being connected to the image Xthroughthe noisy channel) and the variables Sare connected to the pooling layer immediately above (insteadof being connected to the prior).2.2 T HE CLASS LAYERWe assume for now that a single class is present in each image. We can then writelogp(c1;:::;c K) =POOL (c1;:::;c Kj1)whereckare mutually exclusive binary variables representing which of the Kcategories is present.In general, we define POOL (b1;:::;b Mjt= 1) =logMwhen exactly one of the bottom variablesb1;:::;b mtakes value 1 (we say that the pool is active), and POOL (b1;:::;b Mjt= 0) = 0 whenbm= 08m(the pool is off). It takes value 1 in any other case.2Additionally, the convolution implies the relations H X=HW+HS1and W X=WW+WS14Under review as a conference paper at ICLR 2017ABAAB(a) Binary convolutionR4R1s3R2R3s2s1w2ORw1AND (b) Feature layerR1R2R3s1s2s3POOLUOR (c) Pooling layerFigure 4: Diagrams of binary convolution and factor graph connectivity for 1D image.Within each category, we might have multiple templates. Each template corresponds to a differentvisual expression of the same conceptual category. For instance, if one category is furniture, wecould have a template for chair and another template for table. Each category has binary variablesrepresenting each of the Jtemplates,sjkwithj2[1:::J]. If a category is active, exactly one of itstemplates will be active. 
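As a concrete illustration of the binary convolution operator R = bconv(S, W) defined above, here is a minimal NumPy/SciPy sketch; the function and variable names are ours, and the shapes follow the conventions in the text (the output size matches H_X = H_W + H_S - 1).

```python
import numpy as np
from scipy.signal import convolve2d

def bconv(S, W):
    """Binary convolution R = bconv(S, W).

    S: binary array of shape (F_S, H_S, W_S), the sparsification.
    W: binary array of shape (F_X, F_S, H_W, W_W), the features.
    Returns R of shape (F_X, H_S + H_W - 1, W_S + W_W - 1): for each output
    channel, the 2D convolutions of every feature map with the corresponding
    feature are summed and then truncated at 1 (overlapping features are ORed).
    """
    F_X, F_S, H_W, W_W = W.shape
    _, H_S, W_S = S.shape
    R = np.zeros((F_X, H_S + H_W - 1, W_S + W_W - 1), dtype=np.int64)
    for a in range(F_X):
        acc = np.zeros((H_S + H_W - 1, W_S + W_W - 1), dtype=np.int64)
        for f in range(F_S):
            acc += convolve2d(S[f], W[a, f], mode="full")
        R[a] = np.minimum(acc, 1)  # min(1, .) implements the OR of overlaps
    return R
```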
The joint probability of the templates is thenlogp(SLjc1;:::;c K) =Xklogp(s1k;:::;s Jkjck) =XkPOOL (s1k;:::;s Jkjck)where these JKvariables are arranged as a 3D array of size 11JKcalledSLwhich formsthe top-level sparsification of the template. A sample from SLwill always have exactly one elementset to 1 and the rest set to 0. Superscript Lis used to identify the layer to which a variable belongs.Since there are Llayers,SLis the top layer sparsification.2.3 T HE POOLING LAYERIn a multilayer HCN, feature layers and pooling layers appear in pairs. Inside layer `, the poolinglayer`is placed below the feature layer `.Since the convolutional feature layer is deterministic, any variation in the generated image mustcome from the pooling layers (and the final noisy channel). Each pooling layer shifts the positionof the active units in R`to produce the sparsification S`1in the layer below. This shifting is local,constrained to a region of size3HPWP1, the pooling window. When two or more active unitsinR`are shifted towards the same position in S`1, they result in a single activation, so the numberof active units in S`1is equal or smaller than the number of activations in R`.The above description should be enough to know how to sample S`1fromR`, but to provide arigorous probabilistic description, we need to introduce the intermediate binary variables Ur;c;f;r;c; ,which are associated to a shift r;cof the element R`frc. The HPWPintermediate variablesassociated to the same element R`frcare noted as U`:;:;frc. Since an element can be shifted to a singleposition per realization and only when it is active, the elements in U`:;:;frcare grouped into a poollogp(U`jR`) =Xfrclogp(U`:;:;frcjR`frc) =XfrcPOOL (U`:;:;frcjR`frc)and thenS`1can be obtained deterministically from U`by ORing the HPWPvari-ables ofUthat can potentially turn it on, logp(S`1jU`) =Pfr0c0logp(S`1fr0c0jU`) =Pfr0c0OR(S`1fr0c0jfUr;c;f;r;cgr0:r+r;c0:c+c):i.e., the above expression evaluates to 0 if theabove OR relations are satisfied and to 1 if they are not.3The described pooling window only allows for spatial perturbations, i.e., translational pooling. A moregeneral pooling layer would also pool in the third dimension (Goodfellow et al., 2013), across features, whichwould introduce richer variation and also impose a meaningful order in the feature indices. Though we donot pursue that option in this work, we note that this type of pooling is required for a rich hierarchical visualmodel. In fact, the pooling over templates that we special-cased in the description of the class layer would fit asa particular case of this third-dimension pooling.5Under review as a conference paper at ICLR 20172.4 J OINT PROBABILITY WITH MULTIPLE IMAGESThe observed binary image Xcorresponds to the bottommost sparsification4S0after it has traversed,element by element, a noisy channel with bit flip probabilities p(Xfrc= 1jS0frc= 0) =p10<0:5andp(Xfrc= 0jS0frc= 1) =p01<0:5. This defines p(XjS0).Finally, if we consider the weight variables to be independent Bernoulli variables with a fixed per-layer sparse prior p`Wthat are drawn once and shared for the generation of all images, we can writethe joint probability of multiple images, latent variables and weights aslogp(fXn;Hn;CngNn=1;fW`gL`=1) =LX`=1logp(W`) +NXn=1logp(XnjS0n) + logp(SLnjCn) + logp(Cn)+NXn=1LX`=1logp(S`1njU`n) + logp(U`njR`n) + logp(R`njS`n;W`)where we have collected all the category variables fckgof each image in Cnand the remaining latentvariables in Hnand for convenience. 
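The generative role of the pooling layer described above can be sketched as follows; boundary handling and the exact window convention are our simplifications, not the paper's specification.

```python
import numpy as np

def pool_perturb(R, Hp, Wp, rng=None):
    """Randomly shift every active unit of R within a local Hp x Wp window
    (translational pooling). Units shifted to the same position are ORed,
    so the result has at most as many activations as R."""
    rng = np.random.default_rng() if rng is None else rng
    F, H, W = R.shape
    S_below = np.zeros_like(R)
    for f, r, c in zip(*np.nonzero(R)):
        dr = rng.integers(-(Hp // 2), Hp // 2 + 1)   # vertical shift within the pool window
        dc = rng.integers(-(Wp // 2), Wp // 2 + 1)   # horizontal shift within the pool window
        rr = int(np.clip(r + dr, 0, H - 1))          # clip at the border (our choice)
        cc = int(np.clip(c + dc, 0, W - 1))
        S_below[f, rr, cc] = 1                        # collisions OR into a single activation
    return S_below
```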
Each image uses its own copy of the latent variables, but theweights are shared across all images, which is the only coupling between the latent variables.The above expression shows how, in addition to factorizing over observations (conditionally on theweights), there is a factorization across layers. Furthermore, the previous description of each of theselayers implies that the entire model can be further reduced to small factors of type AND, OR andPOOL, involving only a few local variables each.Since we are interested in a point estimate of the features, given the images fXngNn=1and a (possiblyempty)5subset of the labels fCngNn=1, we will attempt to recover the maximum a posteriori6(MAP)configuration over features, sparsifications, and unknown labels. Note that for classification, selectingfW`gL`=1by maximizing the joint probability is very different from selecting it by maximizing adiscriminative loss of the type logp(fCngNn=1jfXngNn=1;fW`gL`=1), since in this case, all the priorinformation p(X)about the structure of the images is lost. This results in more samples beingrequired to achieve the same performance, and less invariance to new test data.Once learning is complete, we can fix fW`gL`=1, thus decoupling the model for every image, and useapproximate MAP inference to classify new test images, or to complete them if they include missingdata (while benefiting from the class label if it is available).Even though we only consider the single-class-per-image setting, the compositional property of thismodel means that we can train it on single-class images and then, without retraining, change the classlayer to make it generate (and therefore, recognize) combinations of classes in the same image.3 L EARNING AND INFERENCEWe will consider first the simpler case of a single-layer HCN, as described in Section 2.1. Then wewill tackle inference in the multilayer HCN.3.1 L EARNING IN SINGLE -LAYER HCNIn this case, for model (1), we want to findS;W= arg maxS;Wp(XjS;W )p(S)p(W): (2)This is a challenging problem even in simple cases. In fact, it can be easily shown that boolean matrixfactorization (BMF), a.k.a. boolean factor analysis, arises as a particular case of (2)in which the4Alternatively, one could introduce the noisy channel between R0andX, but that would be equivalent to ourformulation using a pooling window of size 111at the bottommost layer.5The model was described as unsupervised, but the class is represented in latent variable Cn, which can beclamped to its observed value, if it is available.6Note that we are performing MAP inference over discrete variables, where concerns about the arbitrarinessof MAP estimators (see e.g., (Beal, 2003) Chapter 1.3) do not apply.6Under review as a conference paper at ICLR 2017heights and widths of all the involved arrays are set to one. BMF is a decades-old problem proved tobe NP-complete in (Stockmeyer, 1975) and with applications in machine learning, communicationsand combinatorial optimization. Another related problem is non-negative matrix factorization (NMF)(Lee & Seung, 1999), but NMF is additive instead of ORing the contributions of multiple features,which is not desired here.One of the best-known heuristics to address BMF is the Asso (Miettinen et al., 2006). Unfortunately,it is not clear how to extend it to solve (2)because it relies on assumptions that no longer hold inthe present case. 
The variational bound of (Jaakkola & Jordan, 1999) addresses inference in thepresence of a noisy-OR gate and was successfully used in by ( ˇSingliar & Hauskrecht, 2006) to obtainthe noisy-OR component analysis (NOCA) algorithm. NOCA addresses a very similar problem to(2), the two differences being that a) the weight values are continuous between 0 and 1 (instead ofbinary) and b) there is no convolutional weight sharing among the features. NOCA can be modifiedto include the convolutional weight sharing, but it is not an entirely satisfactory solution to the featurelearning problem as we will show. We observed that the obtained local maxima, even after significanttweaking of parameters and learning schedule, are poor for problems of small to moderate size.We are not aware of other existing algorithms that can solve (2)for medium image sizes. The model(1)is directly amenable to mean-field inference without requiring the additional lower-bounding usedin NOCA, but we experimented with several optimization strategies (both based in mean field updatesand gradient-based) and the obtained local maxima were consistently worse than those of NOCA.In (Ravanbakhsh et al., 2015) it is shown that max-product message passing (MPMP) produces state-of-the-art results for the BMF problem, improving even on the performance of the Asso heuristic.We also address problem (2)using MPMP. Even though MPMP is not guaranteed to converge, wefound that with the right schedule, even with very slight or no damping, good solutions are foundconsistently.Model (1)can be expressed both as a directed Bayesian network or as a factor graph using onlyAND and OR factors, each involving a small number of local binary variables. Finding features andsparsifications can be cast as MAP inference7in this factor graph.MPMP is a local message passing technique to perform MAP inference in factor graphs. MPMP isexact on factor graphs without loops (trees). In loopy models, such as (1), it is an approximation withno convergence guarantees8, although convergence can be often attained by using some damping0<1. See Appendix C for a quick review on MPMP and Appendix D for the message updateequations required for the factors used in this work. Unlike Ravanbakhsh et al. (2015) which usesparallel updates and damping, we update each AND-OR factor9in turn, following a random in asequential schedule. This results in faster convergence with less or no damping.3.2 L EARNING IN MULTILAYER HCN ( UNSUPERVISED ,SEMISUPERVISED ,SUPERVISED )Despite its loopiness, we can also apply MPMP inference to the full, multilayer model and obtaingood results. The learning procedure iterates forward and backward passes (a precise description canbe found in Algorithm 1 below). In a forward pass, we proceed updating the bottom-up messages tovariables, starting from the bottom of the hierarchy (closer to the image) and going up to the classlayer. In a backward pass, we update the top-down messages visiting the variables in top-down order.Messages to the weight variables are updated only in the forward pass. We use damping only inthe update of the bottom-up messages from a pooling layer during the forward pass. The AND-ORfactors in the binary convolutional layer form trees, so we treat each of these trees as a single factor,since closed form message updates for them can be obtained. Those factors are updated once inrandom order inside each layer, i.e., sequentially. The pools at the class layer also from a tree, sowe also treat them as a single factor. 
The message updates for AND, OR and POOL factors followtrivially from their definition and are provided in Appendix D.7Note that we do not marginalize the latent variables (or the weights), but find their MAP configuration givena set of images. The sparse priors on the weights and the sparsification act as regularizers and prevent overfitting.8MPMP works by iterating fixed point equations of the dual of the Bethe free energy in the zero-temperaturelimit. Convexified dual variants (see Appendix C) are guaranteed to converge, but much slower.9Each OR factor is connected to several AND factors which together form a tree. We update the incomingand outgoing messages of the entire tree, since they can be computed exactly.7Under review as a conference paper at ICLR 2017After enough iterations, weights are set to 1 if their max-marginal difference is positive and to 0otherwise. This hard assignment converts some of the AND factors into a pass-through and the restin disconnections. Thus the weight assignments define the connectivity between S`andR`on a newgraph without ANDs. This is the learned model, that we can use to perform inference with with onnew test images.3.3 I NFERENCE IN MULTILAYER HCNTypical inference tasks are classification and missing value imputation. For classification, we findthata single forward pass seems good enough and further forward and backward passes are notneeded (see Algorithm 1 for the description of the forward and backward passes). For missingvalue imputation a single forward and top-down pass is enough. In order to achieve higher qualityexplaining-away10, we use a top-down pass instead of a backward pass. A top-down pass differs froma backward pass in that we replace step 5) with multiple alternating executions of steps 5) and 2).Therefore, it is not strictly a backward pass, but it proceeds top-down in the sense that once a layerhas been fully processed, it is never visited again.Interestingly, the functional form of the forward pass of an HCN is the same as that of a standardCNN, see Section 3.4, and therefore, an actual CNN can be used to perform a fast forward pass.Algorithm 1 Learning in Hierarchical Compositional NetworksInput: Hyperparameters p01;p10;fp`WgL`=1, datafXn;CngNn=1and network structure (pool and weightsizes for each layer)InitInitialize bottom-up messages and messages to fW`gto zero. Initialize the top-down messages to 1.Initialize messages to Wfrom its prior uniformly at random in (0:9pW;pW)to break symmetry. Set constantbottom-up messages to S0:m(S0frc) = (k1k0)Xfrc+k0withk1= log1p01p10andk0= logp011p10repeatForward pass:for`in1;:::;L do1) Update messages from OR to U`in parallel2) Update messages from POOL to R`in parallel with damping 3) Update messages from AND-OR to W`andS`sequentially in random orderend forUpdate message from all class layer POOLs to SL. Hard assign Cnif label is available.Backward pass:for`inL;:::; 1do4) Update messages from AND-OR to R`sequentially in random order5) Update messages from POOL to U`in parallel6) Update messages from OR to S`1in parallelend forCompute max-marginals by summing incoming messages to each variableuntil Fixed point or iteration limitreturn Max-marginal differences of S`,W`andR`3.4 A BOUT THE HCN FORWARD PASS3.4.1 F UNCTIONAL CORRESPONDENCE WITH CNNAfter a single forward pass in an HCN (considering that the weights are known, after training), weget an estimate of the MAP assignment over categories. 
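The constant bottom-up messages to S^0 set in the Init step of Algorithm 1 can be computed as below. The grouping of the constants is our reading of the garbled text, namely k1 = log((1 - p01)/p10) and k0 = log(p01/(1 - p10)), i.e., the per-pixel log-likelihood ratios of the noisy channel.

```python
import numpy as np

def bottom_up_evidence(X, p01, p10):
    """Constant bottom-up messages to S^0 (max-marginal differences) induced
    by the noisy channel: m(S0_frc) = (k1 - k0) * X_frc + k0, which equals
    k1 = log((1 - p01) / p10) where a pixel is on and
    k0 = log(p01 / (1 - p10)) where it is off."""
    k1 = np.log((1.0 - p01) / p10)
    k0 = np.log(p01 / (1.0 - p10))
    return (k1 - k0) * np.asarray(X, dtype=float) + k0
```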
In practice, this assignment seems goodenough for classification and further forward and backward passes are not needed.The functional form of the first forward pass can be simplified because of the initial strongly negativetop-down messages. Under these conditions, the message update rules applied to the pooling layersof the HCN have exactly the same functional form11as the max-pooling layer in a standard CNN.Similarly, applying the message update rule to the convolutional layers of the HCN —when the10To avoid symmetry problems, instead of making the distribution of each POOL perfectly uniform, wecan introduce slight random perturbations while keeping the highest probability value at the center of the pool.Doing so speeds up learning and favors centered backward pass reconstructions in the case of ties.11See the Appendix D for the update rules of the messages of each type factor.8Under review as a conference paper at ICLR 2017weights are known— has the same functional form as performing a standard (not binary) convolutionof the bottom-up messages with the weights, just like in a standard CNN. At the top, the max-marginalover categories will select the one with the template with the largest bottom-up message. This can berealized with max-pooling over the feature dimension as done in (Goodfellow et al., 2013), or closelyapproximated using a fully connected layer and a softmax, as in more standard CNNs.Simply put, the binary weights learned by an HCN can be copied to a standard CNN with linearactivations and they will both produce the same classification results when we applied to the bottom-upmessages (which are a positive scaling of the input data Xplus a constant).3.4.2 I NVARIANCE TO NOISE LEVELConsider we generate two data sets with the HCN model using the same weights but different bit-flipprobabilities. If those probabilities are known, would we use different classifiers for each dataset? Ifwe use a single forward pass, changing p01andp10produces a different monotonic transformation ofall the bottom-up messages at every layer of the hierarchy, but the selected category, which dependsonly on which variable has the largest value , will not change. So, with a single-pass classifier, ourclass estimation does not change with the noise level. This has the important implication that an HCNdoes not need to be trained with noisy data to classify noisy data. It can be trained with clean data(where there is more signal and learning parts is easier) and used on noisy data without retraining.4 E XPERIMENTSIn the following, we experimentally characterize both the single-layer and multilayer HCN.4.1 S INGLE -LAYER HCNWe create several synthetic (both noisy and noiseless) images in which the building blocks –orfeatures– are obvious to a human observer and check the ability of HCN to recover the them. Thetask is deceptively simple, and the existing the state of the art at this task, NOCA, is unable to solveseveral of our examples. Since the number of free parameters of the model is so small (3 in the caseof a symmetric noisy channel), these can be easily explored using grid search and selected usingmaximum likelihood. The sensitivity of the results to these parameters is small.HCN only requires straightforward MPMP with random order over the factors. For NOCA, initializingthe variational posterior over the latent sources and choosing how to interleave the updates of thisposterior with the update of the additional variational parameters ( ˇSingliar & Hauskrecht, 2006) istricky. 
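To illustrate the functional correspondence of Section 3.4.1, here is a rough sketch of a single recognition pass that reuses learned binary weights inside an ordinary real-valued convolution followed by max-pooling; the layer bookkeeping (stacking several conv/pool pairs, grouping template scores into classes) is schematic and our own.

```python
import numpy as np
from scipy.signal import convolve2d

def max_pool(A, Hp, Wp):
    """Non-overlapping max-pooling of a (F, H, W) array (crops any remainder)."""
    F, H, W = A.shape
    H2, W2 = H // Hp, W // Wp
    A = A[:, :H2 * Hp, :W2 * Wp].reshape(F, H2, Hp, W2, Wp)
    return A.max(axis=(2, 4))

def forward_pass_scores(msg, W1, pool=(3, 3)):
    """One HCN-style forward pass on bottom-up messages `msg` (C, H, W):
    a real-valued convolution with the binary weights W1 (F, C, h, w),
    followed by max-pooling; returns per-feature score maps. Repeating this
    for higher layers and taking the largest template score at the top would
    mimic the full classifier; that bookkeeping is omitted here."""
    F, C = W1.shape[0], W1.shape[1]
    maps = []
    for f in range(F):
        acc = sum(convolve2d(msg[c], W1[f, c], mode="valid") for c in range(C))
        maps.append(acc)
    return max_pool(np.stack(maps), *pool)
```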
For best results, during each E step we repeated the following 10 times: update the variationalparameters for 20 iterations and then update the variational posterior (which is a single closed formupdate). The M update also required an inner loop of variational parameter updating.The performance of HCN and NOCA can be assessed visually in Fig. 5. Column (a) shows eachinput image (these are single-image datasets) and the remaining columns show the features andreconstructions obtained by HCN and NOCA. In some of the input images we have added noisethat flips pixels with 3% probability. For HCN (respectively NOCA), we binarize all the beliefs(respectively, variational posteriors) from the [0;1]range by thresholding at 0.5 and then perform abinary convolution to obtain the reconstruction. Because noise is not included in this reconstruction,a cleaner image may be obtained, resulting in unsupervised denoising (rows 1 and 4 of Fig. 5).For a quantitative comparison, refer to Tab. 1. One algorithm-independent way to measure perfor-mance in the feature learning problem is to measure compression. It is known that to transmit a longsequence of Nbits which are 1 with probability p, we only need to transmit NH(p)bits with anoptimal encoding, where His the entropy. Thus sparse sequences compress well. In order to transmitthese images without loss, we need to transmit either one sequence of bits (encoding the imageitself) or three sequences of bits, one encoding the features, another encoding the sparsification and alast one encoding the errors between the reconstruction and the original image. Ideally, the secondmethod is more efficient, because the features are only sent once and the sparsification and errorssequences are much sparser than the original image. The ratio between the two is shown togetherwith running time on a single CPU. Unused features are discarded prior to computing compression.9Under review as a conference paper at ICLR 2017(a) Input image X (b) HCNW (c) HCNR (d) NOCAW (e) NOCARFigure 5: Features extracted by HCN and NOCA and image reconstructions for several datasets. Bestviewed on screen with zoom.(a) ImageX1 (b) ImageX2 (c) Batch HCN W (d) Online HCN W (e) Online HCN WFigure 6: Online learning. (a) and (b) show two sample input images; (c) and (d) show the featureslearned by batch and online HCN using 30 input images and 100 epochs; (e) shows the featureslearned by online HCN using 3000 input images and 1 epoch.10Under review as a conference paper at ICLR 2017Two bars Symbols Clean letters Noisy letters Textcomp. time comp. time comp. time comp. time comp. timeNOCA 84% 0.67 m 85% 92 m 98% 662 m 102% 716 m 84% 1222 mHCN 83% 0.07 m 11% 0.42 m 38% 25 m 73% 24 m 28% 31 mTable 1: Comp.: E(X)=(E(S)+E(W)+E(XR)), whereEis the encoding cost. Time: minutes.4.2 O NLINE LEARNINGThe above experiments use a batch formulation, i.e., consider simultaneously all the available trainingdatafXngN1. Since the amount of memory required to store the messages for MPMP scales linearlywith the training data, this imposes a practical limit in the number of images that can be processed.In order to overcome this limit, we also consider a particular message update schedule in whichthe messages outgoing from factors connected to each image and sparsification Xn;Snare updatedonly once and therefore, after an image has been processed, can be discarded, since they are neverreused. This effectively allows for online processing of images without memory scaling issues. 
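For the compression measure of Table 1, here is a small sketch under our reading of the garbled table header: the cost of the factored code (sparsification, features, and reconstruction errors) relative to encoding the image directly, with E(.) = N*H(p) bits for a binary array of length N and density p.

```python
import numpy as np

def encoding_bits(B):
    """Optimal cost, in bits, of a binary array with i.i.d. Bernoulli entries
    of empirical density p: N * H(p)."""
    B = np.asarray(B).ravel()
    p = B.mean()
    if p == 0.0 or p == 1.0:
        return 0.0
    return B.size * (-(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p)))

def compression_ratio(X, S, W, R):
    """Factored-code cost over direct-code cost (lower is better):
    (E(S) + E(W) + E(X xor R)) / E(X)."""
    errors = np.logical_xor(X, R).astype(int)
    return (encoding_bits(S) + encoding_bits(W) + encoding_bits(errors)) / encoding_bits(X)
```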
Twomodifications are needed in practice for this to work well: first, instead of processing only oneimage at a time, better results are obtained if the factors of multiple images (forming a minibatch) areprocessed in random order. Second, a forgetting mechanism must be introduced to avoid accumulatingan unbounded amount of evidence from the processed minibatches.In detail, the beliefs of the variables Ware initialized uniformly at random in the interval (0:9pW;pW)(we call these initial beliefs b(0)prior(Wafrc)) and the beliefs of the variables fSngN1are initialized topS. The initial outgoing messages from all the AND-OR factors are set to 0. Since each factoris only processed once, this allows implementing MPMP without ever having to store messagesand only requiring to store beliefs. After processing the first minibatch using MPMP (with nodamping), we call the resulting belief over each of the weights b(0)post(Wafrc)(as it standard for MPMPof binary variables, beliefs are represented using max-marginal differences in log space). Insteadof processing the second minibatch using b(0)post(Wafrc)as the initial belief, we use b(1)prior(Wafrc) =b(0)post(Wafrc) + (1)b(0)prior(Wafrc), i.e., we “forget” part of the observed evidence, substituting itwith the prior. This introduces an exponential weighing in the contribution of each minibatch. Theforgetting factor is 2(0;1]specifies the amount of forgetting. When = 1this reduces to normalMPMP (no forgetting), when = 0, we completely forget the previous minibatch and process thenew one from scratch.Fig. 6 illustrates online learning. HCN is shown 30 small images containing 5 randomly chosen andrandomly placed characters with 3% flipping noise (see Fig. 5.(a) and (b) for two examples). Theyare learned in different manners. Fig. 5.(c): as a single batch with damping = 0:8and using 100epochs (each factor is updated 100 times); Fig. 6.(d): with minibatches of 5 images, no damping,= 0:95and using 100 epochs; Fig. 6.(e): with minibatches of 5 images, no damping, = 0:95,using a single epoch, but using 3000 images, so that running time is the same.4.3 M ULTI -LAYER HCN: SYNTHETIC DATAWe create a dataset by combining two traits: a) either a square (with four holes) or a circle and b)either a forward or a backward diagonal line. This results in four patterns, which we group in twocategories, see Fig. 7.(a). Categories are chosen such that we cannot decide the label of an imagebased only on one of the traits. The position of the traits is jittered within a 33window, and aftercombining them, the position of the individual pixels is also jittered by the same amount. Finally,each pixel is flipped with probability 103. This sampling procedure corresponds a 2-layer HCNsampling for some parameterization. We generate 100 training samples and 10000 test samples.4.3.1 U NSUPERVISED LEARNINGWe train the HCN as described in Section C on the 100 training data samples, not using any labelinformation. We do set the architecture of the network to match the architecture generating the data.There are four hyperparameters in this model, p01;p10;p1W;p2W. Their selection is not critical. We11Under review as a conference paper at ICLR 2017will choose them to match the generation process. MAP inference does discover and disentanglealmost perfectly the compositional parts at the first and second layers of the hierarchy, see Figs. 7.(b)and 8.(a). In 8.(a), rows correspond with templates and columns correspond to each of the featuresof the first layer. 
We can see that the model has “understood” the data and can be used to generatemore samples from it. Performing inference on this model is very challenging. We are not aware ofany previous method that can learn the features of this simple dataset with so few samples. In otherexperiments we verified that, using local message passing as opposed to gradient descent was criticalto successfully minimize our objective function. Results with the quality of Figs. 7.(b) and 8.(a) wereobtained in every run of the algorithm. Running time is 7 min on a single CPU.We can now clamp the discovered weights on both layers and use the fast forward pass to classifyeach training image as belonging to one of the four discovered templates (i.e., cluster them). Wecan even classify the test images as belonging to one of the four templates. When doing this, all theimages in the training set get assigned to the right template and only 60 out of 10000 images in thetest set do not get classified in the right cluster. This means that if we had just 4 labeled images, onefrom each cluster, we could perform 4-class minimally-supervised classification with just 0.6% error.Finally, we run a single forward-backward pass of the inference algorithm on a test image withmissing pixels. We show the inferred missing pixels in Fig. 7.(c). See also footnote 10.4.3.2 S UPERVISED LEARNINGNow we retrain the model using label information. This results in the same weights being found, butthis time the templates are properly grouped in two classes, as shown in Fig. 8.(a). Classification erroron the test set is very low, 0.07%. We now compare the HCN classification performance with that ofa CNN with the same functional form but trained discriminatively and with a standard CNN withReLU activations, a densely connected layer and softmax activation. We minimize the crossentropyloss function with L2regularization on the weights. The test errors are respectively 0.5% and 2.5%,much larger than those of HCN. We then consider versions of our training set with different levels ofpixel-flipping noise. The evolution of the test error is shown in Fig. 8.(c). For the competing methodswe needed many random restarts to obtain good results. Their regularization parameter was chosenbased on the test set performance.4.4 M ULTI -LAYER HCN: MNIST DATAWe turn now to a problem with real data, the MNIST database (LeCun et al., 1998), which contains60000/10000 training/testing images of size 2828. We want to generalize from very few samples,so we only use the first 40 digits of each category to train. We pre-process each image with afixed set of 16 oriented filters, so that the inputs are a 16-channel image. We use a 2-layer HCNwith 32 templates per class and 64 lower level features of size 2626and two layers of 33pooling,p1W= 0:001;p2W= 0:05. These values are set a priori, not optimized. Then we test onboth the regular MNIST training set and different corrupted versions12of it (same preprocessing12See Appendix E for examples of each corruption type.(a) 16 training samples and labels (b)W1, no supervision (c) Missing value imputationFigure 7: Samples from synthetic data and results from unsupervised learning tasks.12Under review as a conference paper at ICLR 2017(a) Supervised, unsupervised(top, bottom) W2(b)W1, discriminative training10-310-210-1Noise level in the input image0.000.050.100.150.200.250.300.350.400.45Test errorGenerative HCNDiscriminative HCNCNN (c) Effect of increased noise levelFigure 8: Discriminative vs. generative training and supervised vs. 
unsupervised generative training.(a) LearnedW1by HCN (b) LearnedW2by HCNCorruption HCN CNNNone 11.15% 9.53%Noise 20.69% 39.28%Border 16.97% 17.78%Patches 14.52% 16.27%Grid 68.52% 82.69%Line clutter 37.22% 55.77%Deletion 22.03% 25.05%(c) Test error with different cor-ruptionsFigure 9: First layer of weights learned by HCN and CNN on the preprocessed MNIST dataset.and no retraining). We follow the same preprocessing and procedure using a regular CNN withdiscriminative training and explore different regularizations, architectures and activation types, onlyfixing the pooling sizes and number of layers to match the HCN. We select the parameterization thatminimizes the error on the clean test set. This CNN uses 96 low level features. Results for all testsets are reported on Fig. 9.(c). It can be seen that HCN generalizes better. The weights of the firstlayer of the HCN after training are shown in Fig. 9.(a). Notice how HCN is able to discover reusableparts of digits.The training time of HCN scales exactly as that of a CNN. It is linear in each of its architecturalparameters: Number of images, number of pixels per image, features at each layer, size of thosefeatures, etc. However, the forward and backward passes of an HCN are more complex and optimizedcode for them is not readily available as it is for a CNN, so a significant constant factor separatesthe running times of both. Training time for MNIST is around 17 hours on a single CPU. The RAMrequired to store all the messages for 400 training images in MNIST goes up to around 150GB. Toscale to bigger training sets, an online extension (see Section 4.2) needs to be used.5 C ONCLUSIONS AND FUTURE WORKWe have described the HCN, a hierarchical feature model with a rich prior and provided a novelmethod to solve the challenging learning problem it poses. The model effectively learns convolutionalfeatures and is interpretable and flexible. The learned weights are binary, which is advantageous forstorage and computation purposes (Courbariaux et al., 2015; Han et al., 2015). Future work entailsadding more structure to the prior, leveraging more refined MAP inference techniques, exploringother update schedules and further exploiting the generalization-without-retraining capabilities ofthis model.13Under review as a conference paper at ICLR 2017REFERENCESMatthew James Beal. Variational algorithms for approximate Bayesian inference . University ofLondon London, 2003.Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neuralnetworks with binary weights during propagations. In Advances in Neural Information ProcessingSystems , pp. 3105–3113, 2015.Sanja Fidler, Marko Boben, and Ales Leonardis. Learning a hierarchical compositional shapevocabulary for multi-class object representation. arXiv preprint arXiv:1408.5516 , 2014.Amir Globerson and Tommi S Jaakkola. Fixing max-product: Convergent message passing algorithmsfor MAP LP-relaxations. In Advances in Neural Information Processing Systems , pp. 553–560,2008.Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in NeuralInformation Processing Systems , pp. 2672–2680, 2014.Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxoutnetworks. arXiv preprint arXiv:1302.4389 , 2013.Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networkswith pruning, trained quantization and huffman coding. 
arXiv preprint arXiv:1510.00149 , 2015.Tom Heskes. Stable fixed points of loopy belief propagation are local minima of the bethe free energy.InAdvances in neural information processing systems , pp. 343–350, 2002.Geoffrey E Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep beliefnets. Neural computation , 18(7):1527–1554, 2006.Tommi S Jaakkola and Michael I Jordan. Variational probabilistic inference and the qmr-dt network.Journal of artificial intelligence research , 10:291–322, 1999.Ya Jin and Stuart Geman. Context and hierarchy in a probabilistic image model. In 2006 IEEEComputer Society Conference on Computer Vision and Pattern Recognition (CVPR’06) , volume 2,pp. 2145–2152. IEEE, 2006.Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprintarXiv:1312.6114 , 2013.Daphne Koller and Nir Friedman. Probabilistic graphical models: principles and techniques . MITpress, 2009.Vladimir Kolmogorov. Convergent tree-reweighted message passing for energy minimization. PatternAnalysis and Machine Intelligence, IEEE Transactions on , 28(10):1568–1583, 2006.Yann LeCun, L ́eon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied todocument recognition. Proceedings of the IEEE , 86(11):2278–2324, 1998.Daniel D Lee and H Sebastian Seung. Learning the parts of objects by non-negative matrix factoriza-tion. Nature , 401(6755):788–791, 1999.Talya Meltzer, Amir Globerson, and Yair Weiss. Convergent message passing algorithms - a unifyingview. In Jeff A. Bilmes and Andrew Y . Ng (eds.), UAI, pp. 393–401, 2009.Pauli Miettinen, Taneli Mielik ̈ainen, Aristides Gionis, Gautam Das, and Heikki Mannila. The discretebasis problem. In European Conference on Principles of Data Mining and Knowledge Discovery ,pp. 335–346. Springer, 2006.Tom Minka et al. Divergence measures and message passing. Technical report, 2005.Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXivpreprint arXiv:1402.0030 , 2014.14Under review as a conference paper at ICLR 2017Ankit B Patel, Tan Nguyen, and Richard G Baraniuk. A probabilistic theory of deep learning. arXivpreprint arXiv:1504.00641 , 2015.Judea Pearl. Probabilistic reasoning in intelligent systems: networks of plausible inference . 1988.Hoifung Poon and Pedro Domingos. Sum-product networks: A new deep architecture. In ComputerVision Workshops (ICCV Workshops), 2011 IEEE International Conference on , pp. 689–690. IEEE,2011.Siamak Ravanbakhsh, Barnab ́as P ́oczos, and Russell Greiner. Boolean matrix factorization and noisycompletion via message passing. 2015.Ruslan Salakhutdinov and Geoffrey E Hinton. Deep boltzmann machines. In AISTATS , volume 1, pp.3, 2009.Shimony. Finding MAPs for belief networks is NP-hard. AIJ: Artificial Intelligence , 68, 1994.Zhangzhang Si and Song-Chun Zhu. Learning and-or templates for object recognition and detection.IEEE transactions on pattern analysis and machine intelligence , 35(9):2189–2205, 2013.Tom ́aˇsˇSingliar and Milo ˇs Hauskrecht. Noisy-or component analysis and its application to linkanalysis. Journal of Machine Learning Research , 7(Oct):2189–2213, 2006.Larry J Stockmeyer. The set basis problem is NP-complete . IBM Thomas J. Watson ResearchDivision, 1975.Huayan Wang and Koller Daphne. Subproblem-tree calibration: A unified approach to max-productmessage passing. In Proceedings of the 30th International Conference on Machine Learning(ICML-13) , pp. 190–198, 2013.Tom ́aˇs Werner. 
A linear programming approach to max-sum problem: A review. IEEE Trans. PatternAnalysis and Machine Intelligence , 29(7):1165–1179, July 2007.Christopher KI Williams and Nicholas J Adams. Dts: dynamic trees. Advances in neural informationprocessing systems , pp. 634–640, 1999.Ying Nian Wu, Zhangzhang Si, Haifeng Gong, and Song-Chun Zhu. Learning active basis model forobject detection and recognition. International journal of computer vision , 90(2):198–235, 2010.Long Zhu, Yuanhao Chen, Yifei Lu, Chenxi Lin, and Alan Yuille. Max margin and/or graph learningfor parsing the human body. In Computer Vision and Pattern Recognition, 2008. CVPR 2008.IEEE Conference on , pp. 1–8. IEEE, 2008.Long Zhu, Yuanhao Chen, Alan Yuille, and William Freeman. Latent hierarchical structural learningfor object detection. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conferenceon, pp. 1062–1069. IEEE, 2010.15Under review as a conference paper at ICLR 2017A R ELATED WORKThere is a plethora of previous works that address hierarchical feature learning, usually in the settingor real-valued images, as opposed to binary ones: Fidler et al. (2014); Zhu et al. (2008; 2010);Wu et al. (2010); Si & Zhu (2013); Poon & Domingos (2011). Many of those works explicitlyuse AND-OR graphs, in the same spirit as our work. The most outstanding difference, however,between previous works and HCN is that HCN allows multiple features to overlap, thus creatingnew compositions. For instance, if feature H is a centered horizontal line and feature V is a centeredvertical line, HCN can create a new feature “cross” that combines both, and the fact that both areoverlapping and sharing a common active pixel (and many common inactive pixels) is properlyhandled. In contrast, previously cited models cannot overlap features, so they partition the input spaceand dedicate separate subtrees to each of them, and do so recursively. We can see in Figure 5, top row,how we can generate 25 different cross variations using only two features. This would not be possiblewith any of the cited models, which would need to span each combination as a separate feature. Thisfundamental difference makes HCN combinatorially more powerful, but also less tractable. Bothlearning and inference become harder because feature reuse introduces the well-known “explainingaway” phenomenon (Hinton et al., 2006).As a side, note the difference between the meaning of “OR” as used in the present work and inprevious works on AND-OR graphs: what they call “OR”, is what we term POOL (an exclusivebottom-up OR of elements), whereas HCN has a novel third type of gate, the “OR” connection (anon-exclusive, top-down OR of elements) to be able to handle explaining away. Standard AND-OR(or more clearly, AND-POOL) graphs lack the top-down ORing and therefore are not able to handleexplaining away.In the compositional hierarchies of Fidler et al. (2014), the lack of feature reuse allows for inferenceto be exact, since the graphical model is tree-like. Features are learned using a heuristic that relieson the exact inference, similar in spirit to EM. The AND-OR template learning methods of (Zhuet al., 2008; 2010) use respectively max-margin and incremental concave-convex procedures tooptimize a discriminative score. Therefore they require supervision (unlike HCN) and a tractableinference procedure (to make the discriminative score easy to optimize), which again is achievedby not allowing overlapping features. 
The sum-product networks (SPNs) of (Poon & Domingos,2011) express features as product nodes. In order to achieve feature overlapping, two product nodesspanning the same set of pixels (but with possibly different activation patterns) should be activesimultaneously. This would violate the consistency requirement of SPNs, making HCN a morecompact way to express feature overlap13(with the price to be paid being lack of exact inference).The AND-OR template (AOT) learning of (Wu et al., 2010) again cannot deal properly with thegeneration of superimposed features, having to create new features to handle every combination. InSection B we will compare AOT feature learning and HCN feature learning and check how theselimitations make AOT unable to disentangle the generative features.Grammars exclude the sharing of sub-parts among multiple objects or object parts in a parse of thescene (Jin & Geman, 2006), and they limit interpretations to single trees even if those are dynamic(Williams & Adams, 1999). Our graphical model formulation makes the sharing of lower-levelfeatures explicit by using local conditional probability distributions for multi-parent interactions, andallows for MAP configurations (i.e, the parse graphs) that are not trees.The deep rendering model (DRM) of Patel et al. (2015) is, to some extent, a continuous counterpartof the present work. Although DRMs allow for feature overlap, the semantics are different: in HCNthe amount of activation of a given pixel is the same whether there are one or many features (causes)activating it, whereas in DRM the activation is proportional to the number of causes. This means thatthe difference between DRM and HCN is analogous to the difference between principal componentanalysis and binary matrix factorization: while the first can be solved analytically, the second is hardand not analytically tractable. This results in DRM being more tractable, but less appropriate tohandle problems with binary events with multiple causes, such as the ones posed in this paper.Two popular approaches to handle learning in generative models, largely independent of the modelitself, are variational autoencoders (V AEs) and generative adversarial networks (GANs). We are not13An exponentially big SPN could indeed encode an HCN.16Under review as a conference paper at ICLR 2017(a) Filter bank (b) Training samples (c) HCN features (d) Features from Wuet al. (2010)Figure 10: Results of training a modified HCN on a grayscale image. A filter bank is convolved withthe input image to provide the bottom up messages to each channel of HCN. The filter bank sizes inthis simple example are adapted to match those of generation. As a benchmark, Wu et al. (2010) isused on the same data and is also given knowledge of the filter bank in use. Top row: 33filter size.Bottom row: 77filter size.aware of any work that uses a V AE or GAN with a generative model like HCN and such an option isunlikely to be straightforward.Most common V AEs rely on the reparameterization trick for variance reduction. However, thistrick cannot be applied to HCN due to the discrete nature of its variables, and alternative methodswould suffer from high variance. 
Another limitation of V AEs wrt HCN is that they perform a singlebottom-up pass and lack of explaining away: HCN combines top-down and bottom-up information inmultiple passes, isolating the parent cause of a given activation, instead of activating every possiblecause.GANs need to compute rWD(GW("))whereD()is the discriminative network and GW(")is agenerative network parameterized by the features W. In this case, not only Wis binary, but also thegenerated reconstructions at every layer, so the GAN formulation cannot be applied to HCN as-is.One could in principle relax the binary assumption of features and reconstructions and use the GANparadigm to train a neural network with sigmoidal activations, but it is unclear that the lack of binaryvariables will still produce proper disentangling (the convolutional extension of NOCA also has thisproblem due to the use of non-binary features and produces results that are inferior to HCN).B C OMBINING WITH GRAYSCALE PREPROCESSINGThe HCN is a binary model. However, to process real-valued data, it can be coupled with aninitial grayscale-to-binary preprocessing step to do feature detection. We tested this by generating agrayscale version of our toy data and then computing the bottom-up messages to S0by convolvingthe input image with a filter bank. This is roughly equivalent to replacing the noisy binary channelof HCN with a Gaussian channel. We used 16 preprocessing filters, which means that S0has 16channels. 200 training images (unsupervised) were used. Two filter sizes, 33and77were tested.We also run the AOT feature learning method of Wu et al. (2010) on the same data for comparison.The results of training on 200 training images (unsupervised) is provided in Figure 10. When thelarger filter is used, the diagonal bars are harder to identify so their disentangling is poorer.C M AX-PRODUCT MESSAGE PASSING (MPMP)The HCN model can be expressed both as a directed Bayesian network or as a factor graph usingonly POOL, AND, and OR factors, each involving a small number of local binary variables. Both17Under review as a conference paper at ICLR 2017learning and ulterior classification can be cast as MAP inference in this factor graph. Other tasks,such as filling in unknown image data can also be performed by MAP inference.MAP inference can be performed exactly on factor graphs without loops (trees) in linear time, but itis an NP-hard problem for arbitrary graphs (Shimony, 1994). The factor graph describing our modelis highly structured, but also very loopy.There is large body of works (Wang & Daphne, 2013; Meltzer et al., 2009; Globerson & Jaakkola,2008; Kolmogorov, 2006; Werner, 2007), addressing the problem of MAP inference in loopy factorgraphs. Perhaps the simplest of these methods is the max-product algorithm, a variant of dynamicprogramming proposed in (Pearl, 1988) to find the MAP configuration in trees.The max-product algorithm defines a set of messagesma!i(yi)going from each factor ato each ofits variablesyi. The sum of the messages incoming to a variable (yi) =Pa:yi2yama!i(yi)definesits approximate max-marginal14(yi). The max-product algorithm then proceeds by updating theoutgoing messages from each factor in turn so as to make the approximate max-marginals consistentwith that factor. This algorithm is not guaranteed to converge if there are loops in the graph, and if itdoes, it is not guaranteed to find the MAP configuration. 
Damping the updates of the factors has beenshown to improve convergence in loopy belief propagation (Heskes, 2002) and was justified as localdivergence minimization in (Minka et al., 2005). Using a damping factor 0<1for max-product,the update rule ismt+1a!i(yi) = (1)mta!i(yi) +maxyaniloga(yi;yani) +Xyj2yanimta!j(yj) + (3)and the original update rule is recovered for = 1. The valueis arbitrary and does not affect thealgorithm. We select it to make mt+1a!i(yi= 0) = 0 , so that messages can be stored as a single scalar.When storing messages in this way, their sum provides the max-marginal difference, which is enoughfor our purposes.Eq.(3)can be computed exactly for the three type of factors appearing in our graph, so messageupdating can be performed in closed form. Despite the graph of our model being very loopy, itturns out that a careful choice of message initialization, damping and parallel and sequential updatesproduces satisfactory results in our experiments. For further details about max-product inference andMAP inference via message passing in discrete graphical models we refer the reader to (Koller &Friedman, 2009).D M AX-PRODUCT MESSAGE UPDATES FOR AND, OR AND POOL FACTORSIn the following we provide the message update equations for the different types of factors used in themain paper. The messages are in normalized form: each message is a single scalar and correspondsto the difference between the unnormalized message value evaluated at 1 and the unnormalizedmessage value evaluated at 0. For each update we assume that the incoming messages mIN()for allthe variables of the factor are available. The incoming messages are the sum of all messages going tothat variable except for the one from the factor under consideration.The outgoing messages are well-defined even for 1 incoming messages, by taking the correspond-ing limit in the expressions below.D.1 AND FACTORBottom-up messagesmOUT(t1) = max(0;mIN(t2) +mIN(b))max(0;mIN(t2))mOUT(t2) = max(0;mIN(t1) +mIN(b))max(0;mIN(t1))Top-down messagemOUT(b) = min( mIN(t1) +mIN(t2);mIN(t1);mIN(t2))14The max-marginal of a variable in a factor graph gives the maximum value attainable in that factor graphfor each value of that variable.18Under review as a conference paper at ICLR 2017t1t2bAND(a) AND factorPOOLb1b2bMt (b) POOL factort1t2bORtM (c) OR factorFigure 11: Factors and variable labeling used in the message update equations.D.2 POOL FACTORBottom-up messagemOUT(t) = max( mIN(b1);:::; mIN(bM))logMTop-down messagesmOUT(bm) = min( mIN(t)logM;maxj6=mmIN(bj))D.3 OR FACTORBottom-up messagesmOUT(tm) = min( mIN(b) +Xj6=mmax(0;mIN(tj));max(0;mIN(ti))mIN(ti))withi= argmaxi6=mmIN(ti)Top-down messagemOUT(b) =mIN(ti) +Xj6=imax(0;mIN(tj))withi= argmaxmmIN(tm)19Under review as a conference paper at ICLR 2017E I MAGE CORRUPTION TYPE ILLUSTRATIONThe different types of image corruption used in Section 4.4 are shown in the following Figure:Figure 12: Different types of noise corruption used in Section 4.4.20
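The closed-form AND-factor updates listed above translate directly into code. The following minimal Python sketch (ours, not the authors' implementation) evaluates the outgoing messages of a single AND factor in the normalized difference form of Appendix D, together with the damped update of Eq. (3); the function and variable names are illustrative assumptions.

def and_factor_messages(m_in_t1, m_in_t2, m_in_b):
    """Outgoing messages of an AND factor; each message is stored as a scalar m = m(1) - m(0)."""
    m_out_t1 = max(0.0, m_in_t2 + m_in_b) - max(0.0, m_in_t2)  # bottom-up to t1
    m_out_t2 = max(0.0, m_in_t1 + m_in_b) - max(0.0, m_in_t1)  # bottom-up to t2
    m_out_b = min(m_in_t1 + m_in_t2, m_in_t1, m_in_t2)         # top-down to b
    return m_out_t1, m_out_t2, m_out_b

def damped_update(m_old, m_new, alpha=0.5):
    """Damped message update of Eq. (3): convex combination with damping factor 0 < alpha <= 1."""
    return (1.0 - alpha) * m_old + alpha * m_new

The POOL and OR updates of Appendix D can be coded in the same way; sweeping such damped updates over the loopy factor graph is what the appendix refers to as max-product message passing.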
HJyJwIE4x
HkNEuToge
ICLR.cc/2017/conference/-/paper609/official/review
{"title": "", "rating": "5: Marginally below acceptance threshold", "review": "This paper proposes sparse coding problem with cosine-loss and integrated it as a feed-forward layer in a neural network as an energy based learning approach. The bi-directional extension makes the proximal operator equivalent to a certain non-linearity (CReLu, although unnecessary). The experiments do not show significant improvement against baselines. \n\nPros: \n- Minimizing the cosine-distance seems useful in many settings where compute inner-product between features are required. \n- The findings that the bidirectional sparse coding is corresponding to a feed-forward net with CReLu non-linearity. \n\nCons:\n- Unrolling sparse coding inference as a feed-foward network is not new. \n- The class-wise encoding makes the algorithm unpractical in multi-class cases, due to the requirement of sparse coding net for each class. \n- It does not show the proposed method could outperform baseslines in real-world tasks.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Energy-Based Spherical Sparse Coding
["Bailey Kong", "Charless C. Fowlkes"]
In this paper, we explore an efficient variant of convolutional sparse coding with unit norm code vectors and reconstructions are evaluated using an inner product (cosine distance). To use these codes for discriminative classification, we describe a model we term Energy-Based Spherical Sparse Coding (EB-SSC) in which the hypothesized class label introduces a learned linear bias into the coding step. We evaluate and visualize performance of stacking this encoder to make a deep layered model for image classification.
["spherical sparse coding", "spherical sparse", "efficient variant", "convolutional sparse", "reconstructions", "inner product", "cosine distance", "codes", "discriminative classification"]
https://openreview.net/forum?id=HkNEuToge
https://openreview.net/pdf?id=HkNEuToge
https://openreview.net/forum?id=HkNEuToge&noteId=HJyJwIE4x
Under review as a conference paper at ICLR 2017ENERGY -BASED SPHERICAL SPARSE CODINGBailey Kong and Charless C. FowlkesDepartment of Computer ScienceUniversity of California, IrvineIrvine, CA 92697 USAfbhkong,fowlkes g@ics.uci.eduABSTRACTIn this paper, we explore an efficient variant of convolutional sparse coding withunit norm code vectors where reconstruction quality is evaluated using an innerproduct (cosine distance). To use these codes for discriminative classification, wedescribe a model we term Energy-Based Spherical Sparse Coding (EB-SSC) inwhich the hypothesized class label introduces a learned linear bias into the codingstep. We evaluate and visualize performance of stacking this encoder to make adeep layered model for image classification.1 I NTRODUCTIONSparse coding has been widely studied as a representation for images, audio and other vectorial data.This has been a highly successful method that has found its way into many applications, from signalcompression and denoising (Donoho, 2006; Elad & Aharon, 2006) to image classification (Wrightet al., 2009), to modeling neuronal receptive fields in visual cortex (Olshausen & Field, 1997). Sinceits introduction, subsequent works have brought sparse coding into the supervised learning settingby introducing classification loss terms to the original formulation to encourage features that are notonly able to reconstruct the original signal but are also discriminative (Jiang et al., 2011; Yang et al.,2010; Zeiler et al., 2010; Ji et al., 2011; Zhou et al., 2012; Zhang et al., 2013).While supervised sparse coding methods have been shown to find more discriminative features lead-ing to improved classification performance over their unsupervised counterparts, they have receivedmuch less attention in recent years and have been eclipsed by simpler feed-forward architectures.This is in part because sparse coding is computationally expensive. Convex formulations of sparsecoding typically consist of a minimization problem over an objective that includes a least-squares(LSQ) reconstruction error term plus a sparsity inducing regularizer.Because there is no closed-form solution to this formulation, various iterative optimization tech-niques are generally used to find a solution (Zeiler et al., 2010; Bristow et al., 2013; Yang et al.,2013; Heide et al., 2015). In applications where an approximate solution suffices, there is workthat learns non-linear predictors to estimate sparse codes rather than solve the objective more di-rectly (Gregor & LeCun, 2010). The computational overhead for iterative schemes becomes quitesignificant when training discriminative models due to the demand of processing many training ex-amples necessary for good performance, and so sparse coding has fallen out of favor by not beingable to keep up with simpler non-iterative coding methods.In this paper we introduce an alternate formulation of sparse coding using unit length codes anda reconstruction loss based on the cosine similarity. Optimal sparse codes in this model can becomputed in a non-iterative fashion and the coding objective lends itself naturally to embedding ina discriminative, energy-based classifier which we term energy-based spherical sparse coding (EB-SSC) . This bi-directional coding method incorporates both top-down and bottom-up informationwhere the features representation depends on both a hypothesized class label and the input signal.Like Cao et al. 
(2015), our motivation for bi-directional coding comes from the “Biased CompetitionTheory”, which suggests that visual processing can be biased by other mental processes (e.g., top-down influence) to prioritize certain features that are most relevant to current task. Fig. 1 illustratesthe flow of computation used by our SSC and EB-SSC building blocks compared to a standardfeed-forward layer.1Under review as a conference paper at ICLR 2017Our energy based approach for combining top-down and bottom-up information is closely tied tothe ideas of Larochelle & Bengio (2008); Ji et al. (2011); Zhang et al. (2013); Li & Guo (2014)—although the model details are substantially different (e.g., Ji et al. (2011) and Zhang et al. (2013)use sigmoid non-linearities while Li & Guo (2014) use separate representations for top-down andbottom-up information). The energy function of Larochelle & Bengio (2008) is also similar butincludes an extra classification term and is trained as a restricted Boltzmann machine.ReLUNeg. ReLUConcatenation Convolution x(a) CReLUReLUNeg. ReLUConcatenation Normalization Convolution x (b) SSCyReLUNeg. ReLUNeg. Class BiasPos. Class BiasNormalization Concatenation Convolution x (c) EB-SSCFigure 1: Building blocks for coding networks explored in this paper. Our coding model usesnon-linearities that are closely related to the standard ReLU activation function. (a) Keeping bothpositive and negative activations provides a baseline feed-forward model termed concatenated ReLU(CReLU). (b) Our spherical sparse coding layer has a similar structure but with an extra bias andnormalization step. Our proposed energy-based model uses (c) energy-based spherical sparse coding(EB-SSC) blocks that produces sparse activations which are not only positive and negative, but areclass-specific. These blocks can be stacked to build deeper architectures.1.1 N OTATIONMatrices are denoted as uppercase bold (e.g., A), vectors are lowercase bold (e.g., a), and scalarsare lowercase (e.g., a). We denote the transpose operator with|, the element-wise multiplicationoperator with, the convolution operator with , and the cross-correlation operator with ?. For vec-tors where we dropped the subscript k(e.g.,dandz), we refer to a super vector with Kcomponentsstacked together (e.g., z= [z|1;:::;z|K]|).2 E NERGY -BASED SPHERICAL SPARSE CODINGEnergy-based models capture dependencies between variables using an energy function that measurethe compatibility of the configuration of variables (LeCun et al., 2006). To measure the compatibilitybetween the top-down and bottom-up information, we define the energy function of EB-SSC to bethe sum of bottom-up coding term and a top-down classification term:E(x;y;z) =Ecode(x;z) +Eclass(y;z): (1)The bottom-up information (input signal x) and the top-down information (class label y) are tiedtogether by a latent feature map z.2.1 B OTTOM -UPRECONSTRUCTIONTo measure the compatibility between the input signal xand the latent feature maps z, we introducea novel variant of sparse coding that is amenable to efficient feed-forward optimization. While theidea behind this variant can be applied to either patch-based or convolutional sparse coding, wespecifically use the convolutional variant that shares the burden of coding an image among nearbyoverlapping dictionary elements. Using such a shift-invariant approach avoids the need to learn dic-tionary elements which are simply translated copies of each other, freeing up resources to discovermore diverse and specific filters (see Kavukcuoglu et al. 
(2010)).2Under review as a conference paper at ICLR 2017Convolutional sparse coding (CSC) attempts to find a set of dictionary elements fd1;:::;dKgandcorresponding sparse codes fz1;:::;zKgso that the resulting reconstruction, r=PKk=1dkzkaccurately represents the input signal x. This is traditionally framed as a least-squares minimizationwith a sparsity inducing prior on z:arg minzkxKXk=1dkzkk22+kzk1: (2)Unlike standard feed-forward CNN models that convolve the input signal xwith the filters, thisenergy function corresponds to a generative model where the latent feature maps fz1;:::;zKgareconvolved with the filters and compared to the input signal (Bristow et al., 2013; Heide et al., 2015;Zeiler et al., 2010).To motivate our novel variant of CSC, consider expanding the squared reconstruction error kxrk22=kxk222x|r+krk22. If we constrain the reconstruction rto have unit norm, the recon-struction error depends entirely on the inner product between xandrand is equivalent to the cosinesimilarity (up to additive and multiplicative constants). This suggests the closely related unit-lengthreconstruction problem:arg maxzx|KXk=1dkzkkzk1 (3)s.t.KXk=1dkzk21In Appendix A we show that, given an optimal unit length reconstruction rwith correspondingcodes z, the solution to the least squares reconstruction problem (Eq. 2) can be computed by asimple scaling r= (x|r2kzk1)r.The unit-length reconstruction problem is no easier than the original least-squares optimization dueto the constraint on the reconstruction which couples the codes for different filters. Instead considera simplified constraint on zwhich we refer to as spherical sparse coding (SSC) :arg maxkzk21Ecode(x;z) = arg maxkzk21x|KXk=1dkzkkzk1: (4)In 2.3 below, we show that the solution to this problem can be found very efficiently without requir-ing iterative optimization.This problem is a relaxation of convolutional sparse coding since it ignores non-orthogonal inter-actions between the dictionary elements1. Alternately, assuming unit norm dictionary elements, thecode norm constraint can be used to upper-bound the reconstruction length. We have by the triangleand Young’s inequality that:Xkdkzk2Xkkdkzkk2Xkkdkk1kzkk1DXkkzkk2 (5)where the factor Dis the dimension of zkand arises from switching from the 1-norm to the 2-norm.SinceDPkkzkk21is a tighter constraint we havemaxkPkdkzkk21Ecode(x;z) maxPkkzkk21DEcode(x;z) (6)However, this relaxation is very loose, primarily due to the triangle inequality. Except in specialcases (e.g., if the dictionary elements have disjoint spectra) the SSC codes will be quite differentfrom the standard least-squares reconstruction.1We note that our formulation is also closely related to the dynamical model suggested by Rozell et al.(2008), but without the dictionary-dependent lateral inhibition between feature maps. Lateral inhibition cansolve the unit-length reconstruction formulation of standard sparse coding but requires iterative optimization.3Under review as a conference paper at ICLR 20172.2 T OP-DOWN CLASSIFICATIONTo measure the compatibility between the class label yand the latent feature maps z, we use a setof one-vs-all linear classifiers. To provide more flexibility, we generalize this by splitting the codevector into positive and negative components:zk=z+k+zkz+k0zk0and allow the linear classifier to operate on each component separately. 
We express the classifierscore for a hypothesized class label yby:Eclass(y;z) =KXk=1w+|yz+k+KXk=1w|yzk: (7)The classifier thus is parameterized by a pair of weight vectors ( w+ykandwyk) for each class labelyandk-th channel of the latent feature map.This splitting, sometimes referred to as full-wave rectification, is useful since a dictionary elementand its negative do not necessarily have opposite visual semantics. This splitting also allows theclassifier the flexibility to assign distinct meanings or alternately be completely invariant to contrastreversal depending on the problem domain. For example, Shang et al. (2016) found CNN modelswith ReLU non-linearities which discard the negative activations tend to learn pairs of filters whichare related by negation. Keeping both positive and negative responses allowed them to halve thenumber of dictionary elements.We note that it is also straightforward to introduce spatial average pooling prior to classification byintroducing a fixed linear operator Pused to pool the codes (e.g., w+|yPz+k). This is motivated bya variety of hand-engineered feature extractors and sparse coding models, such as Ren & Ramanan(2013), which use spatially pooled histograms of sparse codes for classification. This fixed poolingcan be viewed as a form of regularization on the linear classifier which enforces shared weights overspatial blocks of the latent feature map. Splitting is also quite important to prevent information losswhen performing additive pooling since positive and negative components of zkcan cancel eachother out.2.3 C ODINGBottom-up reconstruction and top-down classification each provide half of the story, coupled by thelatent feature maps. For a given input xand hypothesized class y, we would like to find the optimalactivations zthat maximize the joint energy function E(x;y;z). This requires solving the followingoptimization:arg maxkzk21x|KXk=1dkzkkzk1+KXk=1w+|ykz+k+KXk=1w|ykzk; (8)where x2RDis an image and y2Y is a class hypothesis. zk2RFis thek-th componentlatent variable being inferred; z+kandzkare the positive and negative coefficients of zk, such thatzk=z+k+zk. The parameters dk2RM,w+yk2RF, andwyk2RFare the dictionary filter,positive coefficient classifier, and negative coefficient classifier for the k-th component respectively.A key aspect of our formulation is that the optimal codes can be found very efficiently in closed-form—in a feed-forward manner (see Appendix B for a detailed argument).2.3.1 A SYMMETRIC SHRINKAGETo describe the coding processes, let us first define a generalized version of the shrinkage functioncommonly used in sparse coding. Our asymmetric shrinkage is parameterized by upper and lowerthresholds+shrink (+;)(v) =8<:v+ifv+>00 otherwisev+ifv+<0(9)4Under review as a conference paper at ICLR 2017(a)0+(b)0 +(c)+0 (d)0 +Figure 2: Comparing the behavior of asymmetric shrinkage for different settings of +and.(a)-(c) satisfy the condition that +while (d) does not.Fig. 2 shows a visualization of this function which generalizes the standard shrinkage proximaloperator by allowing for the positive and negative thresholds. In particular, it corresponds to theproximal operator for a version of the `1-norm that penalizes the positive and negative componentswith different weights jvjasym =+kv+k1+kvk1. The standard shrink operator correspondstoshrink (;)(v)while the rectified linear unit common in CNNs is given by a limiting caseshrink (0;1)(v). We note that+is required for shrink (+;)to be a proper function(see Fig. 
2).2.3.2 F EED-FORWARD CODINGWe now describe how codes can be computed in a simple feed-forward pass. Let+yk=w+yk;yk=wyk(10)be vectors of positive and negative biases whose entries are associated with a spatial location in thefeature map kfor classy. The optimal code zcan be computed in three sequential steps:1. Cross-correlate the data with the filterbank dk?x2. Apply an asymmetric version of the standard shrinkage operator~zk= shrink(+yk;yk)(dk?x) (11)where, with abuse of notation, we allow the shrinkage function (Eq. 9) to apply entriesin the vectors of threshold parameter pairs +yk;ykto the corresponding elements of theargument.3. Project onto the feasible set of unit length codesz=~zk~zk2: (12)2.3.3 R ELATIONSHIP TO CNN S:We note that this formulation of coding has a close connection to single layer convolutional neuralnetwork (CNN). A typical CNN layer consists of convolution with a filterbank followed by a non-linear activation such as a rectified linear unit (ReLU). ReLUs can be viewed as another way ofinducing sparsity, but rather than coring the values around zero like the shrink function, ReLUtruncates negative values. On the other hand, the asymmetric shrink function can be viewed as thesum of two ReLUs applied to appropriately biased inputs:shrink (+;)(x) = ReLU(x+)ReLU((x+));SSC coding can thus be seen as a CNN in which the ReLU activation has been replaced with shrink-age followed by a global normalization.5Under review as a conference paper at ICLR 20173 L EARNINGWe formulate supervised learning using the softmax log-loss that maximizes the energy for the trueclass labelyiwhile minimizing energy of incorrect labels y.arg mind;w+;w;02(kw+k22+kwk22+kdk22)+1NNXi=1[maxkzk21E(xi;yi;z) + logXy2Ymaxkzk21eE(xi;y;z)]s.t.(wyk)(w+yk)8y;k; (13)whereis the hyperparameter regularizing w+y,wy, andd. We constrain the relationship betweenand the entries of w+yandwyin order for the asymmetric shrinkage to be a proper function (seeSec. 2.3.1 and Appendix B for details).In classical sparse coding, it is typical to constrain the `2-norm of each dictionary filter to unit length.Our spherical coding objective behaves similarly. For any optimal code z, there is a 1-dimensionalsubspace of parameters for which zis optimal given by scaling dinversely to w,. For simplicityof the implementation, we opt to regularize dto assure a unique solution. However, as Tygert et al.(2015) point out, it may be advantageous from the perspective of optimization to explicitly constrainthe norm of the filter bank.Note that unlike classical sparse coding, where is a hyperparameter that is usually set using cross-validation, we treat it as a parameter of the model that is learned to maximize performance.3.1 O PTIMIZATIONIn order to solve Eq. 13, we explicitly formulate our model as a directed-acyclic-graph (DAG) neuralnetwork with shared weights, where the forward-pass computes the sparse code vectors and thebackward-pass updates the parameter weights. We optimize the objective using stochastic gradientdescent (SGD).As mentioned in Sec. 2.3 shrinkage function is assymetric with parameters +ykorykas definedin Eq. 10. However, the inequality constraint on their relationship to keep the shrinkage function aproper function is difficult to enforce when optimizing with SGD. Instead, we introduce a centraloffset parameter and reduce the ordering constraint to pair of positivity constraints. Let^w+yk=+ykbk ^wyk=yk+bk (14)be the modified linear “classifiers” relative to the central offset bk. 
It is straightforward to see thatif+ykandykthat satisfy the constrain in Eq. 13, then adding the same value to both sides ofthe inequality will not change that. However, taking bkto be a midpoint between them, then both+ykbkandyk+bkwill be strictly non-negative.Using this variable substitution, we rewrite the energy function (Eq. 1) asE0(x;y;z) =x|KXk=1dkzk+KXk=1bk1|zkKXk=1^w+|ykz+k+KXk=1^w|ykzk: (15)where bis constant offset for each code channel. The modified linear “classification” terms nowtake on a dual role of inducing sparsity and measuring the compatibility between zandy.This yields a modified learning objective that can easily be solved with existing implementations forlearning convolutional neural nets:arg mind;^w+;^w;b2(k^w+k22+k^wk22+kdk22)+1NNXi=1[maxkzk21E0(xi;yi;z) + logXy2Ymaxkzk21eE0(xi;y;z)]s.t.^w+yk;^wyk08y;k; (16)6Under review as a conference paper at ICLR 2017where ^w+and^ware the new sparsity inducing classifiers, and bare the arbitrary origin points. Inparticular, adding the Korigin points allows us to enforce the constraint by simply projecting ^w+and^wonto the positive orthant during SGD.3.1.1 S TACKING BLOCKSWe also examine stacking multiple blocks of our energy function in order to build a hierarchicalrepresentation. As mentioned in Sec. 3.1.1, the optimal codes can be computed in a simple feed-forward pass—this applies to shallow versions of our model. When stacking multiple blocks of ourenergy-based model, solving for the optimal codes cannot be done in a feed-forward pass since thecodes for different blocks are coupled (bilinearly) in the joint objective. Instead, we can proceedin an iterative manner, performing block-coordinate descent by repeatedly passing up and down thehierarchy updating the codes. In this section we investigate the trade-off between the number ofpasses used to find the optimal codes for the stacked model and classification performance.For this purpose, we train multiple instances of a 2-block version of our energy-based model thatdiffer in the number of iterations used when solving for the codes. For recurrent networks such asthis, inference is commonly implemented by “unrolling” the network, where the parts of the net-work structure are repeated with parameters shared across these repeated parts to mimic an iterativealgorithm that stops at a fixed number of iterations rather than at some convergence criteria.0 10 20 30 40 50epoch10-310-210-1100101train objective (log-scale)not unrolledunrolled 1unrolled 2unrolled 3unrolled 4(a) Train Objective0 10 20 30 40 50epoch00.020.040.060.080.10.12test errornot unrolledunrolled 1unrolled 2unrolled 3unrolled 4 (b) Test ErrorFigure 3: Comparing the effects of unrolling a 2-block version of our energy-based model. (Bestviewed in color.)In Fig. 3, we compare the performance between models that were unrolled zero to four times. Wesee that there is a difference in performance based on how many sweeps of the variables are made.In terms of the training objective, more unrolling produces models that have lower objective valueswith convergence after only a few passes. In terms of testing error, however, we see that full codeinference is not necessarily better, as unrolling once or twice has lower errors than unrolling threeor four times. The biggest difference was between not unrolling and unrolling once, where both thetraining objective and testing error goes down. 
The testing error decreases from 0.0131 to 0.0074.While there is a clear benefit in terms of performance for unrolling at least once, there is also atrade-off between performance and computational resource, especially for deeper models.4 E XPERIMENTSWe evaluate the benefits of combining top-down and bottom-up information to produce class-specific features on the CIFAR-10 (Krizhevsky & Hinton, 2009) dataset using a deep version ofour EB-SSC. All experiments were performed using MatConvNet (Vedaldi & Lenc, 2015) frame-work with the ADAM optimizer (Kingma & Ba, 2014). The data was preprocessed and augmentedfollowing the procedure in Goodfellow et al. (2013). Specifically, the data was made zero mean andwhitened, augmented with horizontal flips (with a 0.5 probability) and random cropping. No weightdecay was used, but we used a dropout rate of 0:3before every convolution layer except for the first.For these experiments we consider a single forward pass (no unrolling).7Under review as a conference paper at ICLR 2017Base Networkblock kernel, stride, padding activationconv1 33396;1;1 ReLU/CReLUconv2 3396=19296;1;1 ReLU/CReLUpool1 33;2;1 maxconv3 3396=192192;1;1 ReLU/CReLUconv4 33192=384192;1;1ReLU/CReLUconv5 33192=384192;1;1ReLU/CReLUpool2 33;2;1 maxconv6 33192=384192;1;1ReLU/CReLUconv7 11192=384192;1;1ReLU/CReLUTable 1: Underlying block architecture common across all models we evaluated. SSC networksadd an extra normalization layer after the non-linearity. And EB-SSC networks insert class-specificbias layers between the convolution layer and the non-linearity. Concatenated ReLU (CReLU) splitspositive and negative activations into two separate channels rather than discarding the negative com-ponent as in the standard ReLU.4.1 C LASSIFICATIONWe compare our proposed EB-SSC model to that of Springenberg et al. (2015), which uses rectifiedlinear units (ReLU) as its non-linearity. This model can be viewed as a basic feed-forward versionof our proposed model which we take as a baseline. We also consider variants of the baseline modelthat utilize a subset of architectural features of our proposed model (e.g., concatenated rectifiedlinear units (CReLU) and spherical normalization (SN)) to understand how subtle design changes ofthe network architecture affects performance.We describe the model architecture in terms of the feature extractor and classifier. Table 1 shows theoverall network architecture of feature extractors, which consist of seven convolution blocks and twopooling layers. We test two possible classifiers: a simple linear classifier (LC) and our energy-basedclassifier (EBC), and use softmax-loss for all models. For linear classifiers, a numerical subscriptindicates which of the seven conv blocks of the feature extractor is used for classification (e.g., LC 7indicates the activations out of the last conv block is fed into the linear classifier). For energy-basedclassifiers, a numerical subscript indicates which conv blocks of the feature extractor are replacewith a energy-based classifier (e.g., EBC 67indicates the activations out of conv5 is fed into theenergy-based classifier and the energy-based classifier has a similar architecture to the conv blocksit replaces). The notation differ because for energy-based classifiers, the optimal activations are afunction of the hypothesized class label, whereas for linear classifiers, they are not.Model Train Err. (%) Test Err. 
(%) # paramsReLU+LC 7 1.20 11.40 1.3MCReLU+LC 7 2.09 10.17 2.6MCReLU(SN)+LC 7 0.99 9.74 2.6MSSC+LC 7 0.99 9.77 2.6MSSC+EBC 67 0.21 9.23 3.2MTable 2: Comparison of the baseline ReLU+LC 7model, its derivative models, and our proposedmodel on CIFAR-10.The results shown in Table 2 compare our proposed model to the baselines ReLU+LC 7(Springen-berg et al., 2015) and CReLU+LC 7(Shang et al., 2016), and to intermediate variants. The base-line models all perform very similarly with some small reductions in error rates over the baselineCReLU+LC 7. However, CReLU+LC 7reduces the error rate over ReLU+LC 7by more than onepercent (from 11.40% to 10.17%), which confirms the claims by Shang et al. (2016) and demon-strates the benefits of splitting positive and negative activations. Likewise, we see further decreasein the error rate (to 9.74%) from using spherical normalization. Though normalizing the activationsdoesn’t add any capacity to the model, this improved performance is likely because scale-invariantactivations makes training easier. On the other hand, further sparsifying the activations yielded no8Under review as a conference paper at ICLR 2017benefit. We tested values =f0:001;0:01gand found 0:001to perform better. Replacing the linearclassifier with our energy-based classifier further decreases the error rate by another half percent (to9.23%).4.2 D ECODING CLASS -SPECIFIC CODESA unique aspect of our model is that it is generative in the sense that each layer is explicitly trying toencode the activation pattern in the prior layer. Similar to the work on deconvolutional networks builton least-squares sparse coding (Zeiler et al., 2010), we can synthesize input images from activationsin our spherical coding network by performing repeated deconvolutions (transposed convolutions)back through the network. Since our model is energy based, we can further examine how the top-down information of a hypothesized class effects the intermediate activations.Figure 4: The reconstruction of an airplane image from different levels of the network (rows) acrossdifferent hypothesized class labels (columns). The first column is pure reconstruction, i.e., unbiasedby a hypothesized class label, the remaining columns show reconstructions of the learned class biasat each layer for one of ten possible CIFAR-10 class labels. (Best viewed in color.)The first column in Fig. 4 visualizes reconstructions of a given input image based on activationsfrom different layers of the model by convolution transpose. In this case we put in zeros for classbiases (i.e., no top-down) and are able to recover high fidelity reconstructions of the input. In theremaining columns, we use the same deconvolution pass to construct input space representations ofthe learned classifier biases. At low levels of the feature hierarchy, these biases are spatially smoothsince the receptive fields are small and there is little spatial invariance capture in the activations. Athigher levels these class-conditional bias fields become more tightly localized.Finally, in Fig. 5 we shows decodings from the conv2 and conv5 layer of the EB-SSC model for agiven input under different class hypotheses. 
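The decoding direction used for these visualizations is a generative pass: each layer's activations are convolved with that layer's filters (the transposed operation of the encoder's cross-correlation) and summed across channels. A one-layer sketch (ours; the paper repeats this through all layers and, for Figs. 4-5, manipulates or subtracts the class bias terms):

from scipy.signal import convolve

def decode_layer(codes, filters):
    # Single-layer reconstruction r = sum_k d_k * z_k from Sec. 2.1,
    # i.e. a transposed convolution of the encoder's cross-correlation.
    return sum(convolve(zk, dk, mode='same') for zk, dk in zip(codes, filters))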
Here we subtract out the contribution of the top-downbias term in order to isolate the effect of the class conditioning on the encoding of input features.As visible in the figure, the modulation of the activations focused around particular regions of theimage and the differences across class hypotheses becomes more pronounced at higher layers of thenetwork.5 C ONCLUSIONWe presented an energy-based sparse coding method that efficiently combines cosine similarity,convolutional sparse coding, and linear classification. Our model shows a clear mathematical con-nection between the activation functions used in CNNs to introduce sparsity and our cosine similar-ity convolutional sparse coding formulation. Our proposed model outperforms the baseline modeland we show which attributes of our model contributes most to the increase in performance. Wealso demonstrate that our proposed model provides an interesting framework to probe the effects ofclass-specific coding.REFERENCESHilton Bristow, Anders Eriksson, and Simon Lucey. Fast convolutional sparse coding. In ComputerVision and Pattern Recognition (CVPR) , 2013.9Under review as a conference paper at ICLR 2017(a) conv2 (b) conv5Figure 5: Visualizing the reconstruction of different input images (rows) for each of 10 differentclass hypotheses (cols) from the 2nd and 5th block activations for a model trained on MNIST digitclassification.Chunshui Cao, Xianming Liu, Yi Yang, Yinan Yu, Jiang Wang, Zilei Wang, Yongzhen Huang, LiangWang, Chang Huang, Wei Xu, et al. Look and think twice: Capturing top-down visual attentionwith feedback convolutional neural networks. In International Conference on Computer Vision(ICCV) , 2015.David L Donoho. Compressed sensing. IEEE Transactions on information theory , 2006.Michael Elad and Michal Aharon. Image denoising via sparse and redundant representations overlearned dictionaries. IEEE Transactions on Image processing , 2006.Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron C Courville, and Yoshua Bengio. Max-out networks. In International conference on Machine learning (ICML) , 2013.Karol Gregor and Yann LeCun. Learning fast approximations of sparse coding. In InternationalConference on Machine Learning (ICML) , 2010.Felix Heide, Wolfgang Heidrich, and Gordon Wetzstein. Fast and flexible convolutional sparsecoding. In Computer Vision and Pattern Recognition (CVPR) , 2015.Zhengping Ji, Wentao Huang, G. Kenyon, and L.M.A. Bettencourt. Hierarchical discriminativesparse coding via bidirectional connections. In International Joint Converence on Neural Net-works (IJCNN) , 2011.Zhuolin Jiang, Zhe Lin, and Larry S Davis. Learning a discriminative dictionary for sparse codingvia label consistent K-SVD. In Computer Vision and Pattern Recognition (CVPR) , 2011.Koray Kavukcuoglu, Pierre Sermanet, Y-Lan Boureau, Karol Gregor, Micha ̈el Mathieu, and Yann LCun. Learning convolutional feature hierarchies for visual recognition. In Advances in neuralinformation processing systems (NIPS) , 2010.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980 , 2014.Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.Hugo Larochelle and Yoshua Bengio. Classification using discriminative restricted boltzmann ma-chines. In International conference on Machine learning (ICML) , 2008.10Under review as a conference paper at ICLR 2017Yann LeCun, Sumit Chopra, Raia Hadsell, M Ranzato, and F Huang. A tutorial on energy-basedlearning. 
Predicting structured data , 2006.Xin Li and Yuhong Guo. Bi-directional representation learning for multi-label classification. InJoint European Conference on Machine Learning and Knowledge Discovery in Databases (ECMLKDD) . 2014.Bruno A Olshausen and David J Field. Sparse coding with an overcomplete basis set: A strategyemployed by v1? Vision research , 1997.Xiaofeng Ren and Deva Ramanan. Histograms of sparse codes for object detection. In ComputerVision and Pattern Recognition (CVPR) , 2013.Christopher J Rozell, Don H Johnson, Richard G Baraniuk, and Bruno A Olshausen. Sparse codingvia thresholding and local competition in neural circuits. Neural computation , 2008.Wenling Shang, Kihyuk Sohn, Diogo Almeida, and Honglak Lee. Understanding and improvingconvolutional neural networks via concatenated rectified linear units. In International conferenceon Machine learning (ICML) , 2016.J Springenberg, Alexey Dosovitskiy, Thomas Brox, and M Riedmiller. Striving for simplicity: Theall convolutional net. In International conference on Learning Representations (ICLR) (workshoptrack) , 2015.Mark Tygert, Arthur Szlam, Soumith Chintala, Marc’Aurelio Ranzato, Yuandong Tian, and Woj-ciech Zaremba. Convolutional networks and learning invariant to homogeneous multiplicativescalings. arXiv preprint arXiv:1506.08230 , 2015.A. Vedaldi and K. Lenc. Matconvnet – convolutional neural networks for matlab. In ACM Interna-tional Conference on Multimedia , 2015.John Wright, Allen Y Yang, Arvind Ganesh, S Shankar Sastry, and Yi Ma. Robust face recogni-tion via sparse representation. IEEE transactions on pattern analysis and machine intelligence(TPAMI) , 2009.Allen Y Yang, Zihan Zhou, Arvind Ganesh Balasubramanian, S Shankar Sastry, and Yi Ma. Fast-minimization algorithms for robust face recognition. IEEE Transactions on Image Processing ,2013.Jianchao Yang, Kai Yu, and Thomas Huang. Supervised translation-invariant sparse coding. InComputer Vision and Pattern Recognition (CVPR) , 2010.Matthew D. Zeiler, Dilip Krishnan, Graham W. Taylor, and Robert Fergus. Deconvolutional net-works. In Computer Vision and Pattern Recognition (CVPR) , 2010.Yangmuzi Zhang, Zhuolin Jiang, and Larry S Davis. Discriminative tensor sparse coding for imageclassification. In British Machine Vision Conference (BMVC) , 2013.Ning Zhou, Yi Shen, Jinye Peng, and Jianping Fan. Learning inter-related visual dictionary forobject recognition. In Computer Vision and Pattern Recognition (CVPR) , 2012.11Under review as a conference paper at ICLR 2017APPENDIX AHere we show that spherical sparse coding (SSC) with a norm constraint on the reconstruction isequivalent to standard convolutional sparse coding (CSC). 
Expanding the least squares reconstruc-tion error and dropping the constant term kxk2gives the CSC problem:maxz2x|KXk=1dkzkkKXk=1dkzkk22KXk=1kzkk1:Let=kPKk=1dkzkk2be the norm of the reconstruction for some code zand let ube thereconstruction scaled to have unit norm so that:u=PKk=1dkzkkPKk=1dkzkk2=KXk=1dkzkwith z=1zWe rewrite the least-squares objective in terms of these new variables:maxz;>0g(z;) = maxz;>02x|ukuk22kzk1= maxz;>02x|u2kzk12Taking the derivative of gw.r.t.yields the optimal scaling as a function of z:(z)=x|u2kzk1:Plugging(z)back intogyields:maxz;>0g(z;) = maxz;kuk2=1x|u2kzk12:Discarding solutions with <0can be achieved by simply dropping the square which results in thefinal constrained problem:arg maxzx|KXk=1dkzk2KXk=1kzkk1s.t.kKXk=1dkzkk21:APPENDIX BWe show in this section that coding in the EB-SSC model can be solved efficiently by a combinationof convolution, shrinkage and projection, steps which can be implemented with standard librarieson a GPU. For convenience, we first rewrite the objective in terms of cross-correlation rather thanconvolution (i.e., , x|(dkzk) = (dk?x)|zk). For ease of understanding, we first consider thecoding problem when there is no classification term.z= arg maxkzk221v|zkzk1;where v= [(d1?x)|;:::; (dK?x)|]|. Pulling the constraint into the objective, we get its La-grangian function:L(z;) =v|zkzk1+1kzk22:From the partial subderivative of the Lagrangian w.r.t. ziwe derive the optimal solution as a functionof; and from that find the conditions in which the solutions hold, giving us:zi()=12(vi v i>0 otherwisevi+ v i<: (17)12Under review as a conference paper at ICLR 2017This can also be compactly written as:z()=12~z; (18)~z=s2vswhere s= sign( z)2 f 1;0;1gjzjands2=ss2 f0;1gjzj. The sign vector of zcanbe determined without knowing , asis a Lagrangian multiplier for an inequality it must be non-negative and therefore does not change the sign of the optimal solution. Lastly, we define the squared`2-norm of ~z, a result that will be used later:k~zk22=~z|(s2v)~z|s=~z|vk~zk1: (19)Substituting z()back into the Lagrangian we get:L(z();) =12v|~z2k~zk1+1142k~zk22;and the derivative w.r.t. is:@L(z()@=122v|~z+22k~zk1+ 1 +142k~zk22:Setting the derivative equal to zero and using the result from Eq. 19, we can find the optimal solutionto:2=12~z|v2k~zk114k~zk22=12k~zk2214k~zk22=)=12k~zk2:Finally, plugging into Eq. 18 we find the optimal solutionz=~zk~zk2: (20)13
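The closed form derived in Appendix B (elementwise soft-threshold, then unit-normalize) is easy to sanity-check numerically. The snippet below (ours, illustrative only) verifies on random data that no feasible code beats the Eq. (20) solution of max_{||z||_2 <= 1} v'z - lambda*||z||_1:

import numpy as np

rng = np.random.default_rng(0)
v, lam = rng.normal(size=50), 0.5

def objective(z):
    return v @ z - lam * np.abs(z).sum()

# Eqs. (17)-(20): soft-threshold v by lam, then project onto the unit sphere.
z_tilde = np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
z_star = z_tilde / np.linalg.norm(z_tilde)

for _ in range(1000):
    z = rng.normal(size=50)
    z /= max(np.linalg.norm(z), 1.0)  # keep the random sample inside the unit ball
    assert objective(z) <= objective(z_star) + 1e-9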
B1W_nBOEl
HkNEuToge
ICLR.cc/2017/conference/-/paper609/official/review
{"title": "review", "rating": "5: Marginally below acceptance threshold", "review": "The paper introduces an efficient variant of sparse coding and uses it as a building block in CNNs for image classification. The coding method incorporates both the input signal reconstruction objective as well as top down information from a class label. The proposed block is evaluated against the recently proposed CReLU activation block.\n\nPositives:\nThe proposed method seems technically sound, and it introduces a new way to efficiently train a CNN layer-wise by combining reconstruction and discriminative objectives.\n\nNegatives:\nThe performance gain (in terms of classification accuracy) over the previous state-of-the-art is not clear. Using only one dataset (CIFAR-10), the proposed method performs slightly better than the CRelu baseline, but the improvement is quite small (0.5% in the test set). \n\nThe paper can be strengthened if the authors can demonstrate that the proposed method can be generally applicable to various CNN architectures and datasets with clear and consistent performance gains over strong CNN baselines. Without such results, the practical significance of this work seems unclear.\n\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Energy-Based Spherical Sparse Coding
["Bailey Kong", "Charless C. Fowlkes"]
In this paper, we explore an efficient variant of convolutional sparse coding with unit norm code vectors and reconstructions are evaluated using an inner product (cosine distance). To use these codes for discriminative classification, we describe a model we term Energy-Based Spherical Sparse Coding (EB-SSC) in which the hypothesized class label introduces a learned linear bias into the coding step. We evaluate and visualize performance of stacking this encoder to make a deep layered model for image classification.
["spherical sparse coding", "spherical sparse", "efficient variant", "convolutional sparse", "reconstructions", "inner product", "cosine distance", "codes", "discriminative classification"]
https://openreview.net/forum?id=HkNEuToge
https://openreview.net/pdf?id=HkNEuToge
https://openreview.net/forum?id=HkNEuToge&noteId=B1W_nBOEl
Under review as a conference paper at ICLR 2017ENERGY -BASED SPHERICAL SPARSE CODINGBailey Kong and Charless C. FowlkesDepartment of Computer ScienceUniversity of California, IrvineIrvine, CA 92697 USAfbhkong,fowlkes g@ics.uci.eduABSTRACTIn this paper, we explore an efficient variant of convolutional sparse coding withunit norm code vectors where reconstruction quality is evaluated using an innerproduct (cosine distance). To use these codes for discriminative classification, wedescribe a model we term Energy-Based Spherical Sparse Coding (EB-SSC) inwhich the hypothesized class label introduces a learned linear bias into the codingstep. We evaluate and visualize performance of stacking this encoder to make adeep layered model for image classification.1 I NTRODUCTIONSparse coding has been widely studied as a representation for images, audio and other vectorial data.This has been a highly successful method that has found its way into many applications, from signalcompression and denoising (Donoho, 2006; Elad & Aharon, 2006) to image classification (Wrightet al., 2009), to modeling neuronal receptive fields in visual cortex (Olshausen & Field, 1997). Sinceits introduction, subsequent works have brought sparse coding into the supervised learning settingby introducing classification loss terms to the original formulation to encourage features that are notonly able to reconstruct the original signal but are also discriminative (Jiang et al., 2011; Yang et al.,2010; Zeiler et al., 2010; Ji et al., 2011; Zhou et al., 2012; Zhang et al., 2013).While supervised sparse coding methods have been shown to find more discriminative features lead-ing to improved classification performance over their unsupervised counterparts, they have receivedmuch less attention in recent years and have been eclipsed by simpler feed-forward architectures.This is in part because sparse coding is computationally expensive. Convex formulations of sparsecoding typically consist of a minimization problem over an objective that includes a least-squares(LSQ) reconstruction error term plus a sparsity inducing regularizer.Because there is no closed-form solution to this formulation, various iterative optimization tech-niques are generally used to find a solution (Zeiler et al., 2010; Bristow et al., 2013; Yang et al.,2013; Heide et al., 2015). In applications where an approximate solution suffices, there is workthat learns non-linear predictors to estimate sparse codes rather than solve the objective more di-rectly (Gregor & LeCun, 2010). The computational overhead for iterative schemes becomes quitesignificant when training discriminative models due to the demand of processing many training ex-amples necessary for good performance, and so sparse coding has fallen out of favor by not beingable to keep up with simpler non-iterative coding methods.In this paper we introduce an alternate formulation of sparse coding using unit length codes anda reconstruction loss based on the cosine similarity. Optimal sparse codes in this model can becomputed in a non-iterative fashion and the coding objective lends itself naturally to embedding ina discriminative, energy-based classifier which we term energy-based spherical sparse coding (EB-SSC) . This bi-directional coding method incorporates both top-down and bottom-up informationwhere the features representation depends on both a hypothesized class label and the input signal.Like Cao et al. 
(2015), our motivation for bi-directional coding comes from the “Biased CompetitionTheory”, which suggests that visual processing can be biased by other mental processes (e.g., top-down influence) to prioritize certain features that are most relevant to current task. Fig. 1 illustratesthe flow of computation used by our SSC and EB-SSC building blocks compared to a standardfeed-forward layer.1Under review as a conference paper at ICLR 2017Our energy based approach for combining top-down and bottom-up information is closely tied tothe ideas of Larochelle & Bengio (2008); Ji et al. (2011); Zhang et al. (2013); Li & Guo (2014)—although the model details are substantially different (e.g., Ji et al. (2011) and Zhang et al. (2013)use sigmoid non-linearities while Li & Guo (2014) use separate representations for top-down andbottom-up information). The energy function of Larochelle & Bengio (2008) is also similar butincludes an extra classification term and is trained as a restricted Boltzmann machine.ReLUNeg. ReLUConcatenation Convolution x(a) CReLUReLUNeg. ReLUConcatenation Normalization Convolution x (b) SSCyReLUNeg. ReLUNeg. Class BiasPos. Class BiasNormalization Concatenation Convolution x (c) EB-SSCFigure 1: Building blocks for coding networks explored in this paper. Our coding model usesnon-linearities that are closely related to the standard ReLU activation function. (a) Keeping bothpositive and negative activations provides a baseline feed-forward model termed concatenated ReLU(CReLU). (b) Our spherical sparse coding layer has a similar structure but with an extra bias andnormalization step. Our proposed energy-based model uses (c) energy-based spherical sparse coding(EB-SSC) blocks that produces sparse activations which are not only positive and negative, but areclass-specific. These blocks can be stacked to build deeper architectures.1.1 N OTATIONMatrices are denoted as uppercase bold (e.g., A), vectors are lowercase bold (e.g., a), and scalarsare lowercase (e.g., a). We denote the transpose operator with|, the element-wise multiplicationoperator with, the convolution operator with , and the cross-correlation operator with ?. For vec-tors where we dropped the subscript k(e.g.,dandz), we refer to a super vector with Kcomponentsstacked together (e.g., z= [z|1;:::;z|K]|).2 E NERGY -BASED SPHERICAL SPARSE CODINGEnergy-based models capture dependencies between variables using an energy function that measurethe compatibility of the configuration of variables (LeCun et al., 2006). To measure the compatibilitybetween the top-down and bottom-up information, we define the energy function of EB-SSC to bethe sum of bottom-up coding term and a top-down classification term:E(x;y;z) =Ecode(x;z) +Eclass(y;z): (1)The bottom-up information (input signal x) and the top-down information (class label y) are tiedtogether by a latent feature map z.2.1 B OTTOM -UPRECONSTRUCTIONTo measure the compatibility between the input signal xand the latent feature maps z, we introducea novel variant of sparse coding that is amenable to efficient feed-forward optimization. While theidea behind this variant can be applied to either patch-based or convolutional sparse coding, wespecifically use the convolutional variant that shares the burden of coding an image among nearbyoverlapping dictionary elements. Using such a shift-invariant approach avoids the need to learn dic-tionary elements which are simply translated copies of each other, freeing up resources to discovermore diverse and specific filters (see Kavukcuoglu et al. 
(2010)).2Under review as a conference paper at ICLR 2017Convolutional sparse coding (CSC) attempts to find a set of dictionary elements fd1;:::;dKgandcorresponding sparse codes fz1;:::;zKgso that the resulting reconstruction, r=PKk=1dkzkaccurately represents the input signal x. This is traditionally framed as a least-squares minimizationwith a sparsity inducing prior on z:arg minzkxKXk=1dkzkk22+kzk1: (2)Unlike standard feed-forward CNN models that convolve the input signal xwith the filters, thisenergy function corresponds to a generative model where the latent feature maps fz1;:::;zKgareconvolved with the filters and compared to the input signal (Bristow et al., 2013; Heide et al., 2015;Zeiler et al., 2010).To motivate our novel variant of CSC, consider expanding the squared reconstruction error kxrk22=kxk222x|r+krk22. If we constrain the reconstruction rto have unit norm, the recon-struction error depends entirely on the inner product between xandrand is equivalent to the cosinesimilarity (up to additive and multiplicative constants). This suggests the closely related unit-lengthreconstruction problem:arg maxzx|KXk=1dkzkkzk1 (3)s.t.KXk=1dkzk21In Appendix A we show that, given an optimal unit length reconstruction rwith correspondingcodes z, the solution to the least squares reconstruction problem (Eq. 2) can be computed by asimple scaling r= (x|r2kzk1)r.The unit-length reconstruction problem is no easier than the original least-squares optimization dueto the constraint on the reconstruction which couples the codes for different filters. Instead considera simplified constraint on zwhich we refer to as spherical sparse coding (SSC) :arg maxkzk21Ecode(x;z) = arg maxkzk21x|KXk=1dkzkkzk1: (4)In 2.3 below, we show that the solution to this problem can be found very efficiently without requir-ing iterative optimization.This problem is a relaxation of convolutional sparse coding since it ignores non-orthogonal inter-actions between the dictionary elements1. Alternately, assuming unit norm dictionary elements, thecode norm constraint can be used to upper-bound the reconstruction length. We have by the triangleand Young’s inequality that:Xkdkzk2Xkkdkzkk2Xkkdkk1kzkk1DXkkzkk2 (5)where the factor Dis the dimension of zkand arises from switching from the 1-norm to the 2-norm.SinceDPkkzkk21is a tighter constraint we havemaxkPkdkzkk21Ecode(x;z) maxPkkzkk21DEcode(x;z) (6)However, this relaxation is very loose, primarily due to the triangle inequality. Except in specialcases (e.g., if the dictionary elements have disjoint spectra) the SSC codes will be quite differentfrom the standard least-squares reconstruction.1We note that our formulation is also closely related to the dynamical model suggested by Rozell et al.(2008), but without the dictionary-dependent lateral inhibition between feature maps. Lateral inhibition cansolve the unit-length reconstruction formulation of standard sparse coding but requires iterative optimization.3Under review as a conference paper at ICLR 20172.2 T OP-DOWN CLASSIFICATIONTo measure the compatibility between the class label yand the latent feature maps z, we use a setof one-vs-all linear classifiers. To provide more flexibility, we generalize this by splitting the codevector into positive and negative components:zk=z+k+zkz+k0zk0and allow the linear classifier to operate on each component separately. 
We express the classifierscore for a hypothesized class label yby:Eclass(y;z) =KXk=1w+|yz+k+KXk=1w|yzk: (7)The classifier thus is parameterized by a pair of weight vectors ( w+ykandwyk) for each class labelyandk-th channel of the latent feature map.This splitting, sometimes referred to as full-wave rectification, is useful since a dictionary elementand its negative do not necessarily have opposite visual semantics. This splitting also allows theclassifier the flexibility to assign distinct meanings or alternately be completely invariant to contrastreversal depending on the problem domain. For example, Shang et al. (2016) found CNN modelswith ReLU non-linearities which discard the negative activations tend to learn pairs of filters whichare related by negation. Keeping both positive and negative responses allowed them to halve thenumber of dictionary elements.We note that it is also straightforward to introduce spatial average pooling prior to classification byintroducing a fixed linear operator Pused to pool the codes (e.g., w+|yPz+k). This is motivated bya variety of hand-engineered feature extractors and sparse coding models, such as Ren & Ramanan(2013), which use spatially pooled histograms of sparse codes for classification. This fixed poolingcan be viewed as a form of regularization on the linear classifier which enforces shared weights overspatial blocks of the latent feature map. Splitting is also quite important to prevent information losswhen performing additive pooling since positive and negative components of zkcan cancel eachother out.2.3 C ODINGBottom-up reconstruction and top-down classification each provide half of the story, coupled by thelatent feature maps. For a given input xand hypothesized class y, we would like to find the optimalactivations zthat maximize the joint energy function E(x;y;z). This requires solving the followingoptimization:arg maxkzk21x|KXk=1dkzkkzk1+KXk=1w+|ykz+k+KXk=1w|ykzk; (8)where x2RDis an image and y2Y is a class hypothesis. zk2RFis thek-th componentlatent variable being inferred; z+kandzkare the positive and negative coefficients of zk, such thatzk=z+k+zk. The parameters dk2RM,w+yk2RF, andwyk2RFare the dictionary filter,positive coefficient classifier, and negative coefficient classifier for the k-th component respectively.A key aspect of our formulation is that the optimal codes can be found very efficiently in closed-form—in a feed-forward manner (see Appendix B for a detailed argument).2.3.1 A SYMMETRIC SHRINKAGETo describe the coding processes, let us first define a generalized version of the shrinkage functioncommonly used in sparse coding. Our asymmetric shrinkage is parameterized by upper and lowerthresholds+shrink (+;)(v) =8<:v+ifv+>00 otherwisev+ifv+<0(9)4Under review as a conference paper at ICLR 2017(a)0+(b)0 +(c)+0 (d)0 +Figure 2: Comparing the behavior of asymmetric shrinkage for different settings of +and.(a)-(c) satisfy the condition that +while (d) does not.Fig. 2 shows a visualization of this function which generalizes the standard shrinkage proximaloperator by allowing for the positive and negative thresholds. In particular, it corresponds to theproximal operator for a version of the `1-norm that penalizes the positive and negative componentswith different weights jvjasym =+kv+k1+kvk1. The standard shrink operator correspondstoshrink (;)(v)while the rectified linear unit common in CNNs is given by a limiting caseshrink (0;1)(v). We note that+is required for shrink (+;)to be a proper function(see Fig. 
2).2.3.2 F EED-FORWARD CODINGWe now describe how codes can be computed in a simple feed-forward pass. Let+yk=w+yk;yk=wyk(10)be vectors of positive and negative biases whose entries are associated with a spatial location in thefeature map kfor classy. The optimal code zcan be computed in three sequential steps:1. Cross-correlate the data with the filterbank dk?x2. Apply an asymmetric version of the standard shrinkage operator~zk= shrink(+yk;yk)(dk?x) (11)where, with abuse of notation, we allow the shrinkage function (Eq. 9) to apply entriesin the vectors of threshold parameter pairs +yk;ykto the corresponding elements of theargument.3. Project onto the feasible set of unit length codesz=~zk~zk2: (12)2.3.3 R ELATIONSHIP TO CNN S:We note that this formulation of coding has a close connection to single layer convolutional neuralnetwork (CNN). A typical CNN layer consists of convolution with a filterbank followed by a non-linear activation such as a rectified linear unit (ReLU). ReLUs can be viewed as another way ofinducing sparsity, but rather than coring the values around zero like the shrink function, ReLUtruncates negative values. On the other hand, the asymmetric shrink function can be viewed as thesum of two ReLUs applied to appropriately biased inputs:shrink (+;)(x) = ReLU(x+)ReLU((x+));SSC coding can thus be seen as a CNN in which the ReLU activation has been replaced with shrink-age followed by a global normalization.5Under review as a conference paper at ICLR 20173 L EARNINGWe formulate supervised learning using the softmax log-loss that maximizes the energy for the trueclass labelyiwhile minimizing energy of incorrect labels y.arg mind;w+;w;02(kw+k22+kwk22+kdk22)+1NNXi=1[maxkzk21E(xi;yi;z) + logXy2Ymaxkzk21eE(xi;y;z)]s.t.(wyk)(w+yk)8y;k; (13)whereis the hyperparameter regularizing w+y,wy, andd. We constrain the relationship betweenand the entries of w+yandwyin order for the asymmetric shrinkage to be a proper function (seeSec. 2.3.1 and Appendix B for details).In classical sparse coding, it is typical to constrain the `2-norm of each dictionary filter to unit length.Our spherical coding objective behaves similarly. For any optimal code z, there is a 1-dimensionalsubspace of parameters for which zis optimal given by scaling dinversely to w,. For simplicityof the implementation, we opt to regularize dto assure a unique solution. However, as Tygert et al.(2015) point out, it may be advantageous from the perspective of optimization to explicitly constrainthe norm of the filter bank.Note that unlike classical sparse coding, where is a hyperparameter that is usually set using cross-validation, we treat it as a parameter of the model that is learned to maximize performance.3.1 O PTIMIZATIONIn order to solve Eq. 13, we explicitly formulate our model as a directed-acyclic-graph (DAG) neuralnetwork with shared weights, where the forward-pass computes the sparse code vectors and thebackward-pass updates the parameter weights. We optimize the objective using stochastic gradientdescent (SGD).As mentioned in Sec. 2.3 shrinkage function is assymetric with parameters +ykorykas definedin Eq. 10. However, the inequality constraint on their relationship to keep the shrinkage function aproper function is difficult to enforce when optimizing with SGD. Instead, we introduce a centraloffset parameter and reduce the ordering constraint to pair of positivity constraints. Let^w+yk=+ykbk ^wyk=yk+bk (14)be the modified linear “classifiers” relative to the central offset bk. 
It is straightforward to see thatif+ykandykthat satisfy the constrain in Eq. 13, then adding the same value to both sides ofthe inequality will not change that. However, taking bkto be a midpoint between them, then both+ykbkandyk+bkwill be strictly non-negative.Using this variable substitution, we rewrite the energy function (Eq. 1) asE0(x;y;z) =x|KXk=1dkzk+KXk=1bk1|zkKXk=1^w+|ykz+k+KXk=1^w|ykzk: (15)where bis constant offset for each code channel. The modified linear “classification” terms nowtake on a dual role of inducing sparsity and measuring the compatibility between zandy.This yields a modified learning objective that can easily be solved with existing implementations forlearning convolutional neural nets:arg mind;^w+;^w;b2(k^w+k22+k^wk22+kdk22)+1NNXi=1[maxkzk21E0(xi;yi;z) + logXy2Ymaxkzk21eE0(xi;y;z)]s.t.^w+yk;^wyk08y;k; (16)6Under review as a conference paper at ICLR 2017where ^w+and^ware the new sparsity inducing classifiers, and bare the arbitrary origin points. Inparticular, adding the Korigin points allows us to enforce the constraint by simply projecting ^w+and^wonto the positive orthant during SGD.3.1.1 S TACKING BLOCKSWe also examine stacking multiple blocks of our energy function in order to build a hierarchicalrepresentation. As mentioned in Sec. 3.1.1, the optimal codes can be computed in a simple feed-forward pass—this applies to shallow versions of our model. When stacking multiple blocks of ourenergy-based model, solving for the optimal codes cannot be done in a feed-forward pass since thecodes for different blocks are coupled (bilinearly) in the joint objective. Instead, we can proceedin an iterative manner, performing block-coordinate descent by repeatedly passing up and down thehierarchy updating the codes. In this section we investigate the trade-off between the number ofpasses used to find the optimal codes for the stacked model and classification performance.For this purpose, we train multiple instances of a 2-block version of our energy-based model thatdiffer in the number of iterations used when solving for the codes. For recurrent networks such asthis, inference is commonly implemented by “unrolling” the network, where the parts of the net-work structure are repeated with parameters shared across these repeated parts to mimic an iterativealgorithm that stops at a fixed number of iterations rather than at some convergence criteria.0 10 20 30 40 50epoch10-310-210-1100101train objective (log-scale)not unrolledunrolled 1unrolled 2unrolled 3unrolled 4(a) Train Objective0 10 20 30 40 50epoch00.020.040.060.080.10.12test errornot unrolledunrolled 1unrolled 2unrolled 3unrolled 4 (b) Test ErrorFigure 3: Comparing the effects of unrolling a 2-block version of our energy-based model. (Bestviewed in color.)In Fig. 3, we compare the performance between models that were unrolled zero to four times. Wesee that there is a difference in performance based on how many sweeps of the variables are made.In terms of the training objective, more unrolling produces models that have lower objective valueswith convergence after only a few passes. In terms of testing error, however, we see that full codeinference is not necessarily better, as unrolling once or twice has lower errors than unrolling threeor four times. The biggest difference was between not unrolling and unrolling once, where both thetraining objective and testing error goes down. 
The testing error decreases from 0.0131 to 0.0074.While there is a clear benefit in terms of performance for unrolling at least once, there is also atrade-off between performance and computational resource, especially for deeper models.4 E XPERIMENTSWe evaluate the benefits of combining top-down and bottom-up information to produce class-specific features on the CIFAR-10 (Krizhevsky & Hinton, 2009) dataset using a deep version ofour EB-SSC. All experiments were performed using MatConvNet (Vedaldi & Lenc, 2015) frame-work with the ADAM optimizer (Kingma & Ba, 2014). The data was preprocessed and augmentedfollowing the procedure in Goodfellow et al. (2013). Specifically, the data was made zero mean andwhitened, augmented with horizontal flips (with a 0.5 probability) and random cropping. No weightdecay was used, but we used a dropout rate of 0:3before every convolution layer except for the first.For these experiments we consider a single forward pass (no unrolling).7Under review as a conference paper at ICLR 2017Base Networkblock kernel, stride, padding activationconv1 33396;1;1 ReLU/CReLUconv2 3396=19296;1;1 ReLU/CReLUpool1 33;2;1 maxconv3 3396=192192;1;1 ReLU/CReLUconv4 33192=384192;1;1ReLU/CReLUconv5 33192=384192;1;1ReLU/CReLUpool2 33;2;1 maxconv6 33192=384192;1;1ReLU/CReLUconv7 11192=384192;1;1ReLU/CReLUTable 1: Underlying block architecture common across all models we evaluated. SSC networksadd an extra normalization layer after the non-linearity. And EB-SSC networks insert class-specificbias layers between the convolution layer and the non-linearity. Concatenated ReLU (CReLU) splitspositive and negative activations into two separate channels rather than discarding the negative com-ponent as in the standard ReLU.4.1 C LASSIFICATIONWe compare our proposed EB-SSC model to that of Springenberg et al. (2015), which uses rectifiedlinear units (ReLU) as its non-linearity. This model can be viewed as a basic feed-forward versionof our proposed model which we take as a baseline. We also consider variants of the baseline modelthat utilize a subset of architectural features of our proposed model (e.g., concatenated rectifiedlinear units (CReLU) and spherical normalization (SN)) to understand how subtle design changes ofthe network architecture affects performance.We describe the model architecture in terms of the feature extractor and classifier. Table 1 shows theoverall network architecture of feature extractors, which consist of seven convolution blocks and twopooling layers. We test two possible classifiers: a simple linear classifier (LC) and our energy-basedclassifier (EBC), and use softmax-loss for all models. For linear classifiers, a numerical subscriptindicates which of the seven conv blocks of the feature extractor is used for classification (e.g., LC 7indicates the activations out of the last conv block is fed into the linear classifier). For energy-basedclassifiers, a numerical subscript indicates which conv blocks of the feature extractor are replacewith a energy-based classifier (e.g., EBC 67indicates the activations out of conv5 is fed into theenergy-based classifier and the energy-based classifier has a similar architecture to the conv blocksit replaces). The notation differ because for energy-based classifiers, the optimal activations are afunction of the hypothesized class label, whereas for linear classifiers, they are not.Model Train Err. (%) Test Err. 
(%) # paramsReLU+LC 7 1.20 11.40 1.3MCReLU+LC 7 2.09 10.17 2.6MCReLU(SN)+LC 7 0.99 9.74 2.6MSSC+LC 7 0.99 9.77 2.6MSSC+EBC 67 0.21 9.23 3.2MTable 2: Comparison of the baseline ReLU+LC 7model, its derivative models, and our proposedmodel on CIFAR-10.The results shown in Table 2 compare our proposed model to the baselines ReLU+LC 7(Springen-berg et al., 2015) and CReLU+LC 7(Shang et al., 2016), and to intermediate variants. The base-line models all perform very similarly with some small reductions in error rates over the baselineCReLU+LC 7. However, CReLU+LC 7reduces the error rate over ReLU+LC 7by more than onepercent (from 11.40% to 10.17%), which confirms the claims by Shang et al. (2016) and demon-strates the benefits of splitting positive and negative activations. Likewise, we see further decreasein the error rate (to 9.74%) from using spherical normalization. Though normalizing the activationsdoesn’t add any capacity to the model, this improved performance is likely because scale-invariantactivations makes training easier. On the other hand, further sparsifying the activations yielded no8Under review as a conference paper at ICLR 2017benefit. We tested values =f0:001;0:01gand found 0:001to perform better. Replacing the linearclassifier with our energy-based classifier further decreases the error rate by another half percent (to9.23%).4.2 D ECODING CLASS -SPECIFIC CODESA unique aspect of our model is that it is generative in the sense that each layer is explicitly trying toencode the activation pattern in the prior layer. Similar to the work on deconvolutional networks builton least-squares sparse coding (Zeiler et al., 2010), we can synthesize input images from activationsin our spherical coding network by performing repeated deconvolutions (transposed convolutions)back through the network. Since our model is energy based, we can further examine how the top-down information of a hypothesized class effects the intermediate activations.Figure 4: The reconstruction of an airplane image from different levels of the network (rows) acrossdifferent hypothesized class labels (columns). The first column is pure reconstruction, i.e., unbiasedby a hypothesized class label, the remaining columns show reconstructions of the learned class biasat each layer for one of ten possible CIFAR-10 class labels. (Best viewed in color.)The first column in Fig. 4 visualizes reconstructions of a given input image based on activationsfrom different layers of the model by convolution transpose. In this case we put in zeros for classbiases (i.e., no top-down) and are able to recover high fidelity reconstructions of the input. In theremaining columns, we use the same deconvolution pass to construct input space representations ofthe learned classifier biases. At low levels of the feature hierarchy, these biases are spatially smoothsince the receptive fields are small and there is little spatial invariance capture in the activations. Athigher levels these class-conditional bias fields become more tightly localized.Finally, in Fig. 5 we shows decodings from the conv2 and conv5 layer of the EB-SSC model for agiven input under different class hypotheses. 
Here we subtract out the contribution of the top-downbias term in order to isolate the effect of the class conditioning on the encoding of input features.As visible in the figure, the modulation of the activations focused around particular regions of theimage and the differences across class hypotheses becomes more pronounced at higher layers of thenetwork.5 C ONCLUSIONWe presented an energy-based sparse coding method that efficiently combines cosine similarity,convolutional sparse coding, and linear classification. Our model shows a clear mathematical con-nection between the activation functions used in CNNs to introduce sparsity and our cosine similar-ity convolutional sparse coding formulation. Our proposed model outperforms the baseline modeland we show which attributes of our model contributes most to the increase in performance. Wealso demonstrate that our proposed model provides an interesting framework to probe the effects ofclass-specific coding.REFERENCESHilton Bristow, Anders Eriksson, and Simon Lucey. Fast convolutional sparse coding. In ComputerVision and Pattern Recognition (CVPR) , 2013.9Under review as a conference paper at ICLR 2017(a) conv2 (b) conv5Figure 5: Visualizing the reconstruction of different input images (rows) for each of 10 differentclass hypotheses (cols) from the 2nd and 5th block activations for a model trained on MNIST digitclassification.Chunshui Cao, Xianming Liu, Yi Yang, Yinan Yu, Jiang Wang, Zilei Wang, Yongzhen Huang, LiangWang, Chang Huang, Wei Xu, et al. Look and think twice: Capturing top-down visual attentionwith feedback convolutional neural networks. In International Conference on Computer Vision(ICCV) , 2015.David L Donoho. Compressed sensing. IEEE Transactions on information theory , 2006.Michael Elad and Michal Aharon. Image denoising via sparse and redundant representations overlearned dictionaries. IEEE Transactions on Image processing , 2006.Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron C Courville, and Yoshua Bengio. Max-out networks. In International conference on Machine learning (ICML) , 2013.Karol Gregor and Yann LeCun. Learning fast approximations of sparse coding. In InternationalConference on Machine Learning (ICML) , 2010.Felix Heide, Wolfgang Heidrich, and Gordon Wetzstein. Fast and flexible convolutional sparsecoding. In Computer Vision and Pattern Recognition (CVPR) , 2015.Zhengping Ji, Wentao Huang, G. Kenyon, and L.M.A. Bettencourt. Hierarchical discriminativesparse coding via bidirectional connections. In International Joint Converence on Neural Net-works (IJCNN) , 2011.Zhuolin Jiang, Zhe Lin, and Larry S Davis. Learning a discriminative dictionary for sparse codingvia label consistent K-SVD. In Computer Vision and Pattern Recognition (CVPR) , 2011.Koray Kavukcuoglu, Pierre Sermanet, Y-Lan Boureau, Karol Gregor, Micha ̈el Mathieu, and Yann LCun. Learning convolutional feature hierarchies for visual recognition. In Advances in neuralinformation processing systems (NIPS) , 2010.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980 , 2014.Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.Hugo Larochelle and Yoshua Bengio. Classification using discriminative restricted boltzmann ma-chines. In International conference on Machine learning (ICML) , 2008.10Under review as a conference paper at ICLR 2017Yann LeCun, Sumit Chopra, Raia Hadsell, M Ranzato, and F Huang. A tutorial on energy-basedlearning. 
Predicting structured data , 2006.Xin Li and Yuhong Guo. Bi-directional representation learning for multi-label classification. InJoint European Conference on Machine Learning and Knowledge Discovery in Databases (ECMLKDD) . 2014.Bruno A Olshausen and David J Field. Sparse coding with an overcomplete basis set: A strategyemployed by v1? Vision research , 1997.Xiaofeng Ren and Deva Ramanan. Histograms of sparse codes for object detection. In ComputerVision and Pattern Recognition (CVPR) , 2013.Christopher J Rozell, Don H Johnson, Richard G Baraniuk, and Bruno A Olshausen. Sparse codingvia thresholding and local competition in neural circuits. Neural computation , 2008.Wenling Shang, Kihyuk Sohn, Diogo Almeida, and Honglak Lee. Understanding and improvingconvolutional neural networks via concatenated rectified linear units. In International conferenceon Machine learning (ICML) , 2016.J Springenberg, Alexey Dosovitskiy, Thomas Brox, and M Riedmiller. Striving for simplicity: Theall convolutional net. In International conference on Learning Representations (ICLR) (workshoptrack) , 2015.Mark Tygert, Arthur Szlam, Soumith Chintala, Marc’Aurelio Ranzato, Yuandong Tian, and Woj-ciech Zaremba. Convolutional networks and learning invariant to homogeneous multiplicativescalings. arXiv preprint arXiv:1506.08230 , 2015.A. Vedaldi and K. Lenc. Matconvnet – convolutional neural networks for matlab. In ACM Interna-tional Conference on Multimedia , 2015.John Wright, Allen Y Yang, Arvind Ganesh, S Shankar Sastry, and Yi Ma. Robust face recogni-tion via sparse representation. IEEE transactions on pattern analysis and machine intelligence(TPAMI) , 2009.Allen Y Yang, Zihan Zhou, Arvind Ganesh Balasubramanian, S Shankar Sastry, and Yi Ma. Fast-minimization algorithms for robust face recognition. IEEE Transactions on Image Processing ,2013.Jianchao Yang, Kai Yu, and Thomas Huang. Supervised translation-invariant sparse coding. InComputer Vision and Pattern Recognition (CVPR) , 2010.Matthew D. Zeiler, Dilip Krishnan, Graham W. Taylor, and Robert Fergus. Deconvolutional net-works. In Computer Vision and Pattern Recognition (CVPR) , 2010.Yangmuzi Zhang, Zhuolin Jiang, and Larry S Davis. Discriminative tensor sparse coding for imageclassification. In British Machine Vision Conference (BMVC) , 2013.Ning Zhou, Yi Shen, Jinye Peng, and Jianping Fan. Learning inter-related visual dictionary forobject recognition. In Computer Vision and Pattern Recognition (CVPR) , 2012.11Under review as a conference paper at ICLR 2017APPENDIX AHere we show that spherical sparse coding (SSC) with a norm constraint on the reconstruction isequivalent to standard convolutional sparse coding (CSC). 
Expanding the least squares reconstruc-tion error and dropping the constant term kxk2gives the CSC problem:maxz2x|KXk=1dkzkkKXk=1dkzkk22KXk=1kzkk1:Let=kPKk=1dkzkk2be the norm of the reconstruction for some code zand let ube thereconstruction scaled to have unit norm so that:u=PKk=1dkzkkPKk=1dkzkk2=KXk=1dkzkwith z=1zWe rewrite the least-squares objective in terms of these new variables:maxz;>0g(z;) = maxz;>02x|ukuk22kzk1= maxz;>02x|u2kzk12Taking the derivative of gw.r.t.yields the optimal scaling as a function of z:(z)=x|u2kzk1:Plugging(z)back intogyields:maxz;>0g(z;) = maxz;kuk2=1x|u2kzk12:Discarding solutions with <0can be achieved by simply dropping the square which results in thefinal constrained problem:arg maxzx|KXk=1dkzk2KXk=1kzkk1s.t.kKXk=1dkzkk21:APPENDIX BWe show in this section that coding in the EB-SSC model can be solved efficiently by a combinationof convolution, shrinkage and projection, steps which can be implemented with standard librarieson a GPU. For convenience, we first rewrite the objective in terms of cross-correlation rather thanconvolution (i.e., , x|(dkzk) = (dk?x)|zk). For ease of understanding, we first consider thecoding problem when there is no classification term.z= arg maxkzk221v|zkzk1;where v= [(d1?x)|;:::; (dK?x)|]|. Pulling the constraint into the objective, we get its La-grangian function:L(z;) =v|zkzk1+1kzk22:From the partial subderivative of the Lagrangian w.r.t. ziwe derive the optimal solution as a functionof; and from that find the conditions in which the solutions hold, giving us:zi()=12(vi v i>0 otherwisevi+ v i<: (17)12Under review as a conference paper at ICLR 2017This can also be compactly written as:z()=12~z; (18)~z=s2vswhere s= sign( z)2 f 1;0;1gjzjands2=ss2 f0;1gjzj. The sign vector of zcanbe determined without knowing , asis a Lagrangian multiplier for an inequality it must be non-negative and therefore does not change the sign of the optimal solution. Lastly, we define the squared`2-norm of ~z, a result that will be used later:k~zk22=~z|(s2v)~z|s=~z|vk~zk1: (19)Substituting z()back into the Lagrangian we get:L(z();) =12v|~z2k~zk1+1142k~zk22;and the derivative w.r.t. is:@L(z()@=122v|~z+22k~zk1+ 1 +142k~zk22:Setting the derivative equal to zero and using the result from Eq. 19, we can find the optimal solutionto:2=12~z|v2k~zk114k~zk22=12k~zk2214k~zk22=)=12k~zk2:Finally, plugging into Eq. 18 we find the optimal solutionz=~zk~zk2: (20)13
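Putting the closed form of Eq. 20 together with the shrinkage of Eq. 9, the following NumPy sketch implements the correlate-shrink-normalize coder of Sec. 2.3.2 for the patch-based case. A single shared threshold pair is used instead of per-class, per-location biases; that simplification, along with the names and toy sizes, is ours.

```python
import numpy as np

def asym_shrink(v, beta_plus, beta_minus):
    # Asymmetric shrinkage (Eq. 9), written as a difference of two biased ReLUs
    # (Sec. 2.3.3); valid as long as beta_plus >= -beta_minus.
    return np.maximum(v - beta_plus, 0.0) - np.maximum(-(v + beta_minus), 0.0)

def ssc_code(x, D, beta_plus, beta_minus):
    # Closed-form coding: (1) filter responses, (2) asymmetric shrinkage,
    # (3) projection onto the unit sphere (Eq. 20).
    v = D.T @ x
    z_tilde = asym_shrink(v, beta_plus, beta_minus)
    norm = np.linalg.norm(z_tilde)
    return z_tilde / norm if norm > 0 else z_tilde   # guard the degenerate all-zero code

# toy usage
rng = np.random.default_rng(2)
x = rng.standard_normal(64)
D = rng.standard_normal((64, 128))
z = ssc_code(x, D, beta_plus=6.0, beta_minus=6.0)
print(np.linalg.norm(z), np.mean(z == 0))   # unit length, with a fraction of exact zeros
```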
BkPyDyfVe
HkNEuToge
ICLR.cc/2017/conference/-/paper609/official/review
{"title": "Review", "rating": "6: Marginally above acceptance threshold", "review": "\nFirst, I'd like to thank the authors for their answers and clarifications.\nI find, the presentation of the multi-stage version of the model much clearer now.\n\nPros:\n\n+ The paper states a sparse coding problem using cosine loss, which allows to solve the problem in a single pass.\n\n+ The energy-based formulation allows bi-directional coding that incorporates top-down and bottom-up information in the feature extraction process. \n\nCons:\n\n+ The cost of running the evaluation could be large in the multi-class setting, rendering the approach less attractive and the computational cost comparable to recurrent architectures.\n\n+ While the model is competitive and improves over the baseline, the paper would be more convincing with other comparisons (see text). The experimental evaluation is limited (a single database and a single baseline)\n\n------\n\nThe motivation of the sparse coding scheme is to perform inference in a feed forward manner. This property does not hold in the multi stage setting, thus optimization would be required (as clarified by the authors).\n\nHaving an efficient way of performing a bi-directional coding scheme is very interesting. As the authors clarified, this could not necessarily be the case, as the model needs to be evaluated many times for performing a classification.\n\nMaybe an interesting combination would be to run the model without any class-specific bias, and evaluation only the top K predictions with the energy-based setting.\n\nHaving said this, it would be good to include a discussion (if not direct comparisons) of the trade-offs of using a model as the one proposed by Cao et al. Eg. computational costs, performance.\n\nUsing the bidirectional coding only on the top layers seems reasonable: one can get a good low level representation in a class agnostic way. This, however could be studied in more detail, for instance showing empirically the trade offs. If I understand correctly, now only one setting is being reported.\n\nFinally, the authors mention that one benefit of using the architecture derived from the proposed coding method is the spherical normalization scheme, which can lead to smoother optimization dynamics. Does the baseline (or model) use batch-normalization? If not, seems relevant to test.\n\n\nMinor comments:\n\nI find figure 2 (d) confusing. I would not plot this setting as it does not lead to a function (as the authors state in the text).\n\n\n", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}
review
2017
ICLR.cc/2017/conference
Energy-Based Spherical Sparse Coding
["Bailey Kong", "Charless C. Fowlkes"]
In this paper, we explore an efficient variant of convolutional sparse coding with unit norm code vectors, where reconstructions are evaluated using an inner product (cosine distance). To use these codes for discriminative classification, we describe a model we term Energy-Based Spherical Sparse Coding (EB-SSC) in which the hypothesized class label introduces a learned linear bias into the coding step. We evaluate and visualize performance of stacking this encoder to make a deep layered model for image classification.
["spherical sparse coding", "spherical sparse", "efficient variant", "convolutional sparse", "reconstructions", "inner product", "cosine distance", "codes", "discriminative classification"]
https://openreview.net/forum?id=HkNEuToge
https://openreview.net/pdf?id=HkNEuToge
https://openreview.net/forum?id=HkNEuToge&noteId=BkPyDyfVe
(2015), our motivation for bi-directional coding comes from the “Biased CompetitionTheory”, which suggests that visual processing can be biased by other mental processes (e.g., top-down influence) to prioritize certain features that are most relevant to current task. Fig. 1 illustratesthe flow of computation used by our SSC and EB-SSC building blocks compared to a standardfeed-forward layer.1Under review as a conference paper at ICLR 2017Our energy based approach for combining top-down and bottom-up information is closely tied tothe ideas of Larochelle & Bengio (2008); Ji et al. (2011); Zhang et al. (2013); Li & Guo (2014)—although the model details are substantially different (e.g., Ji et al. (2011) and Zhang et al. (2013)use sigmoid non-linearities while Li & Guo (2014) use separate representations for top-down andbottom-up information). The energy function of Larochelle & Bengio (2008) is also similar butincludes an extra classification term and is trained as a restricted Boltzmann machine.ReLUNeg. ReLUConcatenation Convolution x(a) CReLUReLUNeg. ReLUConcatenation Normalization Convolution x (b) SSCyReLUNeg. ReLUNeg. Class BiasPos. Class BiasNormalization Concatenation Convolution x (c) EB-SSCFigure 1: Building blocks for coding networks explored in this paper. Our coding model usesnon-linearities that are closely related to the standard ReLU activation function. (a) Keeping bothpositive and negative activations provides a baseline feed-forward model termed concatenated ReLU(CReLU). (b) Our spherical sparse coding layer has a similar structure but with an extra bias andnormalization step. Our proposed energy-based model uses (c) energy-based spherical sparse coding(EB-SSC) blocks that produces sparse activations which are not only positive and negative, but areclass-specific. These blocks can be stacked to build deeper architectures.1.1 N OTATIONMatrices are denoted as uppercase bold (e.g., A), vectors are lowercase bold (e.g., a), and scalarsare lowercase (e.g., a). We denote the transpose operator with|, the element-wise multiplicationoperator with, the convolution operator with , and the cross-correlation operator with ?. For vec-tors where we dropped the subscript k(e.g.,dandz), we refer to a super vector with Kcomponentsstacked together (e.g., z= [z|1;:::;z|K]|).2 E NERGY -BASED SPHERICAL SPARSE CODINGEnergy-based models capture dependencies between variables using an energy function that measurethe compatibility of the configuration of variables (LeCun et al., 2006). To measure the compatibilitybetween the top-down and bottom-up information, we define the energy function of EB-SSC to bethe sum of bottom-up coding term and a top-down classification term:E(x;y;z) =Ecode(x;z) +Eclass(y;z): (1)The bottom-up information (input signal x) and the top-down information (class label y) are tiedtogether by a latent feature map z.2.1 B OTTOM -UPRECONSTRUCTIONTo measure the compatibility between the input signal xand the latent feature maps z, we introducea novel variant of sparse coding that is amenable to efficient feed-forward optimization. While theidea behind this variant can be applied to either patch-based or convolutional sparse coding, wespecifically use the convolutional variant that shares the burden of coding an image among nearbyoverlapping dictionary elements. Using such a shift-invariant approach avoids the need to learn dic-tionary elements which are simply translated copies of each other, freeing up resources to discovermore diverse and specific filters (see Kavukcuoglu et al. 
(2010)).2Under review as a conference paper at ICLR 2017Convolutional sparse coding (CSC) attempts to find a set of dictionary elements fd1;:::;dKgandcorresponding sparse codes fz1;:::;zKgso that the resulting reconstruction, r=PKk=1dkzkaccurately represents the input signal x. This is traditionally framed as a least-squares minimizationwith a sparsity inducing prior on z:arg minzkxKXk=1dkzkk22+kzk1: (2)Unlike standard feed-forward CNN models that convolve the input signal xwith the filters, thisenergy function corresponds to a generative model where the latent feature maps fz1;:::;zKgareconvolved with the filters and compared to the input signal (Bristow et al., 2013; Heide et al., 2015;Zeiler et al., 2010).To motivate our novel variant of CSC, consider expanding the squared reconstruction error kxrk22=kxk222x|r+krk22. If we constrain the reconstruction rto have unit norm, the recon-struction error depends entirely on the inner product between xandrand is equivalent to the cosinesimilarity (up to additive and multiplicative constants). This suggests the closely related unit-lengthreconstruction problem:arg maxzx|KXk=1dkzkkzk1 (3)s.t.KXk=1dkzk21In Appendix A we show that, given an optimal unit length reconstruction rwith correspondingcodes z, the solution to the least squares reconstruction problem (Eq. 2) can be computed by asimple scaling r= (x|r2kzk1)r.The unit-length reconstruction problem is no easier than the original least-squares optimization dueto the constraint on the reconstruction which couples the codes for different filters. Instead considera simplified constraint on zwhich we refer to as spherical sparse coding (SSC) :arg maxkzk21Ecode(x;z) = arg maxkzk21x|KXk=1dkzkkzk1: (4)In 2.3 below, we show that the solution to this problem can be found very efficiently without requir-ing iterative optimization.This problem is a relaxation of convolutional sparse coding since it ignores non-orthogonal inter-actions between the dictionary elements1. Alternately, assuming unit norm dictionary elements, thecode norm constraint can be used to upper-bound the reconstruction length. We have by the triangleand Young’s inequality that:Xkdkzk2Xkkdkzkk2Xkkdkk1kzkk1DXkkzkk2 (5)where the factor Dis the dimension of zkand arises from switching from the 1-norm to the 2-norm.SinceDPkkzkk21is a tighter constraint we havemaxkPkdkzkk21Ecode(x;z) maxPkkzkk21DEcode(x;z) (6)However, this relaxation is very loose, primarily due to the triangle inequality. Except in specialcases (e.g., if the dictionary elements have disjoint spectra) the SSC codes will be quite differentfrom the standard least-squares reconstruction.1We note that our formulation is also closely related to the dynamical model suggested by Rozell et al.(2008), but without the dictionary-dependent lateral inhibition between feature maps. Lateral inhibition cansolve the unit-length reconstruction formulation of standard sparse coding but requires iterative optimization.3Under review as a conference paper at ICLR 20172.2 T OP-DOWN CLASSIFICATIONTo measure the compatibility between the class label yand the latent feature maps z, we use a setof one-vs-all linear classifiers. To provide more flexibility, we generalize this by splitting the codevector into positive and negative components:zk=z+k+zkz+k0zk0and allow the linear classifier to operate on each component separately. 
We express the classifierscore for a hypothesized class label yby:Eclass(y;z) =KXk=1w+|yz+k+KXk=1w|yzk: (7)The classifier thus is parameterized by a pair of weight vectors ( w+ykandwyk) for each class labelyandk-th channel of the latent feature map.This splitting, sometimes referred to as full-wave rectification, is useful since a dictionary elementand its negative do not necessarily have opposite visual semantics. This splitting also allows theclassifier the flexibility to assign distinct meanings or alternately be completely invariant to contrastreversal depending on the problem domain. For example, Shang et al. (2016) found CNN modelswith ReLU non-linearities which discard the negative activations tend to learn pairs of filters whichare related by negation. Keeping both positive and negative responses allowed them to halve thenumber of dictionary elements.We note that it is also straightforward to introduce spatial average pooling prior to classification byintroducing a fixed linear operator Pused to pool the codes (e.g., w+|yPz+k). This is motivated bya variety of hand-engineered feature extractors and sparse coding models, such as Ren & Ramanan(2013), which use spatially pooled histograms of sparse codes for classification. This fixed poolingcan be viewed as a form of regularization on the linear classifier which enforces shared weights overspatial blocks of the latent feature map. Splitting is also quite important to prevent information losswhen performing additive pooling since positive and negative components of zkcan cancel eachother out.2.3 C ODINGBottom-up reconstruction and top-down classification each provide half of the story, coupled by thelatent feature maps. For a given input xand hypothesized class y, we would like to find the optimalactivations zthat maximize the joint energy function E(x;y;z). This requires solving the followingoptimization:arg maxkzk21x|KXk=1dkzkkzk1+KXk=1w+|ykz+k+KXk=1w|ykzk; (8)where x2RDis an image and y2Y is a class hypothesis. zk2RFis thek-th componentlatent variable being inferred; z+kandzkare the positive and negative coefficients of zk, such thatzk=z+k+zk. The parameters dk2RM,w+yk2RF, andwyk2RFare the dictionary filter,positive coefficient classifier, and negative coefficient classifier for the k-th component respectively.A key aspect of our formulation is that the optimal codes can be found very efficiently in closed-form—in a feed-forward manner (see Appendix B for a detailed argument).2.3.1 A SYMMETRIC SHRINKAGETo describe the coding processes, let us first define a generalized version of the shrinkage functioncommonly used in sparse coding. Our asymmetric shrinkage is parameterized by upper and lowerthresholds+shrink (+;)(v) =8<:v+ifv+>00 otherwisev+ifv+<0(9)4Under review as a conference paper at ICLR 2017(a)0+(b)0 +(c)+0 (d)0 +Figure 2: Comparing the behavior of asymmetric shrinkage for different settings of +and.(a)-(c) satisfy the condition that +while (d) does not.Fig. 2 shows a visualization of this function which generalizes the standard shrinkage proximaloperator by allowing for the positive and negative thresholds. In particular, it corresponds to theproximal operator for a version of the `1-norm that penalizes the positive and negative componentswith different weights jvjasym =+kv+k1+kvk1. The standard shrink operator correspondstoshrink (;)(v)while the rectified linear unit common in CNNs is given by a limiting caseshrink (0;1)(v). We note that+is required for shrink (+;)to be a proper function(see Fig. 
2).2.3.2 F EED-FORWARD CODINGWe now describe how codes can be computed in a simple feed-forward pass. Let+yk=w+yk;yk=wyk(10)be vectors of positive and negative biases whose entries are associated with a spatial location in thefeature map kfor classy. The optimal code zcan be computed in three sequential steps:1. Cross-correlate the data with the filterbank dk?x2. Apply an asymmetric version of the standard shrinkage operator~zk= shrink(+yk;yk)(dk?x) (11)where, with abuse of notation, we allow the shrinkage function (Eq. 9) to apply entriesin the vectors of threshold parameter pairs +yk;ykto the corresponding elements of theargument.3. Project onto the feasible set of unit length codesz=~zk~zk2: (12)2.3.3 R ELATIONSHIP TO CNN S:We note that this formulation of coding has a close connection to single layer convolutional neuralnetwork (CNN). A typical CNN layer consists of convolution with a filterbank followed by a non-linear activation such as a rectified linear unit (ReLU). ReLUs can be viewed as another way ofinducing sparsity, but rather than coring the values around zero like the shrink function, ReLUtruncates negative values. On the other hand, the asymmetric shrink function can be viewed as thesum of two ReLUs applied to appropriately biased inputs:shrink (+;)(x) = ReLU(x+)ReLU((x+));SSC coding can thus be seen as a CNN in which the ReLU activation has been replaced with shrink-age followed by a global normalization.5Under review as a conference paper at ICLR 20173 L EARNINGWe formulate supervised learning using the softmax log-loss that maximizes the energy for the trueclass labelyiwhile minimizing energy of incorrect labels y.arg mind;w+;w;02(kw+k22+kwk22+kdk22)+1NNXi=1[maxkzk21E(xi;yi;z) + logXy2Ymaxkzk21eE(xi;y;z)]s.t.(wyk)(w+yk)8y;k; (13)whereis the hyperparameter regularizing w+y,wy, andd. We constrain the relationship betweenand the entries of w+yandwyin order for the asymmetric shrinkage to be a proper function (seeSec. 2.3.1 and Appendix B for details).In classical sparse coding, it is typical to constrain the `2-norm of each dictionary filter to unit length.Our spherical coding objective behaves similarly. For any optimal code z, there is a 1-dimensionalsubspace of parameters for which zis optimal given by scaling dinversely to w,. For simplicityof the implementation, we opt to regularize dto assure a unique solution. However, as Tygert et al.(2015) point out, it may be advantageous from the perspective of optimization to explicitly constrainthe norm of the filter bank.Note that unlike classical sparse coding, where is a hyperparameter that is usually set using cross-validation, we treat it as a parameter of the model that is learned to maximize performance.3.1 O PTIMIZATIONIn order to solve Eq. 13, we explicitly formulate our model as a directed-acyclic-graph (DAG) neuralnetwork with shared weights, where the forward-pass computes the sparse code vectors and thebackward-pass updates the parameter weights. We optimize the objective using stochastic gradientdescent (SGD).As mentioned in Sec. 2.3 shrinkage function is assymetric with parameters +ykorykas definedin Eq. 10. However, the inequality constraint on their relationship to keep the shrinkage function aproper function is difficult to enforce when optimizing with SGD. Instead, we introduce a centraloffset parameter and reduce the ordering constraint to pair of positivity constraints. Let^w+yk=+ykbk ^wyk=yk+bk (14)be the modified linear “classifiers” relative to the central offset bk. 
It is straightforward to see thatif+ykandykthat satisfy the constrain in Eq. 13, then adding the same value to both sides ofthe inequality will not change that. However, taking bkto be a midpoint between them, then both+ykbkandyk+bkwill be strictly non-negative.Using this variable substitution, we rewrite the energy function (Eq. 1) asE0(x;y;z) =x|KXk=1dkzk+KXk=1bk1|zkKXk=1^w+|ykz+k+KXk=1^w|ykzk: (15)where bis constant offset for each code channel. The modified linear “classification” terms nowtake on a dual role of inducing sparsity and measuring the compatibility between zandy.This yields a modified learning objective that can easily be solved with existing implementations forlearning convolutional neural nets:arg mind;^w+;^w;b2(k^w+k22+k^wk22+kdk22)+1NNXi=1[maxkzk21E0(xi;yi;z) + logXy2Ymaxkzk21eE0(xi;y;z)]s.t.^w+yk;^wyk08y;k; (16)6Under review as a conference paper at ICLR 2017where ^w+and^ware the new sparsity inducing classifiers, and bare the arbitrary origin points. Inparticular, adding the Korigin points allows us to enforce the constraint by simply projecting ^w+and^wonto the positive orthant during SGD.3.1.1 S TACKING BLOCKSWe also examine stacking multiple blocks of our energy function in order to build a hierarchicalrepresentation. As mentioned in Sec. 3.1.1, the optimal codes can be computed in a simple feed-forward pass—this applies to shallow versions of our model. When stacking multiple blocks of ourenergy-based model, solving for the optimal codes cannot be done in a feed-forward pass since thecodes for different blocks are coupled (bilinearly) in the joint objective. Instead, we can proceedin an iterative manner, performing block-coordinate descent by repeatedly passing up and down thehierarchy updating the codes. In this section we investigate the trade-off between the number ofpasses used to find the optimal codes for the stacked model and classification performance.For this purpose, we train multiple instances of a 2-block version of our energy-based model thatdiffer in the number of iterations used when solving for the codes. For recurrent networks such asthis, inference is commonly implemented by “unrolling” the network, where the parts of the net-work structure are repeated with parameters shared across these repeated parts to mimic an iterativealgorithm that stops at a fixed number of iterations rather than at some convergence criteria.0 10 20 30 40 50epoch10-310-210-1100101train objective (log-scale)not unrolledunrolled 1unrolled 2unrolled 3unrolled 4(a) Train Objective0 10 20 30 40 50epoch00.020.040.060.080.10.12test errornot unrolledunrolled 1unrolled 2unrolled 3unrolled 4 (b) Test ErrorFigure 3: Comparing the effects of unrolling a 2-block version of our energy-based model. (Bestviewed in color.)In Fig. 3, we compare the performance between models that were unrolled zero to four times. Wesee that there is a difference in performance based on how many sweeps of the variables are made.In terms of the training objective, more unrolling produces models that have lower objective valueswith convergence after only a few passes. In terms of testing error, however, we see that full codeinference is not necessarily better, as unrolling once or twice has lower errors than unrolling threeor four times. The biggest difference was between not unrolling and unrolling once, where both thetraining objective and testing error goes down. 
The testing error decreases from 0.0131 to 0.0074.While there is a clear benefit in terms of performance for unrolling at least once, there is also atrade-off between performance and computational resource, especially for deeper models.4 E XPERIMENTSWe evaluate the benefits of combining top-down and bottom-up information to produce class-specific features on the CIFAR-10 (Krizhevsky & Hinton, 2009) dataset using a deep version ofour EB-SSC. All experiments were performed using MatConvNet (Vedaldi & Lenc, 2015) frame-work with the ADAM optimizer (Kingma & Ba, 2014). The data was preprocessed and augmentedfollowing the procedure in Goodfellow et al. (2013). Specifically, the data was made zero mean andwhitened, augmented with horizontal flips (with a 0.5 probability) and random cropping. No weightdecay was used, but we used a dropout rate of 0:3before every convolution layer except for the first.For these experiments we consider a single forward pass (no unrolling).7Under review as a conference paper at ICLR 2017Base Networkblock kernel, stride, padding activationconv1 33396;1;1 ReLU/CReLUconv2 3396=19296;1;1 ReLU/CReLUpool1 33;2;1 maxconv3 3396=192192;1;1 ReLU/CReLUconv4 33192=384192;1;1ReLU/CReLUconv5 33192=384192;1;1ReLU/CReLUpool2 33;2;1 maxconv6 33192=384192;1;1ReLU/CReLUconv7 11192=384192;1;1ReLU/CReLUTable 1: Underlying block architecture common across all models we evaluated. SSC networksadd an extra normalization layer after the non-linearity. And EB-SSC networks insert class-specificbias layers between the convolution layer and the non-linearity. Concatenated ReLU (CReLU) splitspositive and negative activations into two separate channels rather than discarding the negative com-ponent as in the standard ReLU.4.1 C LASSIFICATIONWe compare our proposed EB-SSC model to that of Springenberg et al. (2015), which uses rectifiedlinear units (ReLU) as its non-linearity. This model can be viewed as a basic feed-forward versionof our proposed model which we take as a baseline. We also consider variants of the baseline modelthat utilize a subset of architectural features of our proposed model (e.g., concatenated rectifiedlinear units (CReLU) and spherical normalization (SN)) to understand how subtle design changes ofthe network architecture affects performance.We describe the model architecture in terms of the feature extractor and classifier. Table 1 shows theoverall network architecture of feature extractors, which consist of seven convolution blocks and twopooling layers. We test two possible classifiers: a simple linear classifier (LC) and our energy-basedclassifier (EBC), and use softmax-loss for all models. For linear classifiers, a numerical subscriptindicates which of the seven conv blocks of the feature extractor is used for classification (e.g., LC 7indicates the activations out of the last conv block is fed into the linear classifier). For energy-basedclassifiers, a numerical subscript indicates which conv blocks of the feature extractor are replacewith a energy-based classifier (e.g., EBC 67indicates the activations out of conv5 is fed into theenergy-based classifier and the energy-based classifier has a similar architecture to the conv blocksit replaces). The notation differ because for energy-based classifiers, the optimal activations are afunction of the hypothesized class label, whereas for linear classifiers, they are not.Model Train Err. (%) Test Err. 
(%) # paramsReLU+LC 7 1.20 11.40 1.3MCReLU+LC 7 2.09 10.17 2.6MCReLU(SN)+LC 7 0.99 9.74 2.6MSSC+LC 7 0.99 9.77 2.6MSSC+EBC 67 0.21 9.23 3.2MTable 2: Comparison of the baseline ReLU+LC 7model, its derivative models, and our proposedmodel on CIFAR-10.The results shown in Table 2 compare our proposed model to the baselines ReLU+LC 7(Springen-berg et al., 2015) and CReLU+LC 7(Shang et al., 2016), and to intermediate variants. The base-line models all perform very similarly with some small reductions in error rates over the baselineCReLU+LC 7. However, CReLU+LC 7reduces the error rate over ReLU+LC 7by more than onepercent (from 11.40% to 10.17%), which confirms the claims by Shang et al. (2016) and demon-strates the benefits of splitting positive and negative activations. Likewise, we see further decreasein the error rate (to 9.74%) from using spherical normalization. Though normalizing the activationsdoesn’t add any capacity to the model, this improved performance is likely because scale-invariantactivations makes training easier. On the other hand, further sparsifying the activations yielded no8Under review as a conference paper at ICLR 2017benefit. We tested values =f0:001;0:01gand found 0:001to perform better. Replacing the linearclassifier with our energy-based classifier further decreases the error rate by another half percent (to9.23%).4.2 D ECODING CLASS -SPECIFIC CODESA unique aspect of our model is that it is generative in the sense that each layer is explicitly trying toencode the activation pattern in the prior layer. Similar to the work on deconvolutional networks builton least-squares sparse coding (Zeiler et al., 2010), we can synthesize input images from activationsin our spherical coding network by performing repeated deconvolutions (transposed convolutions)back through the network. Since our model is energy based, we can further examine how the top-down information of a hypothesized class effects the intermediate activations.Figure 4: The reconstruction of an airplane image from different levels of the network (rows) acrossdifferent hypothesized class labels (columns). The first column is pure reconstruction, i.e., unbiasedby a hypothesized class label, the remaining columns show reconstructions of the learned class biasat each layer for one of ten possible CIFAR-10 class labels. (Best viewed in color.)The first column in Fig. 4 visualizes reconstructions of a given input image based on activationsfrom different layers of the model by convolution transpose. In this case we put in zeros for classbiases (i.e., no top-down) and are able to recover high fidelity reconstructions of the input. In theremaining columns, we use the same deconvolution pass to construct input space representations ofthe learned classifier biases. At low levels of the feature hierarchy, these biases are spatially smoothsince the receptive fields are small and there is little spatial invariance capture in the activations. Athigher levels these class-conditional bias fields become more tightly localized.Finally, in Fig. 5 we shows decodings from the conv2 and conv5 layer of the EB-SSC model for agiven input under different class hypotheses. 
Here we subtract out the contribution of the top-downbias term in order to isolate the effect of the class conditioning on the encoding of input features.As visible in the figure, the modulation of the activations focused around particular regions of theimage and the differences across class hypotheses becomes more pronounced at higher layers of thenetwork.5 C ONCLUSIONWe presented an energy-based sparse coding method that efficiently combines cosine similarity,convolutional sparse coding, and linear classification. Our model shows a clear mathematical con-nection between the activation functions used in CNNs to introduce sparsity and our cosine similar-ity convolutional sparse coding formulation. Our proposed model outperforms the baseline modeland we show which attributes of our model contributes most to the increase in performance. Wealso demonstrate that our proposed model provides an interesting framework to probe the effects ofclass-specific coding.REFERENCESHilton Bristow, Anders Eriksson, and Simon Lucey. Fast convolutional sparse coding. In ComputerVision and Pattern Recognition (CVPR) , 2013.9Under review as a conference paper at ICLR 2017(a) conv2 (b) conv5Figure 5: Visualizing the reconstruction of different input images (rows) for each of 10 differentclass hypotheses (cols) from the 2nd and 5th block activations for a model trained on MNIST digitclassification.Chunshui Cao, Xianming Liu, Yi Yang, Yinan Yu, Jiang Wang, Zilei Wang, Yongzhen Huang, LiangWang, Chang Huang, Wei Xu, et al. Look and think twice: Capturing top-down visual attentionwith feedback convolutional neural networks. In International Conference on Computer Vision(ICCV) , 2015.David L Donoho. Compressed sensing. IEEE Transactions on information theory , 2006.Michael Elad and Michal Aharon. Image denoising via sparse and redundant representations overlearned dictionaries. IEEE Transactions on Image processing , 2006.Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron C Courville, and Yoshua Bengio. Max-out networks. In International conference on Machine learning (ICML) , 2013.Karol Gregor and Yann LeCun. Learning fast approximations of sparse coding. In InternationalConference on Machine Learning (ICML) , 2010.Felix Heide, Wolfgang Heidrich, and Gordon Wetzstein. Fast and flexible convolutional sparsecoding. In Computer Vision and Pattern Recognition (CVPR) , 2015.Zhengping Ji, Wentao Huang, G. Kenyon, and L.M.A. Bettencourt. Hierarchical discriminativesparse coding via bidirectional connections. In International Joint Converence on Neural Net-works (IJCNN) , 2011.Zhuolin Jiang, Zhe Lin, and Larry S Davis. Learning a discriminative dictionary for sparse codingvia label consistent K-SVD. In Computer Vision and Pattern Recognition (CVPR) , 2011.Koray Kavukcuoglu, Pierre Sermanet, Y-Lan Boureau, Karol Gregor, Micha ̈el Mathieu, and Yann LCun. Learning convolutional feature hierarchies for visual recognition. In Advances in neuralinformation processing systems (NIPS) , 2010.Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980 , 2014.Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.Hugo Larochelle and Yoshua Bengio. Classification using discriminative restricted boltzmann ma-chines. In International conference on Machine learning (ICML) , 2008.10Under review as a conference paper at ICLR 2017Yann LeCun, Sumit Chopra, Raia Hadsell, M Ranzato, and F Huang. A tutorial on energy-basedlearning. 
Predicting structured data , 2006.Xin Li and Yuhong Guo. Bi-directional representation learning for multi-label classification. InJoint European Conference on Machine Learning and Knowledge Discovery in Databases (ECMLKDD) . 2014.Bruno A Olshausen and David J Field. Sparse coding with an overcomplete basis set: A strategyemployed by v1? Vision research , 1997.Xiaofeng Ren and Deva Ramanan. Histograms of sparse codes for object detection. In ComputerVision and Pattern Recognition (CVPR) , 2013.Christopher J Rozell, Don H Johnson, Richard G Baraniuk, and Bruno A Olshausen. Sparse codingvia thresholding and local competition in neural circuits. Neural computation , 2008.Wenling Shang, Kihyuk Sohn, Diogo Almeida, and Honglak Lee. Understanding and improvingconvolutional neural networks via concatenated rectified linear units. In International conferenceon Machine learning (ICML) , 2016.J Springenberg, Alexey Dosovitskiy, Thomas Brox, and M Riedmiller. Striving for simplicity: Theall convolutional net. In International conference on Learning Representations (ICLR) (workshoptrack) , 2015.Mark Tygert, Arthur Szlam, Soumith Chintala, Marc’Aurelio Ranzato, Yuandong Tian, and Woj-ciech Zaremba. Convolutional networks and learning invariant to homogeneous multiplicativescalings. arXiv preprint arXiv:1506.08230 , 2015.A. Vedaldi and K. Lenc. Matconvnet – convolutional neural networks for matlab. In ACM Interna-tional Conference on Multimedia , 2015.John Wright, Allen Y Yang, Arvind Ganesh, S Shankar Sastry, and Yi Ma. Robust face recogni-tion via sparse representation. IEEE transactions on pattern analysis and machine intelligence(TPAMI) , 2009.Allen Y Yang, Zihan Zhou, Arvind Ganesh Balasubramanian, S Shankar Sastry, and Yi Ma. Fast-minimization algorithms for robust face recognition. IEEE Transactions on Image Processing ,2013.Jianchao Yang, Kai Yu, and Thomas Huang. Supervised translation-invariant sparse coding. InComputer Vision and Pattern Recognition (CVPR) , 2010.Matthew D. Zeiler, Dilip Krishnan, Graham W. Taylor, and Robert Fergus. Deconvolutional net-works. In Computer Vision and Pattern Recognition (CVPR) , 2010.Yangmuzi Zhang, Zhuolin Jiang, and Larry S Davis. Discriminative tensor sparse coding for imageclassification. In British Machine Vision Conference (BMVC) , 2013.Ning Zhou, Yi Shen, Jinye Peng, and Jianping Fan. Learning inter-related visual dictionary forobject recognition. In Computer Vision and Pattern Recognition (CVPR) , 2012.11Under review as a conference paper at ICLR 2017APPENDIX AHere we show that spherical sparse coding (SSC) with a norm constraint on the reconstruction isequivalent to standard convolutional sparse coding (CSC). 
Expanding the least-squares reconstruction error and dropping the constant term $\|x\|_2^2$ gives the CSC problem:
$$\max_z \; 2x^\top \sum_{k=1}^K d_k * z_k \;-\; \Big\|\sum_{k=1}^K d_k * z_k\Big\|_2^2 \;-\; \beta \sum_{k=1}^K \|z_k\|_1 .$$
Let $\alpha = \|\sum_{k=1}^K d_k * z_k\|_2$ be the norm of the reconstruction for some code $z$, and let $u$ be the reconstruction scaled to have unit norm, so that
$$u = \frac{\sum_{k=1}^K d_k * z_k}{\|\sum_{k=1}^K d_k * z_k\|_2} = \sum_{k=1}^K d_k * \bar z_k \quad\text{with}\quad \bar z = \tfrac{1}{\alpha} z .$$
We rewrite the least-squares objective in terms of these new variables:
$$\max_{\bar z,\,\alpha>0} g(\bar z,\alpha) = \max_{\bar z,\,\alpha>0} \; 2\alpha x^\top u - \alpha^2 \|u\|_2^2 - \beta\alpha\|\bar z\|_1 = \max_{\bar z,\,\alpha>0} \; 2\alpha x^\top u - \beta\alpha\|\bar z\|_1 - \alpha^2 .$$
Taking the derivative of $g$ w.r.t. $\alpha$ yields the optimal scaling as a function of $\bar z$:
$$\alpha^*(\bar z) = x^\top u - \tfrac{\beta}{2}\|\bar z\|_1 .$$
Plugging $\alpha^*(\bar z)$ back into $g$ yields:
$$\max_{\bar z,\,\alpha>0} g(\bar z,\alpha) = \max_{\bar z:\,\|u\|_2=1} \Big( x^\top u - \tfrac{\beta}{2}\|\bar z\|_1 \Big)^2 .$$
Discarding solutions with $\alpha<0$ can be achieved by simply dropping the square, which results in the final constrained problem:
$$\arg\max_z \; x^\top \sum_{k=1}^K d_k * z_k - \tfrac{\beta}{2} \sum_{k=1}^K \|z_k\|_1 \quad\text{s.t.}\quad \Big\|\sum_{k=1}^K d_k * z_k\Big\|_2 \le 1 .$$

APPENDIX B

We show in this section that coding in the EB-SSC model can be solved efficiently by a combination of convolution, shrinkage and projection, steps which can be implemented with standard libraries on a GPU. For convenience, we first rewrite the objective in terms of cross-correlation rather than convolution (i.e., $x^\top(d_k * z_k) = (d_k \star x)^\top z_k$). For ease of understanding, we first consider the coding problem when there is no classification term:
$$z^* = \arg\max_{\|z\|_2^2 \le 1} \; v^\top z - \beta\|z\|_1 ,$$
where $v = [(d_1 \star x)^\top, \ldots, (d_K \star x)^\top]^\top$. Pulling the constraint into the objective, we get its Lagrangian function:
$$\mathcal{L}(z,\lambda) = v^\top z - \beta\|z\|_1 + \lambda(1 - \|z\|_2^2) .$$
From the partial subderivative of the Lagrangian w.r.t. $z_i$ we derive the optimal solution as a function of $\lambda$, and from that find the conditions under which each solution holds, giving us:
$$z_i^*(\lambda) = \frac{1}{2\lambda}\begin{cases} v_i - \beta & v_i > \beta \\ v_i + \beta & v_i < -\beta \\ 0 & \text{otherwise.} \end{cases} \qquad (17)$$
This can also be compactly written as:
$$z^*(\lambda) = \frac{1}{2\lambda}\tilde z , \qquad (18)$$
$$\tilde z = s^2 \odot v - \beta s ,$$
where $s = \operatorname{sign}(z^*) \in \{-1,0,1\}^{|z|}$ and $s^2 = s \odot s \in \{0,1\}^{|z|}$. The sign vector of $z^*$ can be determined without knowing $\lambda$: as $\lambda$ is a Lagrange multiplier for an inequality constraint it must be non-negative and therefore does not change the sign of the optimal solution. Lastly, we define the squared $\ell_2$-norm of $\tilde z$, a result that will be used later:
$$\|\tilde z\|_2^2 = \tilde z^\top(s^2 \odot v) - \beta\tilde z^\top s = \tilde z^\top v - \beta\|\tilde z\|_1 . \qquad (19)$$
Substituting $z^*(\lambda)$ back into the Lagrangian we get:
$$\mathcal{L}(z^*(\lambda),\lambda) = \frac{1}{2\lambda} v^\top \tilde z - \frac{\beta}{2\lambda}\|\tilde z\|_1 + \lambda\Big(1 - \frac{1}{4\lambda^2}\|\tilde z\|_2^2\Big) ,$$
and the derivative w.r.t. $\lambda$ is:
$$\frac{\partial\mathcal{L}(z^*(\lambda),\lambda)}{\partial\lambda} = -\frac{1}{2\lambda^2} v^\top \tilde z + \frac{\beta}{2\lambda^2}\|\tilde z\|_1 + 1 + \frac{1}{4\lambda^2}\|\tilde z\|_2^2 .$$
Setting the derivative equal to zero and using the result from Eq. 19, we can solve for the optimal $\lambda$:
$$\lambda^2 = \frac{1}{2}\tilde z^\top v - \frac{\beta}{2}\|\tilde z\|_1 - \frac{1}{4}\|\tilde z\|_2^2 = \frac{1}{2}\|\tilde z\|_2^2 - \frac{1}{4}\|\tilde z\|_2^2 \;\Longrightarrow\; \lambda^* = \frac{1}{2}\|\tilde z\|_2 .$$
Finally, plugging $\lambda^*$ into Eq. 18 we find the optimal solution
$$z^* = \frac{\tilde z}{\|\tilde z\|_2} . \qquad (20)$$
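The closed form above (cross-correlate, shrink, project to unit norm) is easy to state as code. The sketch below is an illustrative NumPy/SciPy rendering of that result for a single input, ignoring the classification term; the boundary mode of the correlation and the names used are assumptions, not taken from the paper.

```python
import numpy as np
from scipy.signal import correlate

def ssc_code(x, filters, beta):
    """Closed-form coding step sketched from Appendix B (no classifier term):
    1) cross-correlate x with each filter, 2) soft-threshold by beta,
    3) rescale the result to unit l2 norm."""
    # Step 1: stack the cross-correlations of x with each filter into one vector v.
    v = np.concatenate([correlate(x, d, mode="valid").ravel() for d in filters])
    # Step 2: shrinkage (soft-thresholding) gives z_tilde of Eq. 18 up to the 1/(2*lambda) scale.
    z_tilde = np.sign(v) * np.maximum(np.abs(v) - beta, 0.0)
    # Step 3: Eq. 20 -- the optimal code is z_tilde projected onto the unit sphere.
    norm = np.linalg.norm(z_tilde)
    return z_tilde / norm if norm > 0 else z_tilde
```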
SyHDxy8Vl
BJ3filKll
ICLR.cc/2017/conference/-/paper75/official/review
{"title": "review of ``EFFICIENT REPRESENTATION OF LOW-DIMENSIONAL MANIFOLDS USING DEEP NETWORKS''", "rating": "7: Good paper, accept", "review": "SUMMARY \nThis paper discusses how data from a special type of low dimensional structure (monotonic chain) can be efficiently represented in terms of neural networks with two hidden layers. \n\nPROS \nInteresting, easy to follow view on some of the capabilities of neural networks, highlighting the dimensionality reduction aspect, and pointing at possible directions for further investigation. \n\nCONS \nThe paper presents a construction illustrating certain structures that can be captured by a network, but it does not address the learning problem (although it presents experiments where such structures do emerge, more or less). \n\nCOMMENTS \nIt would be interesting to study the ramifications of the presented observations for the case of deep(er) networks. \nAlso, to study to what extent the proposed picture describes the totality of functions that are representable by the networks. \n\nMINOR COMMENTS \n- Figure 1 could be referenced first in the text. \n- ``Color coded'' where the color codes what? \n- Thank you for thinking about revising the points from my first questions. Note: Isometry on the manifold. \n- On page 5, mention how the orthogonal projection on S_k is realized in the network. \n- On page 6 ``divided into segments'' here `segments' is maybe not the best word. \n- On page 6 ``The mean relative error is 0.98'' what is the baseline here, or what does this number mean? \n", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}
review
2017
ICLR.cc/2017/conference
Efficient Representation of Low-Dimensional Manifolds using Deep Networks
["Ronen Basri", "David W. Jacobs"]
We consider the ability of deep neural networks to represent data that lies near a low-dimensional manifold in a high-dimensional space. We show that deep networks can efficiently extract the intrinsic, low-dimensional coordinates of such data. Specifically we show that the first two layers of a deep network can exactly embed points lying on a monotonic chain, a special type of piecewise linear manifold, mapping them to a low-dimensional Euclidean space. Remarkably, the network can do this using an almost optimal number of parameters. We also show that this network projects nearby points onto the manifold and then embeds them with little error. Experiments demonstrate that training with stochastic gradient descent can indeed find efficient representations similar to the one presented in this paper.
["Theory", "Deep learning"]
https://openreview.net/forum?id=BJ3filKll
https://openreview.net/pdf?id=BJ3filKll
https://openreview.net/forum?id=BJ3filKll&noteId=SyHDxy8Vl
Published as aconferencepaper at ICLR 2017EFFICIENTREPRESENTATION OFLOW-DIMENSIONALMANIFOLDS USINGDEEPNETWORKSRonen BasriDept. of Computer Science and AppliedMathWeizmann InstituteofScienceRehovot, 76100 Israelronen.basri@weizmann.co.ilDavid W.JacobsDept.of Computer ScienceUniversityof MarylandCollege Park,MDdjacobs@cs.umd.eduABSTRACTWe consider the ability of deep neural networks to represent data that lies near alow-dimensional manifold in a high-dimensional space. We show that deep net-works can efficiently extract the intrinsic, low-dimensional coordinates of suchdata. Specifically we show that the first two layers of a deep network can ex-actly embed points lying on a monotonic chain , a special type of piecewise linearmanifold, mapping them to a low-dimensional Euclidean space. Remarkably, thenetworkcandothisusinganalmostoptimalnumberofparameters. Wealsoshowthat this network projects nearby points onto the manifold and then embeds themwith little error. Experiments demonstrate that training with stochastic gradientdescent can indeed find efficient representations similar to the one presented inthis paper.1 INTRODUCTIONFigure 1:We illustrate the embedding of a manifold by a deep network using the famous Swiss Roll example(left). Dotsrepresentcolorcodedinputdata,withcolorindicatingoneoftheintrinsiccoordinatesofeachinputpoint. In the center, the data is divided into three parts using hidden units represented by the yellow and cyanplanes. Each part is then approximated by a monotonic chain of linear segments. Additional hidden units, alsodepictedasplanes,controltheorientationofthenextsegmentsinthechain. Asecondlayerofthenetworkthenflattens eachchaininto a 2DEuclidean plane, and assembles theseinto acommon2D representation (right).Deepneuralnetworkshaveachievedstate-of-the-artresultsinavarietyoftasks. Onepossiblereasonforthisremarkablesuccessisthattheirhierarchical,layeredstructuremayallowthemtocapturethegeometric regularities of commonplace data. We support this hypothesis by exploring ways thatnetworks can handle input data that lie on or near a low-dimenisonal manifold. In many problems,for example face recognition, data lie on or near manifolds that are of much lower dimension thanthe input space ( Turk & Pentland ,1991;Basri & Jacobs ,2003;Lee et al.,2003), and that representthe intrinsicdegrees of variationin the data.We study the ability of deep networks to represent manifold data. We show that the initial layersof networks can approximate data that lies on high-dimensional manifolds using piecewise linearfunctions, and economically output their coordinates embedded in a low-dimensional Euclideanspace. In fact, each new linear segment approximating the manifold can be represented by a singleadditional hidden unit, leading to a representation of manifold data that in some cases is nearlyoptimal in the number of parameters of the system. Subsequent layers of a deep network could1Published as aconferencepaper at ICLR 2017build upon these early layers, operating in lower dimensional spaces that more naturally representthe input data. We further show empirical results that suggest that training with stochastic gradientdescent canfindefficient representations akinto the one suggested inthis paper.We first show how this embedding can be done efficiently for manifolds consisting of monotonicchainsof linear segments. We then show how these primitives can be combined to form linearapproximationsformorecomplexmanifolds. ThisprocessisillustratedinFigure 1. 
Wefurthershowthat when the data lies sufficiently close to their linear approximation, the error in the embeddingwill be small. Our constructions will use a feed-forward network with rectified linear unit (RELU)activation. We consider fully connected layers, although the treatment of complex manifolds thataredividedinto pieces(e.g., of monotonicchains) willbe modular,resultingin many zero weights.2 PRIORWORKRealistic learning problems, e.g., in vision and speech processing, involve high dimensional data.Such data is often governed by many fewer variables, producing manifold-like sub-structures in ahigh dimensional ambient space. A large number of dimensionality reduction techniques, such asprinciple component analysis and multi-dimensional scaling ( Duda et al. ,2012), Isomap ( Tenen-baumetal. ,2000),andlocallinearembedding(LLE)( Roweis&Saul ,2000),havebeenintroduced.Anunderlying manifoldassumption ,whichstatesthatdifferentclasseslieinseparatemanifolds,hasalso guided the design of clustering and semi-supervised learning algorithms ( Nadler et al. ,2005;Belkin & Niyogi ,2003;Weston etal. ,2008;Mobahi et al. ,2009).A number of recent papers examine properties of neural nets in light of this manifold assumption.Brahma et al. (2015) show empirically that the layers of deep networks trained with data that liesonamanifoldprogressivelyunfoldthatdataintoEuclideanspaces. Theydonotconsiderthemech-anisms used to perform this unfolding. Rifai et al. (2011) trained a contractive auto-encoder torepresent an atlas of manifold charts. Shaham et al. (2015) demonstrate that a 4-layer network canefficiently represent any function on a manifold through a trapezoidal wavelet decomposition. Inboth, each chart is represented independently, requiring an independent projection for each chart.Likewise, ( Chui & Mhaskar ,2016) consider methods by which a neural network can map pointson a manifold to a low-dimensional, Euclidean space, although they do not consider the efficiencyof this representation in terms of hidden units or weights. We show that for monotonic chains wecan reduce the size of the representation to near optimal by exploiting geometric relations betweenneighboring projection matrices, soanadditional chart requiresonly a single hidden unit.Anotherfamilyofnetworksattempttolearna“semantic”distancemetricfortrainingpairs,oftenbyusingasiamesenetwork( Salakhutdinov&Hinton ,2007;Chopraetal. ,2005;R.Hadsell&LeCun ,2006;Yi et al.,2014;Huang et al. ,2015). These assume that the input space can be mapped non-linearly by a network to produce the desired distances in a lower dimensional feature space. Giryesetal.(2016)showsthatevenafeed-forwardneuralnetworkwithrandomGaussianweightsembedsthe input datain an output space while preservingdistancesbetweeninput items.Anotheroutstandingquestionistowhatextentdeepnetworkscanbemoreefficientthanshallownet-works with a single hidden layer. Shallow networks are universal approximators ( Cybenko,1989).However, recent work demonstrates that deep networks can be exponentially more efficient in rep-resenting certain functions ( Bianchini & Scarselli ,2014;Telgarsky ,2015;Eldan & Shamir ,2015;Delalleau & Bengio ,2011;Montufar et al. ,2014;Cohen et al. ,2015). On the other hand, ( Ba &Caruana,2014) shows empirically that in many practical cases a shallow network can be trained tomimic the behavior of a deep network. 
Our construction does not produce exponential gains, butdoes show that the early layers of a network can efficiently reduce the dimensionality of data thatfeedsinto laterlayers.3 MONOTONICCHAINS OFLINEARSEGMENTSWeconstructnetworksthatperformdimensionalityreductionondatathatliesonornearamanifold.We focus on feed-forward networks with RELU activation, i.e., max(x,0). Clearly the output ofsuchnetworksarecontinuous,piecewiselinearfunctionsoftheirinput. Itisthereforenaturaltoaskwhether they can embed piecewise-linear manifolds in a low-dimensional Euclidean space both ac-2Published as aconferencepaper at ICLR 2017Figure 2:Left: A continuous chain of linear segments (above) that can beflattened to lie in a single low-dimensional linear subspace (bottom). Right:A monotonic chain. Skdenotes the k’th segment in the chain. Hkis a hyper-planebounding the half-space thatseparates S1, ..., SkfromSk+1, ..., SK.curatelyandefficiently. Inthissectionweconstructsuchefficientnetworksforaclassofmanifoldsthat we call monotonic chains of linear segments , which are defined shortly. These will serve asbuilding blocksfor handlingmore general data that can be decomposed into monotonic chains.Wewillconsiderdatalyinginachainoflinearsegments,denoted C=S1∪...∪SK. EachsegmentSk(1≤k≤K) in the chain is a portion of some m-dimensional affine subspace of Rd, and thesegmentsareconnectedtoformachain(Figure 2). WesupposethateverytwoconsecutivesegmentsSk−1andSkintersect,andthattheintersectionliesinan (m−1)-dimensionalaffinesubspace. Wefurtherassumethatthesechainscanbeflattenedbyisometrysothattheymayberepresentedin Rm.Notethatanycurveon Cwillbemappedtoacurveofthesamelengthin Rmontheflattenedchain.Each unit in the first hidden layer of a neural network will have a response of zero to input pointsthat lie on a hyperplane, defined by its weights and bias term. This hyperplane bounds a half-spacein which the output of the unit is positive; when the output is negative, RELU turns the output tozero. We say that a unit is activeover the half-space in which its output is positive. There is a closeconnectionbetweenthesehyperplanesandtheembeddingofamanifold,whichwebegintodevelopwith the followingdefinition.Definition: We say that a chain of Klinear segments is monotonic (see Figure 2) when there exista set of hyperplanes such that the k’th hyperplane separates the first ksegments from the rest.Denoting the positive half-spaces associated with these hyperplanes as H1, H2, ..., HK−1, then Hkisboundedbyahyperplanethatcontainstheintersectionof SkandSk+1,andSk+1, Sk+2, ..., SK⊂Hkwhile S1, S2, ..., Sk⊂HCk, where HCkis the complement of Hk. We can consider each half-space to represent a hidden unit that is active(i.e., non-zero) over a subset of the regions. With amonotonic chain, the set of active units grows monotonically, so that, (Hk+1∩C)⊆(Hk∩C). Wecan alsodefine some additional units thatareactive overall theregions.Below we show that monotonic chains can be embedded efficiently by networks with two layers ofweights. These networks have dunits in the input layer, a hidden layer with κ=K+m−1unitsthat encodes the structure of the manifold, and an output layer with munits. Denote the weightsin the first layer by a κ×dmatrix Aand further use a bias vector a0∈Rκ. The second layerof weights is captured by a m×κmatrix B. The total number of weights in these two layers is(d+m+ 1)(K+m−1). Thisnetworkmapsapoint x∈Rdtotheembeddingspace Rmthroughu=B[Ax+a0]+where [.]+denotestheRELUoperation. 
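Written out, this map is a single line of code; the sketch below is a hypothetical NumPy illustration, with shapes following the dimensions just stated rather than any implementation from the paper.

```python
import numpy as np

def embed(x, A, a0, B):
    """Two-layer embedding u = B [A x + a0]_+ .
    A: (K+m-1, d) first-layer weights, a0: (K+m-1,) biases,
    B: (m, K+m-1) second-layer weights, x: (d,) input point."""
    return B @ np.maximum(A @ x + a0, 0.0)
```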
FornowwedonotuseabiasorRELUinthesecondlevel,but those willbeused later when we discuss more complex manifolds.A simple example of a manifold that can be represented efficiently with a neural network occurswhen the data lies in a single m-dimensional affine subspace of Rd. Embedding can be done in thiscasewithjustonelayer,withthematrix Aofsize m×dcontaininginitsrowsabasisparalleltotheaffine space. One way to extend this example to handle chains is by encoding each linear segmentseparately. Suchencodingwillrequire mKunitsinadditiontounitsthatuseRELUtoseparateeachsegment from the rest of the segments. A related representation was used, e.g., in ( Shaham et al. ,2015). Belowwe show that monotonic chains can be encoded much more efficiently.We next show how to construct the network (i.e., set the weights in A,a0, andB) to encode mono-tonic chains. Below we use the notation A(k)to denote the matrix formed by the first krowsofA,a0(k)is the vector containing the first kentries of a0, andB(k)the matrix including the first kcolumnsofB. Therefore B(k)[A(k)x+a0(k)]+will express the output of the network when onlythefirst khiddenunitsareused. ThesewillbesettorecovertheintrinsiccoordinatesofpointsinthefirstksegmentsinC;RELUensuresthatsubsequenthiddenunitsdonotaffecttheoutputforpointsin these segments.Fortheconstructionweconsiderthepull-backofthestandardbasisof Rmontothechain,producinga geodesic basis to the manifold. Note that to produce a local basis for the intrinsic coordinates of3Published as aconferencepaper at ICLR 2017points on the manifold, we only need a basis for each linear segment. This basis is expressed bya collection of d×mcolumn-orthogonal matrices X(1), X(2), ..., X(K). Each matrix provides anorthogonal basis for one of thesegments.We will construct the network inductively. Suppose k= 1. We set A(1)=X(1)T,B(1)=I,and set a0(1)so that for all x∈Call the components of A(1)x+a0(1)are non-negative. Clearly,B(1)A(1)=X(1)Tis an orthogonal projection matrix and B(1)A(1)X(1)=I. This shows that thenetwork projects the orthonormal basis for the first segment into I, an orthonormal basis in Rm.Next we will show that B(k)A(k)X(k)=Ifor all k. This implies that B(k)A(k)x=X(k)Tx, sothereisnodistortionintheprojection. Thiswillshowthatthenetworkextendsthisbasisthroughoutthe monotonic chain in a consistent way.Suppose we used m+k−2units to construct A(k−1),a0(k−1), andB(k−1)for the first k−1≥1segments. (For notational convenience we will next omit the superscript k−1for these matricesand vectors, so A=A(k−1), etc.) We will now use those to construct A(k),a0(k), andB(k). We doso by adding a node to the first hidden layer. The weights on the incoming edges to this node willbe encoded by appending a row vector aT∈RdtoAand a scalar a0toa0, and the weights on theoutgoing edges will be encoded by appending a column vector b∈RmtoB. Our aim is to assignvalues to thesevectors and scalarto extend theembeddingto Sk.By induction we assume that any ̃x∈S1∪...∪Sk−1isembedded with no distortion to Rmby ̃u=B[A ̃x+a0]+,and that BAX =I. By monotonicity we further assume that Sk−1∩Skism−1dimensionaland there exists a hyperplane Hwith normal h∈Rdthat contains this intersection with C−(S1∪...∪Sk−1)lyingcompletelyonthesideof Hinthedirectionof h,while S1∪...∪Sk−1liesontheoppositesideof H. 
Wethenset a=handset a0sothataTˉx+a0= 0foranypoint ˉx∈Sk−1∩Sk.(This is well defined since his orthogonal to Sk−1∩Sk.)To determine b, we first rotate the bases X(k−1)(referred to as Xbelow) and X(k)by a com-mon,m×mmatrix R, i.e., Y=XRandY(k)=X(k)Rso that Y= [w,y2, ...,ym]andY(k)= [v,y2, ...,ym]withy2, ...,ymproviding an orthogonal basis parallel to Sk−1∩Sk. (Thisis equivalent to rotating the coordinate system in the embedded space and then pulling-back to themanifold.) Note that by the induction assumption BAY RT=I. We next aim to set bso thatB(k)A(k)X(k)=I. Wenote thatB(k)A(k)X(k)=B(k)A(k)Y(k)RT= (BA+baT)Y(k)RT.We aim to set bso that (BA+baT)Y(k)RT=I=BAY RT. Consider this equality first forthe common columns y2, ...,ymofYandY(k). These columns are parallel to Sk−1∩Sk, so thataTyj= 0for2≤j≤m, implying equality for any choice of b. Consider next the left-mostcolumn of YandY(k), denotedrespectively wandv, we get(BA+baT)v=BAw.This issatisfied if we setb=1aTvBA(w−v).We have constructed bso that the segments are embedded with consistent orientations. In Ap-pendixAwe show that they are also translated properly by a0, to create a continuous embedding.Note that by construction aTy+a0≤0for ally∈S1∪...∪Sk−1so RELU ensures that theembeddingof the these segments will not beaffected by the additional unit.Finally, we note that the proposed representation of monotonic chains with a neural network isvery efficient and uses only a few parameters beyond the degrees of freedom needed to define suchchains. In particular, the definition of a chain requires specifying mbasis vectors in Rdfor onelinear segment (exploiting orthonormality these require m(d−(m+ 1)/2)parameters), with eachadditional segment specified by a 1D direction for the new segment (a unit vector in Rdspecifiedbyd−m−1parameters) and a direction in the previous segment to be replaced (specified by aunit vector in Rm, i.e.m−1parameters). The total number of degrees of freedom of a chain istherefore N=m(d−(m+ 1)/2) + (K−1)(d−2). Thisisthenumberofparametersrequiredto4Published as aconferencepaper at ICLR 2017specify a monotonic chain. Our construction requires N/√rime= (K+m+ 1)(d+m+ 1)parameters.Specifically, note that for any choice of parameters K, d, m > 0,N≥(K+m−1)(d−m−2).Wetherefore obtain thatN/√rimeN≤/parenleftbigg1 +2K+m−1/parenrightbigg/parenleftbigg1 +2m+ 3d−m−2/parenrightbigg.Assuming d, K+m >> 1we getN/√rimeN/lessorapproxeql1 +2md−m.Since we normally expect that the dimension of the input space will be much greater than the di-mension of the manifold, this ratio will be close to 1.4 ERRORANALYSISWe now consider points that do not lie exactly on the monotonic chain, due to noise, or becausewe are approximating a non-linear manifold with piece-wise linear segments. Let p0be a point onthe segment Sjthat is then perturbed by some small noise vector, δ, that is perpendicular to Sj, toproduce the point p=p0+δ. Ideally, the network would represent pusing the coordinates of p0.In effect, the network would project all points onto the monotonic chain. If the network embeds pandp0withcoordinates ˆpandˆp0wedefinethe relativeerror oftheembeddingas/bardblˆp−ˆp0/bardblδ. Wenowanalyze this relative error. Our analysis assumes that /bardblδ/bardblis small enough that pandp0lie in thesameregionso thatthey are bothon thesame sideofall hyperplanesdefined by the hidden units.Wenotethatgivensufficientdatathatliesonthemanifold,itispossibletolearnlocallinearprojec-tions of the manifold that will embed it with zero relative error. 
This can be done with traditionalmanifold learning methods or by neural networks that contain a sufficiently large number of units.Zhang & Zha (2004) provides an error analysis that shows how the error of their approach dependson the noisiness and number of points in the training data, and the magnitude of the difference be-tween the manifold and its linear approximation. Our contribution here is to analyze the error thatcan occur whena networklearns the embeddingvery efficientlyusing a small number of units.InAppendix Bweshowthatintheworstcase,therelativeerroroftheembeddingcanbeunbounded.This occurs when the monotonic chain has very high curvature, so that a separating hyperplane hasto be nearly parallel to the segment that follows it. In this section we show that for more typicalcases, the relative error will bea smallconstant.We will consider a class of monotonic chains in which the total curvature between all segments isless than or equal to some angle T, and in each separating hyperplane is not too close to parallel tothe next segment. We denote the angle between Sk−1andSkasθk−1. (This angle is well definedsinceSk−1andSkintersect in an m−1-dimensional affine space.) As before, we will drop thesubscript when it is k−1, and just write θ. Specifically, we define θso that cosθ=vTw(wherevandware defined as in Sec. 3, as vectors perpendicular to Sk−1∩Sk, and parallel to Sk−1andSk, respectively), defining θksimilarly for any k. We then express our constraint on the curvatureas/summationtextK−1k=1|θk|≤T.Nowlet cbeaconstantsuchthatwecanbound aTv≥1/cforany k−1.cisaboundonthecosineof the angle between the normal to a separating hyperplane and a vector in the direction of the nextsegment. To understand this, recall that ais a unit vector normal to the hyperplane separating Sk−1andSk. Bysayingthisboundholdsforall k−1,wemeanthatweareabletochoosethehyperplanesthatdividethechainintosegmentssothattheanglebetweenthenormaltoeachhyperplaneandthefollowingsegment is nottoobig. We next bound theerror in termsof cand/bardblδ/bardbl.Letp=p0+δbe as in the last section. We define the embedding error of pbyE(p) =/parenleftbigB(k)A(k)−X(k)T/parenrightbigp, where X(k)denotes the orthogonal projection to Sk, as in Sec. 3. Not-ing that, by the construction of our network, B(k)A(k)p0=X(k)Tp0(sincep0is onSk) and thatX(k)Tδ= 0(duetotheorthonormalityof X(k)),weobtain E(p) =B(k)A(k)δ. Themagnitudeofthe error therefore is scaledat most by the maximal singular value of B(k)A(k),denoted σk.Tobound σkwenotethat B(k)A(k)=BA+baTfork≥2(where,asbefore,wedropsuperscriptssothat Bdenotes B(k−1)). Therefore, σk≤σk−1+|aTb|, where σk−1denotesthelargestsingular5Published as aconferencepaper at ICLR 2017Figure 3:This plot shows the error in flattening the Swiss Roll. Relativeerrorisconstantineverysegment,startingfromzeroforeachmonotonicchainand increasing with each segment. The absolute error (for display purposesthis error is normalized by the maximal distance from the Swiss Roll to itslinearapproximation)behavessimilarly,butvanishesattheendpointsofeachsegmentwhere theSwiss Roll and its linear approximation coincide.value of BA. Recall that/bardbla/bardbl= 1andb=1aTvBA(w−v). Note that w−v≤θk−1. Therefore,|aTb|≤cσk−1θk−1,from which we conclude that σk≤σk−1(1 +cθk−1).Finally, note that B(1)A(1)=X(1)T, implying that σ1= 1. We therefore obtain σk≤/producttextk−1j=1(1 +cθj). Note that/summationtextk−1j=1θj≤Tand so/producttextk−1j=1(1 +cθj)≤(1 +cTk−1)k−1. Therefore,σk≤/parenleftBig1 +cTk−1/parenrightBigk−1≤ecT. 
We conclude that /bardblE(p0+δ)/bardbl≤ecT/bardblδ/bardbl.Many segments of many monotonic chains can be divided using hyperplanes in which cis not toobig,andmaybeaslowas1. Forsuchmanifolds,whenapointisperturbedawayfromthemanifold,its coordinates will not be changed by more than the magnitude of the perturbation times a smallconstant factor. For example, if T=π/4andc= 1thenek≤eπ4≈2.19. Note that rather thanbeginningatthestartofthemonotonicchain,wecould”begin”inthemiddle,andworkourwayout.That is, provide an orthonormal basis for the middle segment and add hidden units to represent thechain from the central segment toward either ends of the chain. This can reduce the total curvaturefromthestartingpointtoeitherendbyuptohalf. Wefurtheremphasizethatthisboundisnottight.Weconcludethissectionbyshowingtheerrorobtainedinusingourconstructioninthe”SwissRoll”example. To represent this data we use hidden units and their corresponding hyperplanes to dividethe Roll into three monotonic chains (see Section 5below for further details). We then divide eachchain into segments, obtaining a total of 14 segments. Figure 1shows the points that are input intothenetwork,andthe2Drepresentationthatthenetworkoutputs. Thepointsarecolorcodedtoallowthereadertoidentifycorrespondingpoints. InFigure 3wefurtherplottheabsoluteandrelativeerrorin embedding every point of the Swiss Roll due to the linear approximation used by the network.One can see that the Swiss Roll is unrolled almost perfectly. In fact, despite the relatively largeangular extent of each monotonic chain (the three chains range between 126 to 166.5 degrees eachin total curvature), the relative error does not exceed 2.5. (In fact, our bound for this case is veryloose, amounting to 18.3 for 166.5◦.) The mean relative error is 0.98, indicating that the magnitudeof the error is approximately the same as the distance of points to the approximating monotonicchains.5 COMBINATIONS OFMONOTONICCHAINSTo handle non-monotonic chains and more general piecewise linear manifolds that can be flattenedwe show that we can use a network to divide the manifold into monotonic chains, embed eachof these separately, and then stitch these embeddings together. Suppose we wish to flatten a non-monotonic chain that can be divided into Lmonotonic chains, M1, M2, ...ML. Let Al,a0landBldenote the matrices and bias used to represent the hidden units that flatten Ml, which has Klsegments. We suppose that a set of Jlhyperplanes (that is, a convex polytope) can be found thatseparate Mlfrom the other chains. Let Nldenote a matrix in which the rows represent the normalsto these hyperplanes, oriented to point away from Ml. We can concatenate these vertically, lettingA/√rimel= [Al;Nl].We next let Υ =−n1m×Jlwhere1m×Jldenotes an m×Jlmatrix containing allonesand nisaverylargeconstant. Notethat Blhasmrows. Sowecandefine B/√rimel= [Bl,Υ],wherethe matricesare concatenated horizontally.Wenownotethatif u=B/√rimel[A/√rimelx+a0l]+then when xlieson Ml,uwillcontainthecoordinatesofxembedded inRm, as before. When xlies on a different monotonic chain, uwill be a vector withvery smallnegativenumbers. Applying RELU will thereforeeliminate these numbers.A/√rimelandB/√rimelthereforerepresentamoduleconsistingofatwolayernetworkthatembedsonemonotonicchain inRmwhile producing zero for other chains. We can then stitch these values together. First,6Published as aconferencepaper at ICLR 2017we must rotate and translate each embedded chain so that each chain picks up where the previousone left off. 
Let Rldenote the rotation of each chain, and let b0ldenote its appropriate translation.Then,foreachchain, theappropriate coordinatesareproducedby[RlB/√rimel[A/√rimelx+a0l]++b0l]+.We can now concatenate these for all chains to produce the final network. We let A,a0andb0bethe vertical concatenation of all A/√rimelanda0landb0lrespectively, and let Bbe the block-diagonalconcatenation of all RlB/√rimel. The application of [B[Ax+a0]++b0]+tox∈Mlwill produce avectorwith mLentriesinwhichthe m(l−1)+1, ..., mlentriesgivetheembeddedcoordinatesof xand the rest of the entries are zero. We can now construct a third layer of the network to then stitchthesemonotonicchainstogether. Let Cdenoteamatrixofsize m×mLobtainedbyconcatenatinghorizontally Lidentity matrices of size m×m. Then theoutput of the network is:u=C[B[Ax+a0]++b0]+.Note, for example, that the first element of uis the sum of the first coordinates produced by eachmoduleinthefirsttwolayers. Eachofthesemodulesproducestheappropriatecoordinatesforpointsin onemonotonic chain, whileproducing0 for points in all othermonotonic chains.We note that this summation may result in wrong values if there is overlap between the regions(which will generally be of zero measure). This can be rectified by replacing the summation duetoCby max pooling, which allows overlap of any size. Together, all three layers will require/parenleftbig/summationtextLl=1Jl+m+Kl−1/parenrightbig+ (L+ 1)munits. If the network is fully connected, this requires/parenleftbig/summationtextLl=1Jl+m+Kl−1/parenrightbig(d+Lm) +Lm2weights.Note that the size of this network depends on how many regions are required ( L) and how manyhyperplanes each region needs to separate it from the rest of the manifold ( Ll). In the worst case,this can be quite large. Consider, for example, a 1D manifold that is a polyline that passes througheverypointwithintegercoordinatesin Rd. Toseparateanyportionofthispolylinefromtherestwillrequire regions that are not unbounded, and so Ll=O(d)for all l. We expect that many manifoldscanbedividedappropriatelyusingmanyfewerhyperplanes. Wehaveshownthisfortheexampleofa Swiss rolls (Figure 1).6 EXPERIMENTSUptothispointwehavetheoreticallyanalyzedtherepresentationalcapacityofadeepnetwork. Ourprimary result is to show that data lying on a monotonic chain can be efficiently flattened by a net-workwithtwohiddenlayers,using m+k−1hiddenunitsinthefirstlayer,and munitsinthesecondlayer. An important question is whether real networks trained with stochastic gradient descent canuncover such efficient representations. Inthissectionwe address that question experimentally.We do not expect that a trained network will always produce the constructions developed in thispaper. First,we note that our constructions provide an upper bound; more efficient representationspossible. Sowepredictthat m+k−1orfewerhiddenunitsareneeded. Second,atrainednetworkmay settle in a local minimum, and not produce an efficient embedding, even though one might bepossible. To determine whether a particular architecture can produce a good embedding, we trainnetworks with multiplerandom starting points, and select the solutions that producevery low error.Figure4:Thisgraphshowserrorintheembeddingproducedbyatrainednetwork. Each curve represents a manifold of different dimension, witha different number of segments. Each curve shows how error in the em-bedding on validation points changes as the number of hidden units in-creases. Stars indicate the validation error at the point of each curve inwhich h=m+k−1. 
As our theory predicts, the error has reached anasymptote closeto zero at these points.Todeterminethenumberofhiddenunitsneededtocreateeffectiveembeddings,wegeneratedataonmonotonicchainsinwhichwevarythedimensionofthemanifold, m,andthenumberofsegments,7Published as aconferencepaper at ICLR 2017k. Anexampleinwhich m= 2andk= 7isshowninFigure 5. Notethatthereissomeskewinthechain, so that none of the dimensions can be trivially embedded by a single linear projection. Wesample 40,000 points on the manifold. We then train a regressor, with a varying number of hiddenunits, using the squared difference between the ground truth distance between pairs of embeddedpoints and the distance computed by the network as a loss function. This simulates non-linearmetric learning. For each condition, we repeat training 15 times, and report the minimum error inthe objective (see Figure 4). We can see that for each curve the error has dropped to an asymptotenearzero when h=m+k−1, justas our theory predicts.In Figure 5we show a typical example produced for a 2D manifold with seven segments, shownin a 3D space. Portions of hyperplanes correspond to six hidden units. This solution resemblesour constructions in several ways. One hyperplane is active over the entire chain, while the otherhyperplanes intersect the manifold at the intersection of consecutive segments. The solution differsfromourconstructioninthatsomehyperplanesareusedtohandletwosegmentsofthemanifold;itisevenmoreefficientthanourconstruction. Andtwohyperplanes,atthetop,intersectthemanifoldinthe same location. These hidden units have weights with opposite signs, producing positive outputsfor different segments. For reasons of space and simplicity, we do not discuss these constructionstheoretically, but it is straightforward to show that they can also produce efficient embeddings.Figure 5:A network trained with m= 2, k= 7, h= 6. Left: Colored dotsrepresent points on the manifold. Their ground truth coordinates are encodedby the size and hue of the dots. Colored rectangles represent the hyperplanesassociated with the six hidden units. Right: We show the labels generated foreachpoint,in2D.Pointsarecoloredtoindicatetheirsegment. Theembeddingisnear perfect.We perform a final experiment to get a sense of whether such embeddings can occur with morerealistic data. We generate images of a face with azimuth ranging from 0 to 50 degrees, and withelevationrangingfrom0to8degrees. Asalossfunction,weuseanL2normbetweenthe moutputunits and the true azimuth and elevation. Because the images have many pixels, and the amountof training data is limited, a fully connected network would overfit the data if we use each pixelas an input dimension. Consequently, we perform PCA before training to reduce the faces to a 3Dspace, which also allows us to visualize the input and resulting network (see Figure 6). We cansee that the data forms an approximately 2D manifold, but that it is much messier than with ourprevious, synthetic data. The resulting embedding captures the azimuth and elevation reasonablywell,butwithsomenoise(eg.,itdoesnotformaperfectgrid). Wecanalsoseethatthehyperplanesassociatedwiththefirsthiddenlayerofthenetworkalsoresembleourconstruction,withindividualunitsperiodicallyintersecting the manifold asit curves.Figure 6:We train a regression network to learn the azimuth and elevation of face images. Left: the faceimages projected to 3D and the hyperplanes learned in the first network layer. Dot size encodes elevation andhue encodesazimuth. 
Right: We show the images embedded in a 2D space by the trained network.7 DISCUSSIONWe show that deep networks can represent data that lies on a low-dimensional manifold with greatefficiency. Inparticular,whenusingamonotonicchaintoapproximatesomecomponentofthedata,the addition of only a single neural unit can produce a new linear segment to approximate a regionofthedata. Thissuggeststhatdeepnetworksmaybeveryeffectivedevicesforsuchdimensionalityreduction. It also may suggest new architectures for deep networks that encourage this type ofdimensionality reduction.8Published as aconferencepaper at ICLR 2017We also feel that our work makes a larger point about the nature of deep networks. It has beenshown by Montufar et al. (2014) that a deep network can divide the input space into a large numberofregionsinwhichthenetworkcomputespiecewiselinearfunctions. Indeed,thenumberofregionscan be exponential in the number of parameters of the network. While this suggests a source ofgreat power, it also suggests that there are very strong constraints on the set of regions that can beconstructed, and the set of functions that can be computed. Our work shows one way in which asingle hidden unit can control the variation in the linear function that a network computes in twoneighboring regions; it can shape this function to follow amanifoldthatcontains thedata.ACKNOWLEDGEMENTSThis research is based upon work supported by the Office of the Director of National Intelligence(ODNI),IntelligenceAdvancedResearchProjectsActivity(IARPA),viaIARPAR&DContractNo.2014-14071600012. Theviewsandconclusionscontainedhereinarethoseoftheauthorsandshouldnot be interpreted as necessarily representing the official policies or endorsements, either expressedor implied, of the ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized toreproduceanddistributereprintsforGovernmentalpurposesnotwithstandinganycopyrightannota-tionthereon.This research is also based upon work supported by the Israel Binational Science Foundation GrantNo. 2010331 andIsraelScience Foundation GrantsNo. 1265/14.Theauthors thank Angjoo Kanazawa and ShaharKovalsky for their helpful comments.A CONTINUITY OF EMBEDDINGInSection3ofourpaperwedefinedtheweightmatrices A(k)andB(k)andthebiasvector a0(k)thatmapaninputvector xtoitsgeodesiccoordinatesonthemanifold. Weshowedthatthisconstructionindeed maps points on Skto their geodesic coordinates, so that this coordinate system is consistentin orientation with the coordinates assigned to the previous segments S1, ...,Sk−1. It is now left toshow that the bias a0(k)is chosenproperly to create a continuous embedding.Consider a point x∈Sk. Denote by ˉxits projection onto Sk−1∩Sk, so that x=ˉx+βvfor ascalar β. Denotingthe embedded coordinates of xbyu,u=B(k)(A(k)x+a0(k)).Wewantto verify thatas βtends to 0 uwill coincide with the embeddingof ˉxdue to Sk−1,i.e.,ˉu=B(Aˉx+a0).In our construction, B(k)is obtained from Bby appending the column vector bto its right side,andA(k)is obtained from Aby appending the row vector aTto its bottom, so that B(k)A(k)=BA+baT. Recall further that a0(k)is obtained from a0by appending the scalar a0at its end. 
Wetherefore obtainu= (BA+baT)x+Ba0+a0b.Replacing x=ˉx+βvweobtainu= (BA+baT)ˉx+β(BA+baT)v+Ba0+a0b.Sincea=h,aTˉx+ao= 0and we getu=B(Aˉx+a0) +β(BA+baT)v,which coincides with ˉuwhenβ→0, implying that the embedding is extended continuously to Sk.Note that by construction aTy+a0≤0for ally∈S1∪...∪Sk−1so RELU ensures that theembeddingof these segmentswill not be affectedby the additional unit.9Published as aconferencepaper at ICLR 2017B WORST-CASE ERRORInthissectionweshowthattheerrorobtainedwhileembeddingnoisypointsusingourconstructioncan in principle be unbounded. As we show below, this happens when we are forced to choosehyperplanes that are almost parallel to the segments they represent. In contrast, Section 4.1 of ourpaper shows that wecan boundtheerror inmany reasonable scenarios.To show that the error can be unbounded, we consider a simple case in which the piecewise linearmanifoldconsistsofthreeconnected1Dlinesegments, S1, S2andS3,with2Dverticesrespectivelyof(0,0)and(N,0),(N,0)and(N, /epsilon), and(N, /epsilon)and(0, /epsilon).Nis very large, and /epsilonis very small(see Figure 7). Since three segments compose a 1D manifold, three hidden units defining threehyperplanes, H1, H2andH3(lines) will be needed to represent the manifold. In addition, a singleoutputunitwillsumtheresultsoftheseunitstoproducethegeodesicdistancefromtheorigintoanypoint onthethree segments.Figure 7:In black, we show a 1D monotonic chain with three segments. In red, we show three hidden unitsthat flatten this chain into a line. Note that each hidden unit corresponds to a hyperplane (in this case, a line)that separates the segments into two connected components. The third hyperplane must be almost parallel tothe third segment. This leadsto large errors for noisy points near S3.UsingourconstructioninSection3ofthepaperwegettheembedding f(p) =B[Ap+a0]+withB=/parenleftbigg1,1q2,−1r1/parenleftbigg2 +q1q2/parenrightbigg/parenrightbigg, A=/parenleftBigg1 0q1q2r1r2/parenrightBigg,a0=/parenleftBigg0q3r3/parenrightBigg.Note that the first row of Auses the standard orthogonal projection (x, y)→x; the two other rowsofAanda0separate the three segments with (1) q1, q2>0andq1/q2≤/epsilon/Nandq3=−q1Nset so that the separator H2goes through (N,0), and (2) r1<0,r2>0andr1/r2≥−/epsilon/N, andr3=−r1N−r2/epsilonset so that the separator H3goes through (N, /epsilon). It can be easily verified that inthis setup points on the first segment (x,0),0≤x≤Nare mapped to x, points (N, y),0≤y≤/epsilononthesecondsegmentaremappedto N+y,andpoints (x, /epsilon),0≤x≤Nonthethirdsegmentaremappedto N+/epsilon+ (N−x).Ideally, we would want pto be embedded to the same point as p0. LetE(p) =f(p)−f(p0).Clearly E(p) =B(k)A(k)δ. It can be readily verified that, under these conditions, when p0∈S1thenE(p) = 0; when p0∈S2thenE(p) = (1 + q1/q2)δ, and when p0∈S3thenE(p) =(1−(r2/r1)(2 + q2/q1))δ. Therefore, there is no error in embedding pforp0∈S1. The error inembedding pwithp0∈S2is small and bounded (since q1/q2≤/epsilon/N, assuming /epsilonis small and Nis large), while the error in embedding pwhenp0∈S3can be huge since−r2/r1≥N//epsilon. In thenext section we show that this can only happen when there is a large angle between a segment andthe normalto the previous separating hyperplane.C CLASSIFICATIONInexperimentsinthebodyofthispaperwehavedemonstratedthatthetheoreticalconstructionsthatweanalyzecanarisewhennetworksaretrainedtosolveregressionproblemsthatmappointsonthemanifold to their low-dimensional embeddings. 
An interesting question is whether similar embed-dings may be learned by a network that is trained to classify points that lie on a low-dimensionalmanifoldwhenitismoreefficienttorepresenttheboundariesoftheseclassesintheembeddedspacethan it is in the ambient space. In this Appendix, we describe some very preliminary experimentsthataddress this question.10Published as aconferencepaper at ICLR 2017First we note that the embeddings that arise in solving classification problems may be much lessconstrained and therefore more complex than those that arise in regression problems. The regres-sionlossfunctiondirectsthenetworktolearntheknown,groundtruthcoordinatesoftheembeddedmanifold. Only an isometric unfolding of the manifold will satisfy this condition. While this iso-metricembeddingwillfacilitateclassificationaswell,theremaybemanynon-isometricunfoldingsthatwill beequally useful inclassification.Asasimpleexampleofthis,supposeamonotonicchaincontainstwoclassesthatarelinearlysepara-ble,oncethechainisisometricallyembeddedinalow-dimensionalspace. Ifinsteadofanisometricembedding,weallowarelatedembeddinginwhicheachsegmentofthechainundergoesadifferentlinear transformation that stretches it in the direction of the linear separator, or orthogonal to theseparator, theclasseswill still belinearly separable in thetransformed,non-isometric embedding.As another example, no mapping of the manifold to a low-dimensional space will allow for correctclassification if it maps two points from different classes to the same point in the low-dimensionalspace. However, classification may not be affected if two points from the same class are mappedto the same point. So when points from only one class appear near the boundary between twosegments, a network may learn a mapping in which the points from two segments overlap in thelow-dimensionalspace.Itisanopenandrathercomplexproblemtodeterminewhichmappingsoftheinputtolow-dimensionmay be suitable for classification of a particular set of labeled points. However, we stress that themainpointofourpaperistoshowthatwhenisometricembeddingscanbeusedtosolveaproblem,a deep network can efficiently represent such embeddings. It is certainly possible that the networkcan alsoefficiently findalternate embeddings thatare equallyuseful.Bearing this in mind, we have designed some simple classification tasks and examined the em-beddings that they give rise to in a neural network. We stress that these experiments are quitepreliminary,and should be taken as intriguingexamples thatcanhelpmotivate future work.In our experiments we created monotonic chains with seven segments, similar to those used in ourearlier experiments. We generate 20,000 points that lie on each chain. To label these points withclasses,weunfoldedthechainandintersecteditwithseverallines,varyingthenumber. Theselinesform an arrangement on the 2D unfolded manifold; we labeled each region of the arrangement,which is a convex polygon, as a separate class. We did this randomly, selecting arrangements inwhich classes tended to span multiple segments.We then trained a network to perform classification. After the input layer, the next layer containedbetween five and eight hidden units. This was followed by a layer containing two hidden units.This was followed by another layer with 10-30 units, and an output layer with a unit for each class.Relu was used between layers, with softmax for the loss function. The layer containing two unitsessentially represents a two-dimensional embedding of the input. 
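The paper gives no code for this classifier, but the architecture just described is small enough to sketch; the snippet below is an illustrative PyTorch version, with the first and third hidden widths (here 6 and 20) chosen arbitrarily from the reported 5-8 and 10-30 ranges.

```python
import torch.nn as nn

def classification_net(d_in, n_classes, h1=6, h3=20):
    """Input -> h1 ReLU units -> 2-unit embedding layer -> h3 ReLU units -> class scores.
    The 2-unit layer's activations are read out as the learned 2D embedding;
    softmax is applied implicitly by nn.CrossEntropyLoss during training."""
    return nn.Sequential(
        nn.Linear(d_in, h1), nn.ReLU(),
        nn.Linear(h1, 2), nn.ReLU(),      # 2D embedding layer
        nn.Linear(2, h3), nn.ReLU(),
        nn.Linear(h3, n_classes),
    )
```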
The previous layer could be usedto represent the constructions developed in this paper, while the subsequent layer can be used toclassify the data in the low-dimensional space. This architecture allows us to easily extract theembeddingthatthe network has learned.Figure8shows a typical example of the results. On the left we plot the input points, color coded toindicatetheirclass. Ontheright,weploteachpointatitsembeddedlocation,colorcodedtoindicatetowhichsegmentitbelongs. Theembeddingpreservestheorderandcontinuityofthesegments. Inseveralcaseseachsegmenthasbeenapproximatelytransformedbyadifferentlineartransformation.In the case of the red and green colored segments on the right, there is some overlap. Looking attheleft-handfigurewecanseethatinthiscase,pointsneartheboundarybetweenthetwosegmentsbelong to the same class. So this folding over of the segments in the embedding does not interferewith the network’sability tocorrectly classify the points.In general, this embedding meets our expectations, showing that the monotonic chain can be veryefficiently mapped to a low-dimensional space using very few units, in a way that enables accurateclassification. It would be interesting in future work to determine the class of mappings that can beinstantiated efficiently by a network, and to understand how these relate to different classificationproblems. It would also be interesting to design classification problems that can only be solvedusing isometric embeddings, and to determine whether these embeddings can be found by neuralnetworks.11Published as aconferencepaper at ICLR 2017Figure8:Wetrainanetworkonaclassificationprobleminwhichthepointslieonalow-dimensionalmanifold.We show the points on the left, color coded to indicate their class. We then extract the embedding learned bythe network. Here we show the input mapped to this embedding, with points colored to indicate which of theseven segments of the monotonic chain they lie on.D DEEPERNETWORKSWe also note that the previously developed constructions can be applied recursively, producing adeeper network that progressively approximates data using linear subspaces of decreasing dimen-sion. That is, we may first divide the data into a set of segments that each lie in a low dimensionalsubspace whose dimension is higher than the intrinsic dimension of the data. Then we may subdi-vide each segment into a set of subsegments of lower dimension, using a similar construction, anddeeperlayersofthenetwork. Thesesubsegmentsmayrepresenttheoriginaldata,ortheybefurthersubdivided by additionallayers, until we ultimately produce subsegmentsthatrepresentthe data.We first illustrate this hierarchical approach with a simple example that requires only one extralayer in the hierarchy. Consider a monotonic chain of K,m2-dimensional linear segments thatcollectively lie in a m1-dimensional linear subspace, L, of ad-dimensional space, with m2< m1.Wecanconstructthefirsthiddenlayerwith m1unitsthatareactiveovertheentiremonotonicchain,sothattheirgradientdirectionsformanorthonormalbasisfor L. Theoutputofthislayerwillcontainthe coordinates inLof points on the monotonic chain. These can form the input to two layers thatthen flattenthe chain, as described in Section 3.In Section 3we had already shown how to flatten the manifold with two layers that take their inputdirectly from the input space. Here we accomplish the same end with an extra layer. However, thisconstruction,whileusingmorelayers,mayalsousefewerparameters. TheconstructioninSection 3required d(m2+K−1)parameters. 
Ournewconstructionwillrequire dm1+m1(m2+K−1)pa-rameters. Notethatas Kincreases,thenumberofparametersusedinthefirstconstructionincreasesin proportion to d, while in the second construction the parameters increase only in proportion tom1. Consequently, the second construction can be much more economical when Kis large and m1issmall.In much the same way, we could represent a manifold using a hierarchy of chains. The first layerscanmapa m1-dimensionalchaintoalinear m1-dimensionaloutputspace. Thenextlayerscanselectanm2-dimensional chain that lies in this m1-dimensional space, and map it to an m2-dimensionalspace. Thisprocesscanrepeatindefinitely,butwhetheritiseconomicalwilldependonthestructureof the manifold.REFERENCESJ. Ba and R. Caruana. Do deepnets really need to be deep? In NIPS,pp. 2654–2662,2014.R. Basri and D. W. Jacobs. Lambertianreflectance and linear subspaces. PAMI,25(2):218–233, 2003.M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neuralcomputation , 15(6):1373–1396, 2003.12Published as aconferencepaper at ICLR 2017M.BianchiniandF.Scarselli. Onthecomplexityofneuralnetworkclassifiers: Acomparisonbetweenshallowand deep architectures. IEEE Trans. on Neural Networksand LearningSystems , 25(8), 2014.P. P. Brahma, D. Wu, and Y. She. Whydeep learning works: Amanifold disentanglementperspective. 2015.S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to faceverification. In CVPR, 2005.C. K. Chui and H. N.Mhaskar. Deep nets for localmanifold learning. ArXiv preprint:1607.07110 ,2016.N. Cohen, O. Sharir,and A. Shashua. On the expressive power ofdeep learning: A tensoranalysis, 2015.G. Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of control, signals andsystems, 2(4):303–314,1989.O. Delalleauand Y. Bengio. Shallow vs. deep sum-product networks. In NIPS,pp. 666674, 2011.R. O. Duda, P. E. Hart,and D. G. Stork. Pattern classification . John Wiley & Sons, 2012.R. Eldan and O. Shamir. The power of depth for feedforward neural networks. ArXiv preprint: 1512.03965 ,2015.R. Giryes, G. Sapiro, and A. M. Bronstein. Deep neural networks with random gaussian weights: A universalclassification strategy? ArXiv preprint: 1504.08291 , 2016.R. Huang, F. Lang, and C. Shu. Nonlinear metric learning with deep convolutional neural network for faceverification. In J. et al. Yang (ed.), Biometric Recognition , volume 9428 of Lecture Notes in ComputerScience, pp.78–87.Springer, 2015.K. C. Lee, J. Ho, M. H. Yang, and D. Kriegman. Video-based face recognition using probabilistic appearancemanifolds. In CVPR, volume 1, pp. I–313. IEEE,2003.H. Mobahi, J. Weston, and R. Collobert. Deep learning from temporalcoherence in video. In ICML,2009.G. F. Montufar, R. Pascanu, K. Cho, and Y. Bengio. On the number of linear regions of deep neural networks.InNIPS, pp. 2924–2932,2014.B.Nadler,S.Lafon,R.R.Coifman,andI.G.Kevrekidis.Diffusionmaps,spectralclusteringandeigenfunctionsoffokker-planck operators. In NIPS, volume 18, 2005.S. Chopra R. Hadsell and Y. LeCun. Dimensionality reduction by learning an invariant mapping. In CVPR,2006.S. Rifai, Y. N. Dauphin, P. Vincent, Y. Bengio, and X. Muller. The manifold tangent classifier. In NIPS, pp.2294–2302, 2011.S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.R.SalakhutdinovandG.Hinton. Learninganonlinearembeddingbypreservingclassneighbourhoodstructure.InAISTATS, 2007.U. Shaham, A. 
Cloninger, and R. R. Coifman. Provable approximation properties for deep neural networks.ArXiv preprint: 1509.07385 ,2015.M.Telgarsky. Representationbenefits of deepfeedforward networks. ArXiv preprint: 1509.08101 ,2015.J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionalityreduction. Science, 290:23192323, 2000.M.Turk andA. Pentland. Eigenfaces for recognition. Journalof cognitive neuroscience , 3(1):71–86,1991.J. Weston,F.Ratle, and R. Collobert. Deep learningvia semi-supervised embedding. In ICML, 2008.D. Yi, Z. Lei, S. Liao, and S. Z.Li. Deep metric learningfor personre-identification. In ICPR, 2014.Zhen-yue Zhang and Hong-yuan Zha. Principal manifolds and nonlinear dimensionality reduction via tangentspace alignment. Journal of Shanghai University(English Edition) , 8(4):406–424, 2004.13